Microsoft Research: AI Systems Cannot Be Made Fully Secure


Microsoft researchers who tested more than 100 of the company’s AI products concluded that AI systems can never be made fully secure, according to a new preprint paper. The 26-author study, whose contributors include Azure CTO Mark Russinovich, found that large language models amplify existing security risks and introduce new vulnerabilities of their own. While defensive measures can raise the cost of attacks, the researchers warned that AI systems will remain vulnerable to threats ranging from sophisticated gradient-based attacks to simpler techniques such as manipulating user interfaces for phishing.
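The "gradient-based attacks" mentioned above refer to techniques such as the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that increases the model's loss. A minimal sketch on a toy linear classifier follows; the weights, inputs, and epsilon value are illustrative assumptions, not details from the Microsoft paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": a binary logistic classifier with fixed weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(x @ w + b)

def input_gradient(x, y):
    # Gradient of the cross-entropy loss w.r.t. the INPUT
    # (for logistic regression this is (p - y) * w).
    p = predict(x)
    return (p - y) * w

def fgsm(x, y, eps=0.25):
    # Fast Gradient Sign Method: step each input feature by eps
    # in the direction that increases the loss.
    return x + eps * np.sign(input_gradient(x, y))

x = np.array([0.2, 0.4, -0.1])
y = 1.0  # true label
x_adv = fgsm(x, y)
print(predict(x), predict(x_adv))  # the adversarial input lowers confidence in y
```

Against production models the attacker needs gradient access (or a surrogate model), which is why the paper groups such attacks separately from cheaper interface-level tricks.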