Cybersecurity professionals have an urgent duty to secure AI tools, ensuring that these technologies are only used for social good, was the strong message from the RSA 2024 conference.
AI brings enormous promise to the real world, including helping to diagnose health problems more quickly and accurately.
However, with the pace of innovation and adoption of AI accelerating at an unprecedented rate, security safeguards must be put in place as early as possible to ensure they meet their huge promises, as many people have called for.
This should be done keeping in mind concepts such as privacy and fairness.
“We have a responsibility to create a safe and secure exploration space,” emphasized Vasu Jakkal, vice president of security, compliance, identity and management at Microsoft.
Separately, Dan Hendrycks, founder of the Center for AI Safety, said AI carries a tremendous amount of risk, both societal and technical, given its growing influence and potential in the physical world.
“This is a broader socio-technical problem than just a technical problem,” he said.
Bruce Schneier, security technologist, researcher and lecturer at the Harvard Kennedy School, added: “Security is now our security, and that’s why we need to think about these things more broadly. »
Threats to AI integrity
Employees are using publicly available generative AI tools, such as ChatGPT, for their work, a phenomenon that Dan Lohrmann, CISO at Presidio, calls “Bring Your Own AI.”
Mike Aiello, Chief Technology Officer at Secureworks, said Information security that he sees an analogy with the appearance of the first subscription software services (SaaS), which led many company employees to create subscriptions.
“Organizations are seeing the same thing with the use of AI, like signing up for ChatGPT, and it’s a little uncontrolled in the enterprise,” he noted.
This trend raises many security and privacy concerns for businesses, such as entering sensitive corporate data into these models, which could make this information publicly available.
Other problems threaten the integrity of AI tool results. These include data poisoning, in which the behavior of models is altered accidentally or intentionally by changing the data they are trained on, and rapid injection attacks, in which AI models manipulate to perform involuntary actions.
Such problems threaten to undermine trust in AI technologies, causing problems such as hallucinations and even bias and discrimination. This in turn could limit their use and potential to solve major societal problems.
AI is a governance issue
Experts speaking at the RSA conference advocated that organizations treat AI tools like any other application they need to secure.
Heather Adkins, vice president of security engineering at Google, noted that AI systems are essentially the same as other applications, with inputs and outputs.
“Many of the techniques we have developed over the last 30 years as an industry also apply here,” she commented.
According to Jakkal, at the heart of securing AI systems is a robust risk management governance system. To do this, she defined the three pillars of Microsoft:
- Discover: Understand what AI tools are used in your environment and how employees use them
- Protect: Mitigate risks in all systems you have and implement
- Governance : Compliance with regulatory policies and code of conduct, and training staff on the safe use of AI tools
Lohrmann emphasized that the first step organizations need to take is visibility of AI within their workforce. “You have to know what’s going on before you can do something,” he said. Information security.
Secureworks’ Aiello also advocated keeping humans in the loop when handing off work to AI models. Although the company uses data analysis tools, its analysts will check that data and provide feedback when problems such as hallucinations arise, he explained.
Conclusion
We are in the early stages of understanding the true impact that AI can have on society. For this potential to be realized, these systems must be supported by strong security, otherwise they will face limits or even bans across organizations and countries.
Organizations are still grappling with the explosion of generative AI tools in the workplace and must act quickly to develop policies and tools that can safely manage this use.
The cybersecurity industry’s current approach to this issue is likely to strongly influence the future role of AI.