If you’re wondering whether new generative artificial intelligence (gen AI) tools are putting your business at risk, the answer is: probably. And the risk only grows as AI tools become more common in the workplace.
A recent Deloitte study found that more than 60% of knowledge workers use AI tools at work. Although these tools bring many benefits, including improved productivity, experts agree that they also add risk. According to NSA Cybersecurity Director Dave Luber, AI brings unprecedented opportunities while also presenting opportunities for malicious activity. Many common tools lack important defenses and protections.
The risk is already on the radar for many organizations. According to the IBM Institute for Business Value, 96% of leaders say that adopting generative AI makes a security breach in their organization likely within the next three years. Additionally, 47% of executives fear that adopting generative AI in operations will lead to new types of attacks targeting their own AI models, data or services.
What are the cybersecurity risks associated with using gen AI tools?
Earlier this year, the NSA Artificial Intelligence Security Center (AISC) published a Cybersecurity Information Sheet (CSI) on best practices for deploying secure and resilient AI systems, helping organizations understand the risks and adopt practices that reduce vulnerabilities. For the CSI, the NSA partnered with the FBI, CISA, the Australian Signals Directorate's Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom's National Cyber Security Centre (NCSC-UK).
“Malicious actors targeting AI systems can use attack vectors unique to AI systems, as well as standard techniques used against traditional IT. Due to the wide variety of attack vectors, defenses must be diverse and comprehensive. Advanced threat actors often combine multiple vectors to execute more complex operations. Such combinations can penetrate layered defenses more effectively,” the CSI states.
Here are common ways generative AI tools increase cybersecurity risks:
- More targeted social engineering threats: Because generative AI tools save the data entered into them, malicious actors can use that data to design realistic social engineering attacks. By crafting prompts that extract data stored for training purposes, cybercriminals can quickly produce phishing emails that are more likely to succeed. To reduce this risk, companies should disable features that use entered data for training or consider using proprietary tools.
- A larger attack surface for insider threats: Although proprietary systems reduce some risks, they also make it easier for insiders to leak data because of the larger data surface. Additionally, insiders who know how logging and monitoring work on proprietary systems, which are typically less robust than commercial products, can more easily circumvent audit trails.
- Data leakage via chatbots: Many companies use generative AI to build chatbots for both internal and external use. However, these tools can be compromised and then used to leak sensitive data, including proprietary secrets or corporate financial data. One simple guardrail against this is sketched below.
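As one illustration of the chatbot risk above, the minimal Python sketch below redacts a few sensitive patterns from a reply before it leaves the system. The patterns, function names and placeholder token are assumptions for illustration only; a real deployment would rely on a dedicated data loss prevention (DLP) or PII-detection service rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real systems need far more robust detection.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US Social Security number format
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # possible payment card number
]

def redact(reply: str) -> str:
    """Replace sensitive matches with a placeholder before returning the reply."""
    for pattern in SENSITIVE_PATTERNS:
        reply = pattern.sub("[REDACTED]", reply)
    return reply

if __name__ == "__main__":
    print(redact("Sure! The customer's SSN is 123-45-6789."))
    # Sure! The customer's SSN is [REDACTED].
```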
How can organizations reduce their risks?
Because generative AI is a powerful tool that can bring significant benefits, organizations should focus on reducing its risks rather than eliminating its use.
Here are some best practices from the NSA CSI:
- Validate the AI system before and during use. Consider using one of the many methods available, such as cryptographic hashes, digital signatures, or checksums, to confirm the origin and integrity of each artifact and guard against tampering (see the checksum sketch after this list).
- Ensure a robust deployment environment architecture. Establish security protections for the boundaries between the IT environment and the AI system. Also identify and protect any proprietary data sources the organization will use to train or fine-tune the AI model. Other areas of focus should include addressing blind spots in boundary protections and any other security gaps in the AI system identified through threat modeling.
- Secure exposed APIs. Protect exposed application programming interfaces (APIs) by implementing authentication and authorization mechanisms for API access; a minimal example follows this list.
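To make the first recommendation concrete, below is a minimal Python sketch of checksum-based artifact validation. The file name and expected digest are placeholders; in practice the reference digest would come from a trusted source, such as a signed manifest published with the approved model.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a model artifact (weights, dataset, config)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the artifact matches the digest recorded at approval time."""
    return sha256_of(path) == expected_sha256

if __name__ == "__main__":
    # Placeholders: use the real artifact path and its published digest.
    expected = "0" * 64
    if not verify_artifact("model.bin", expected):
        raise SystemExit("model.bin failed its integrity check; do not deploy")
```

Note that a checksum alone confirms integrity; confirming origin, as the CSI also suggests, additionally requires a digital signature over that digest.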
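And for the API recommendation, here is a minimal sketch of authenticating callers to a model-serving endpoint using FastAPI. The endpoint path, header name and key store are assumptions for illustration; production systems would typically use OAuth 2.0 or signed tokens and keep credentials in a secrets manager.

```python
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

# Illustrative only: real keys belong in a secrets manager, never in source code.
VALID_API_KEYS = {"example-key-123"}

@app.post("/v1/generate")
def generate(prompt: str, x_api_key: str = Header(default="")):
    # Reject unauthenticated callers before the request ever reaches the model.
    if x_api_key not in VALID_API_KEYS:
        raise HTTPException(status_code=401, detail="invalid or missing API key")
    # A real handler would forward the prompt to the model here.
    return {"completion": f"(model output for: {prompt})"}
```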
As generative AI continues to expand, both in features and use cases, organizations should closely monitor cybersecurity trends and best practices. By proactively taking precautions, organizations can capture productivity gains while keeping risk in check.