Brief
In a recent article titled "Cybersecurity of General AI and LLMs: Current Issues and Concerns", the Cyber Security Agency of Singapore (CSA) provides expert advice on the security and privacy challenges associated with generative artificial intelligence (Gen-AI) and large language models (LLMs). The article outlines issues such as accidental data leaks, vulnerabilities in AI-generated code, and the potential misuse of AI by malicious actors, before providing recommendations on steps tech companies can take to address these concerns.
The rapid growth of Gen-AI and LLMs has raised significant security and privacy concerns, and the key issues highlighted by the CSA include:
- Accidental data leaks: Gen-AI systems, especially LLMs, are prone to accidental data leaks, which can occur due to overfitting or inadequate data sanitization. Sensitive information can be exposed, for example, when employees use ChatGPT for coding tasks. The increasing integration of AI into personal devices also raises the risk of accidental data transfer to the cloud.
- Risks associated with AI-generated code: The use of AI in coding increases cybersecurity risks: without review, AI-generated code can contain undetected security vulnerabilities. Human supervision remains essential to mitigate these risks.
- Misuse of AI: Malicious actors can leverage LLMs to exploit vulnerabilities identified in Common Vulnerabilities and Exposures (CVE) reports. These risks are typically reduced when training data does not include CVE descriptions.
- Mitigating privacy concerns: Tech companies are helping to address privacy concerns by controlling data usage, such as giving users the ability to delete stored information and preventing data from being used to train models. However, users are advised to refrain from sharing sensitive data with AI platforms.
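To illustrate the second point above, the following is a minimal, hypothetical sketch (not drawn from the CSA article) of the kind of flaw AI-generated code can contain: a coding assistant asked to "fetch a user by name" might build the SQL query by string interpolation, which a reviewer would flag as an SQL injection vulnerability.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable pattern: attacker-controlled input is interpolated into the query.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Reviewed version: a parameterized query neutralizes the injection.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"                # classic injection payload
print(find_user_unsafe(conn, payload))  # returns every row: [(1,), (2,)]
print(find_user_safe(conn, payload))    # returns no rows: []
```

Both functions compile and run without error, which is precisely why such defects go undetected without the human supervision the CSA recommends.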
The CSA’s list of best practices to address privacy and security concerns associated with Gen-AI and LLMs includes the following:
- Improve employee awareness and training on associated risks
- Review and update IT and data loss prevention policies
- Provide human supervision of Gen-AI systems and LLMs
- Stay informed about Gen-AI developments and associated risks
The article demonstrates the CSA’s cautiously optimistic view of general AI and LLMs, highlighting the delicate balance needed to develop general AI and LLMs responsibly. Understanding these realities and implementing the necessary safeguards will be critical for organizations looking to integrate general AI and LLMs into their business processes.
* * * * *
© 2024 Baker & McKenzie.Wong & Leow. All rights reserved. Baker & McKenzie.Wong & Leow is a limited liability partnership and is a member firm of Baker & McKenzie International, a global law firm with member law firms worldwide. Consistent with common terminology used in professional services organizations, reference to a “director” means a person who is a partner, or equivalent, in such law firm. Similarly, reference to an “office” means an office of such law firm. This may be referred to as “lawyer advertising” requiring notice in some jurisdictions. Past performance does not guarantee a similar result.