On Wednesday, October 16, 2024, the New York Department of Financial Services (DFS) announced new guidance to help organizations identify and protect against AI-specific cybersecurity risks. Driven primarily by advances in AI that are having a significant impact on cybersecurity, including by enabling new ways to commit cybercrime, the DFS guidance is aimed specifically at New York financial services businesses but is relevant to any business seeking to strengthen its cybersecurity and manage the risks posed by emerging technologies. The guidance addresses the “most significant” AI-related cybersecurity threats that organizations should consider when developing a cybersecurity program, establishing internal protocols, or implementing cybersecurity controls, and offers recommendations for those cybersecurity programs.
Who is affected
DFS’s existing cybersecurity regulation, codified at 23 NYCRR Part 500, is not affected by the new guidance. The regulation, as amended in November 2023, applies to any person operating or required to operate under a license, registration, or similar authorization under New York banking, insurance, or financial services law (called “covered entities”), which include banks, insurance companies, mortgage brokers, financial institutions, and third-party vendors that process non-public information on behalf of those financial entities. The new DFS guidance is specifically aimed at covered entities; however, its advice is useful to any business that has access to data and is connected to the Internet.
Cybersecurity risks posed by AI
- AI-based social engineering: Identified by DFS as the “most significant (cyber) threat to the financial services industry,” cybercriminals are increasingly using AI to create realistic fake photos, videos, audio, and text that they exploit in phishing attacks to steal login credentials, convince individuals to transfer funds to malicious actors, and gain access to company systems.
- AI-enhanced cybersecurity attacks: AI can be used to exponentially expand the scale and scope of attacks against companies’ technical infrastructure. And once a cyberattack has occurred, AI can be used for reconnaissance to mine larger amounts of data. Additionally, AI has lowered the barriers to entry for cybercriminals who otherwise would lack the technical expertise to carry out a cyberattack.
- Exposure or theft of large amounts of non-public information: Covered entities that develop AI tools, or use them in their own businesses to process large amounts of sensitive data, are particularly at risk, because potential access to a large store of sensitive data creates an attractive target for cybercriminals seeking to extract it for financial or other malicious motives. Additionally, the more data is processed, the more data must be protected. Finally, some AI tools require the storage of biometric data, which cybercriminals can misuse to create highly realistic deepfakes or to carry out additional data theft.
- Increased vulnerabilities due to supply chain dependencies: If covered entities use AI tools (and those tools integrate other third-party AI tools), there are multiple points of vulnerability at each link in the supply chain. In the modern, interconnected business world, if one link is compromised by a cyberattack, the entire chain is exposed and subject to attack. In other words, a company’s cybersecurity is only as strong as the weakest link in its supply chain.
Particularly vulnerable industries and businesses
- Companies that develop or use AI tools that process large amounts of data, as these entities provide the most attractive target for malicious actors to maximize the theft of sensitive information.
- Companies in an AI supply chain (e.g., when the covered entity uses an AI tool that integrates other third-party AI tools into its offering).
Guidance recommendations
Under existing DFS cybersecurity regulations, covered entities are already required to assess risks and implement minimum cybersecurity standards to mitigate those risks. The new DFS guidance builds on these requirements and recommends the following:
- Evaluate the entity’s internal use of AI as part of a risk assessment, including by evaluating the third-party tools it integrates.
- Adopt specific protocols to detect and mitigate AI-based social engineering. This includes adopting access controls that can resist AI-manipulated deepfakes, such as multi-factor authentication using digital certificates and physical security keys, or employing multiple authentication modalities simultaneously.
- For covered entities that develop or use AI tools to process large amounts of data, cybersecurity programs should include periodic staff training on how to secure and defend AI systems against attacks, how to develop AI systems safely, and where and how to rely on human review rather than AI.
- Develop procedures to conduct due diligence before working with a third-party vendor, especially one that provides an AI tool or services. These due diligence procedures should consider potential threats to the third party through its own use of AI and how those threats could impact the covered entity.
- For covered entities that use third-party service providers and their AI offerings, incorporate AI-specific representations and warranties into commercial agreements.
- Implement training specifically on social engineering, including how the use of AI can make social engineering, phishing, and deepfake efforts more difficult for individuals to detect.
- For covered entities that allow their personnel to use generative AI tools, implement monitoring to detect unusual usage behavior that may indicate a cyber threat (e.g., asking ChatGPT how to infiltrate a network or what code is needed to deploy malware).
- Implement effective data management and inventory protocols to limit exposure if malicious actors gain access to company systems. If a covered entity uses or relies on AI tools, additional controls should be implemented to prevent access to data used in connection with AI (whether for training or processing).
Although DFS’s guidance identifies covered entities as its target audience, all businesses could benefit from implementing these recommended actions, which can improve their cybersecurity risk profile whether or not they use AI tools.