On October 16, 2024, the New York Department of Financial Services ("NYDFS" or "DFS") issued guidance to raise awareness of cybersecurity risks arising from the use of artificial intelligence ("AI") by DFS licensees, such as insurers and virtual currency businesses. The risks stem both from the offensive use of AI by threat actors and from businesses' growing reliance on AI. The brief guidance acknowledges that many AI risks exist, focuses on specific cybersecurity risks, and highlights the risks NYDFS considers most significant, including:
- AI-powered social engineering – Threat actors are increasingly using AI for deepfakes to obtain sensitive information, bypass biometric verification, and defraud businesses.
- AI-enhanced cybersecurity attacks – AI can amplify the power, scale and speed of attacks, and can also reduce the technical skills needed to launch them.
- Exposure or theft of large amounts of nonpublic information – AI relies on large amounts of information, which often includes nonpublic data, making it an attractive target for malicious actors.
- Increased vulnerabilities due to third-party, supplier and other supply chain dependencies – Collecting and curating the large amounts of data needed for AI often involves multiple third parties, and each link in the chain introduces potential security vulnerabilities.
The guidance also provides examples of measures, listed below, that can help mitigate AI-related cybersecurity risks. It recognizes that, used together, these actions provide multiple layers of security controls with overlapping protections, so that if one control fails, others remain to prevent or reduce the impact of an attack. Because the guidance is brief (and the appropriate controls are fact-dependent), the measures are high level, and more sophisticated companies should already be implementing them to some extent. The real takeaway is the importance of integrating AI considerations into these existing controls.
- Risk assessments and risk-based programs, policies, procedures and plans – Revise to take into account AI-related threats.
- Third-party service provider and vendor management – Perform due diligence on third-party service providers, taking into account the AI-related threats each provider presents.
- Access controls – Implement multi-factor authentication ("MFA") with authentication factors more likely to resist AI bypass, such as digital certificates or physical security keys (a certificate-based example appears after this list).
- Cybersecurity training – Conduct for all company personnel, taking into account AI-related threats.
- Monitoring – Monitor AI-powered products or services for unusual query behavior that might indicate an attempt to extract sensitive information, and block queries that might publicly expose that information (a monitoring sketch follows this list).
- Data management – Implement data minimization, inventories and access restrictions.
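To make the access-control point concrete, the short Python sketch below (one possible approach by way of illustration, not part of the DFS guidance) configures a server to require client certificates, i.e., mutual TLS. Unlike a password or one-time code, the private key behind a digital certificate or physical security key cannot be read aloud to a deepfaked "help desk" caller, which is why these factors resist AI-powered social engineering. The file names are hypothetical placeholders.

```python
import http.server
import ssl

# Minimal mutual-TLS sketch: clients must present a certificate signed by a
# private CA. The file names below are hypothetical placeholders.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="client_ca.pem")  # CA that issues client certificates
context.verify_mode = ssl.CERT_REQUIRED  # reject any client without a valid certificate

server = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```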
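Similarly, monitoring for unusual query behavior can start with a simple review of query logs for anomalous volume or sensitive-data patterns. The sketch below is purely illustrative; the threshold and the single pattern shown are assumptions, and a production system would use the organization's own baselines.

```python
import re
from collections import Counter

# Hypothetical monitoring sketch: flag accounts whose query volume or content
# suggests an attempt to extract nonpublic information from an AI-powered
# service. The threshold and pattern are illustrative, not recommendations.
RATE_THRESHOLD = 100  # queries per hour treated as anomalous
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., a US Social Security number

def review_queries(log):
    """log: iterable of (user_id, query_text) pairs covering one hour."""
    volume = Counter(user for user, _ in log)
    flagged = set()
    for user, query in log:
        if volume[user] > RATE_THRESHOLD or SENSITIVE.search(query):
            flagged.add(user)  # candidate for blocking or human review
    return flagged
```

Flagged accounts would then feed whatever blocking or escalation process the business already runs.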
Our opinion
Although the guidance is aimed at businesses regulated by DFS, the risks and strategies discussed therein are relevant to any organization attempting to address cyber risks arising from the growth of generative AI.
Confidential business information. Businesses should focus not only on AI-related risks to personally identifiable information ("PII"), breaches of which trigger reporting obligations, but also on threats to confidential business information, such as trade secrets, the theft of which could be more damaging to the business in the long run than any breach of personal information.
Vendors and breach notifications. When preparing to contract with vendors, businesses should be alert to situations where the activity may be regulated by an entity such as NYDFS but the vendor may not be, and determine the impact this may have on notification obligations in the event of a breach at the third party. Especially if a vendor is smaller and has less developed incident response capabilities, companies can benefit from contractual measures that ensure they retain control of response and notification processes.
Vendors and AI. When performing due diligence on vendors that use or provide AI, assess the potential risks both to the vendor and to the business.
Data inventories and data minimization. DFS reminded licensees that they must have a data inventory in place by November 1, 2025. Of course, deleting data that is no longer needed not only reduces the risk of an information breach (and the regulatory fine that frequently accompanies one), but also reduces what needs to be inventoried.
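As a purely illustrative sketch of how minimization shrinks the inventory problem, the snippet below purges records past an assumed retention period. The field names and the seven-year period are hypothetical; actual retention schedules are matters of legal and business judgment.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention sweep: records past their retention period are purged,
# shrinking both breach exposure and the inventory that must be maintained.
RETENTION = timedelta(days=7 * 365)  # assumed seven-year retention period

def sweep(records, now=None):
    """records: dicts with 'id' and 'created' (timezone-aware datetime) keys."""
    now = now or datetime.now(timezone.utc)
    keep = []
    for rec in records:
        if now - rec["created"] > RETENTION:
            print(f"purging record {rec['id']}")  # placeholder for real deletion + audit log
        else:
            keep.append(rec)
    return keep
```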
Vendors and data minimization. In our experience, a serious risk lies in excessive data retention by vendors, including legacy vendors. As companies terminate vendors, it is critical to have off-boarding processes that migrate and then delete existing data. For long-term vendor relationships, businesses should consider specifying retention periods for their data so that it is deleted in the normal course of business once those periods expire.