AI Risks
Deepfakes
DFS highlights a significant increase in the number of cyberattacks involving deepfakes. Deepfakes are synthetic video or audio recordings created using AI that manipulate existing content to make it appear as if someone is doing or saying something they never did. Although phishing attacks have been used for decades, deepfakes can make them more effective by making a request appear to come from someone you trust.
For example, earlier this year, a finance worker at a multinational company received an email appearing to be from the company's CFO discussing the need to complete a transaction. The worker's initial doubts about the authenticity of the request were allayed during a video conference with the CFO and several other employees, during which the CFO asked the worker to transfer approximately $25 million. However, what appeared to be live video of the CFO and colleagues on that call was actually a deepfake. The worker responsible for transferring the money was the only real person on the video call.
Other Cyberattacks
Although deepfakes are a new form of attack, AI also augments more traditional cyberattacks. AI can be used to analyze information, identify security vulnerabilities, and develop malware variants in a fraction of the time it would take a human. One of the concerns raised by DFS is that this could increase the number of people capable of carrying out cyberattacks. While attackers once needed significant technical skill to mount these attacks, AI opens the door to those without advanced technical knowledge.
DFS Regulations and Guidelines
Under the DFS Part 500 Cybersecurity Regulation, covered entities, such as licensed insurers, are required to conduct periodic cybersecurity risk assessments, updating the assessment annually and whenever a material change in the business or in technology alters the entity's risks. Additionally, entities must maintain a cybersecurity program and policies based on this risk assessment.
Currently, no regulation specifically addresses the use of AI, but DFS guidance directs entities to consider AI when conducting their risk assessments. Specifically, DFS recommends considering the following factors:
- The entity’s own use of AI.
- Use of AI by service providers and third-party vendors, including whether their policies and procedures meet minimum requirements and whether they are required to notify the entity of any cybersecurity event.
- Vulnerabilities arising from AI applications that pose risks to the confidentiality, integrity and availability of the entity’s information systems or non-public information.
In addition to adapting risk assessments to better identify threats posed by AI, the circular letter makes a number of recommendations, including adopting training on AI threats, implementing robust access controls, and requiring third parties to provide notice of any cybersecurity event. Some of these recommendations are based on regulatory requirements that take effect in November 2025, but DFS encourages earlier adoption.(1)
Benefits of AI in Security
Despite the known threats of AI as an attack tool, DFS recognizes that AI can also be a security tool. DFS encourages entities to explore the use of AI for tasks such as security log review, data analysis, anomaly detection, and security threat prediction. As the technology continues to advance, entities should be aware not only of the security threats AI poses, but also of how AI can be used to enhance security.
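To make the anomaly-detection use case concrete, the sketch below shows one common approach: training an unsupervised model on features extracted from authentication logs and flagging statistical outliers for human review. This is a minimal illustration, assuming Python with scikit-learn; the feature choices and sample data are hypothetical and are not drawn from DFS guidance.

```python
# Minimal sketch: flagging anomalous logins with an unsupervised model.
# Assumes scikit-learn; features and sample values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one login event: [hour_of_day, failed_attempts, mb_downloaded]
baseline = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 10],
    [9, 0, 9], [13, 1, 11], [10, 0, 14], [15, 0, 13],
])

# Fit on historical "normal" activity; contamination is the expected
# fraction of outliers and would be tuned against real log data.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# Score new events: a 3 a.m. login with repeated failures and a large
# download should stand out from the baseline pattern above.
new_events = np.array([[10, 0, 11], [3, 7, 900]])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - route to analyst" if label == -1 else "normal"
    print(f"event={event.tolist()} -> {status}")
```

In practice, a model like this would supplement, not replace, existing monitoring rules, and flagged events would still be routed to a human analyst for review.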
ArentFox Schiff is a leading law firm advising clients on cybersecurity and AI. Please contact us if we can help you.
(1) Beginning November 1, 2025, DFS will require multi-factor authentication for all authorized users accessing entity information systems or non-public information. This includes not only employees and third parties, but also customers. In addition, entities will be required to maintain inventories of data used to identify risks.