The New York Department of Financial Services (“NYDFS”) recently released guidance on managing AI-related cyber risks for the financial services and insurance industries. Although the circular letter does not introduce any “new” obligations per se, it does clarify the Department’s expectations for how regulated entities should account for AI under its existing cybersecurity regulations.
The letter identifies specific AI-related cybersecurity threats, such as AI-enabled social engineering. AI can also augment traditional cyberattacks by amplifying their power, scale and speed. The letter further notes that AI models often rely on large volumes of non-public information, which makes the models themselves attractive targets for attack. Additionally, relying on third-party suppliers and vendors for AI tools introduces vulnerabilities into the supply chain.
To mitigate these risks, the NYDFS advises regulated companies to consider specific AI-related risks when conducting comprehensive risk assessments. These assessments should cover not only the organization’s own use of AI but also any AI technology used by its third-party service providers. Based on the results of those assessments, companies may need to update their policies, procedures and incident response plans to adequately address AI risks. The NYDFS also highlights the need for cybersecurity training for all staff, including senior management, that covers awareness of AI threats and response strategies.
Put into practice: This latest thinking from the NYDFS adds to the growing patchwork of regulatory guidance on specific AI-related considerations (here, cybersecurity risks). Other guidance has largely focused on different types of AI-related harm, such as bias and discrimination. The letter also serves as a reminder to businesses that do not use AI themselves to be aware of the potential risks of relying on third parties that do, and to implement appropriate mitigation measures.