On October 16, 2024, the New York Department of Financial Services (DFS) published an industry letter to regulated entities entitled “Cybersecurity risks arising from artificial intelligence and strategies to combat associated risks”.
The letter “is intended to be a tool to help covered entities understand and assess the cybersecurity risks associated with the use of AI and the controls that can be used to mitigate those risks.” It does not impose compliance requirements beyond the existing DFS Cybersecurity Regulation; rather, it provides guidance on how that regulatory framework can be used to assess and mitigate risks arising from artificial intelligence (AI).
The guidance highlights risks such as AI-enabled social engineering, AI-enhanced cybersecurity attacks, exposure or theft of large volumes of non-public information, and increased vulnerabilities arising from third-party, vendor, and other supply chain dependencies.
The guidance also suggests controls and measures organizations can use to mitigate AI-related threats, including: risk assessments and risk-based programs, policies, procedures, and plans; management of third-party service providers and vendors; access controls; and cybersecurity training, monitoring, and data management. DFS notes that AI threats continue to evolve and that “it is essential that covered entities review and reevaluate their cybersecurity programs and controls at regular intervals, as required by Part 500.” Although the guidance does not impose additional compliance obligations, these baseline measures will almost certainly be assessed in the event of a DFS examination. Whether or not your organization is a DFS-regulated entity, the guidance describes basic cybersecurity hygiene for addressing AI risks and their mitigation, and it is therefore worth reviewing.