In 2017, the New York Department of Financial Services became the first state regulator to issue mandatory cybersecurity regulations for financial institutions such as banks, insurance companies, and financial services companies, and it has now taken the initiative to issue new guidance addressing the threat and promise of artificial intelligence.
Although the new guidance does not impose specific new requirements beyond the obligations already contained in the existing cybersecurity regulations of 23 NYCRR Part 500, it explains how banks, insurance companies, and financial services companies can use the current regulatory framework to assess and address the cybersecurity risks posed by AI.
The new guidance highlights four specific areas in which AI has increased the risk of cyberattacks.
The first area is social engineering, which the guidance describes as “one of the most significant threats to the financial services industry.” Through social engineering, cybercriminals use specifically targeted spear phishing emails, vishing phone calls, and smishing text messages in which they pose as a legitimate customer, government agent, vendor, or other trusted source to trick their targeted victims into providing personal information or clicking on a link infected with malware. Although some sophisticated cybercriminals have proven extremely adept at social engineering, in the past many attempts were almost laughable, particularly when the socially engineered communications were created by foreign cybercriminals whose primary language was not English. But no more. Thanks to AI, cybercriminals are now able to create completely credible spear phishing emails, vishing phone calls, and smishing text messages. And things aren’t as bad as you think; they’re much worse. Readily available deepfake technology allows cybercriminals to imitate the voice or appearance of bank officials or others to make their cyberattacks even more credible. According to identity verification company Onfido, deepfake attacks have increased by 3,000% in the last year alone.
The second area of cybersecurity risk addressed in the guidance concerns AI-enhanced cyberattacks, in which AI-enhanced malware, such as ransomware delivered through social engineering, is more sophisticated and increasingly capable of evading security defenses. Additionally, AI is increasingly being used by less sophisticated cybercriminals to create highly complex malware. As noted in the guidance, “this lower barrier to entry for threat actors, combined with the speed of deployment of AI, is likely to increase the number and severity of cyberattacks.”
The third area of cybersecurity risk identified in the guidance is the vulnerability of the large amounts of non-public information maintained by banks, insurance companies, and financial services companies, including biometric data such as facial recognition and fingerprint data used for authentication. Once stolen, this data would allow cybercriminals to bypass certain forms of multi-factor authentication and create credible deepfakes.
The fourth area of cybersecurity risk outlined in the guidance relates to increased vulnerabilities stemming from supply chain dependencies. Even a company with a robust cybersecurity program remains vulnerable to supply chain attacks, in which cybercriminals target the developers of software used by banks, insurance companies, financial services companies, and others, infecting that software with malware that is then downloaded by the software’s users, who are the real targets. Supply chain attacks have been responsible for major ransomware attacks and data breaches, such as the SolarWinds supply chain attack that affected 18,000 companies using its software, including Microsoft.
Although the guidance does not require specific cybersecurity measures, it advises that, to comply with 23 NYCRR Part 500 in light of continuing AI threats, companies “provide multiple layers of security controls with overlapping protections so that if one control fails, other controls are there to prevent or mitigate the impact of an attack.” And while AI can be a weapon used by cybercriminals, it can also be used to defend against cyberattacks. The guidance states that “organizations should explore the substantial cybersecurity benefits that can be achieved by integrating AI into cybersecurity tools, controls and strategies.” AI’s ability to analyze large amounts of data quickly and accurately is extremely valuable for: automating routine, repetitive tasks, such as reviewing security logs and alerts; analyzing behaviors, detecting anomalies, and predicting potential security threats; effectively identifying assets, vulnerabilities, and threats; responding quickly once a threat is detected; and accelerating the return to normal operations. So while AI may be the problem, it can also help provide the solution.