Last year, the case of a distraught mother who received a chilling call from "kidnappers" claiming to have abducted her daughter raised alarms in the US Senate about the harmful impact of artificial intelligence. The news took the country by storm: the "kidnappers" and the girl's voice turned out to be hackers using generative AI to extort money. With the proliferation of such cases, people's ability to distinguish what is real from what is merely generative AI is slowly eroding.
Sophisticated Cyber Threats
While it is true that generative AI has exceptionally transformed the way we operate, with its integration into sectors such as education, banking, healthcare and manufacturing, it has also transformed the paradigm of cyber risks and safety as we know it. With the generative AI industry expected to add $7-10 trillion to global GDP, the development of generative AI solutions (such as ChatGPT, launched in November 2022) has triggered a vicious cycle of benefits and disadvantages. According to a recently released report, there has been a 1,265% increase in malicious phishing emails and a 967% increase in credential phishing since Q4 2022, driven by the use and manipulation of generative AI.
With the rise of sophisticated cyber threats, organizations and individuals are exposed to new avenues of cyberattack, pushing businesses to adapt to ever-changing technology. According to a study conducted by Deep Instinct, approximately 75% of security professionals witnessed an increase in cyberattacks over the past year alone, with 85% of respondents attributing the increased risk to generative AI.
It is more imperative than ever to develop collaborative solutions to protect confidential information, identities and even human rights.
As generative AI continues to mature, newer and more complex threats have emerged: through cognitive-behavioral manipulation, voice-activated toys and gadgets have encouraged dangerous behavior in children and posed serious threats to privacy and security. At the same time, remote, real-time biometric identification systems (such as facial recognition) have further compromised the right to privacy and, on several recent occasions, put individuals at serious risk.
While generative AI has had a significant impact on productivity in the industrial domain, with 70% of professionals reporting an increase in productivity, growing manipulation via generative AI (especially over the last two years) has left organizations increasingly vulnerable to attack, with respondents citing growing privacy concerns (39%), undetectable phishing attacks (37%) and rising attack volumes (33%) as the top challenges.
The recent identification by several cybersecurity firms of sophisticated hacker groups using generative AI solutions has raised alarm bells: AI models are being leveraged to translate material and identify coding errors, maximizing the impact of cyberattacks.
Faced with the proliferation of these multifaceted cyberattacks, robust initiatives have become necessary. Although strict ethical and legislative frameworks are in place to combat growing AI-driven cybercrime, gaps and a lack of industry understanding persist in the regulation of generative AI.
The Bletchley Declaration
Given growing concerns over the misuse of generative AI, it becomes imperative to protect consumers from the challenges posed by these advanced technologies, enabling them to navigate digital spaces safely.
World leaders have also launched collaborative efforts to understand the potentially catastrophic harm caused by the malicious use of AI, as demonstrated by the recent signing of the Bletchley Declaration at the AI Safety Summit. Signatories include China, the European Union, France, Germany, India, the United Arab Emirates, the United Kingdom and the United States.
At the institutional level, strong policy efforts are essential to confront these growing challenges, for example by strengthening requirements for watermarking AI-generated content. This could help reduce cyber threats from such content by alerting consumers so they can take appropriate action. Additionally, collaboration between institutional and industry stakeholders could drive the development and implementation of realistic, practical and effective frameworks, with public comments included to further strengthen the drafting of these regulations.
Promoting digital awareness
At the corporate level, more emphasis should be placed on raising digital awareness through professional media and digital literacy training sessions, fostering strong digital fluency in the workplace while identifying and filling digital knowledge gaps among employees. This could enable staff to navigate the digital landscape effectively, assess credibility and verify the authenticity of sources.
However, for a truly holistic approach to cybersecurity in an AI-driven world, we cannot neglect the crucial role of non-governmental and other advocacy organizations that introduce individuals to the wonders of the digital world while equipping them with the essential tools of cyber literacy. By fostering a digitally savvy citizenry, we can build a stronger defense against the ever-evolving threats in this AI-driven digital landscape.
As we move toward the development of more sophisticated systems and technologies, collaborative efforts are paramount to nurturing a sense of security, empowering individuals, organizations and communities to protect their personal interests and identities.
Charu Kapoor is National Director of NIIT Foundation