By Ankush Tiwari
In the digital age, where technology is seamlessly integrated into daily life, cyber threats have evolved in complexity and scale. As AI advances, so do the opportunities for malicious actors.
A few months ago, the World Economic Forum warned in a report that the global shortage of cybersecurity professionals stood at 4 million. Even as AI-based cyber threats increase, the AI ecosystem lacks the necessary defences. A Skillsoft survey of more than 5,100 global IT decision-makers confirmed this concern: AI and cybersecurity topped IT executives' investment lists. Given the considerable overlap between the two areas, it is reasonable to conclude that the largest share of global technology investment will be channeled into AI capable of mitigating cybersecurity threats.
Deepfake frauds: the dark side of AI
Deepfakes are hyper-realistic images, audio files, or videos generated by AI that are indistinguishable from the real thing. Undetectable to the naked eye, deepfake scams thrive on AI-powered digital interactions to defraud and defame individuals and organizations. They rely on social conditioning to lower our guard, striking even tech-savvy professionals. CEOs and CFOs who hold sensitive access are also human and just as vulnerable, whether they accept it or not. The costs of a scandalous CFO deepfake are too high: it can wreak havoc within the company and on its stock price, and even a temporary jolt can wipe thousands of crores off its market capitalization.
The NSE CEO’s deepfake, which prompted the stock exchange to issue a clarification, is a warning shot to CEOs of all listed companies. Businesses are just one step away from instability if they do not have preventative mechanisms in place to combat AI-driven fraud.
Imagine a cybercriminal or short seller who wants to profit from a collapse in stock prices. They only need audio clips, images, and footage of the body language of the company’s CXO to craft a harmful deepfake. In most cases, social networks, websites and YouTube channels feature the required videos, making them easily accessible to scammers. Using AI tools, they can create and distribute a deepfake in which, for example, the CEO, CFO or auditor expresses concern that the going-concern assumption, as required under Ind AS, is not adequately disclosed or accounted for in the recently published annual report. If such a video goes viral overnight, mutual funds, HNIs and hedge funds will sell their stocks in the first trading session, and company executives will watch the stock price fall before they even realize what hit them.
Welcome to the age of AI, where you unwittingly give up control of your data and must either pay a ransom or fall prey to malicious deepfakes. In addition to employee training, what is needed today are deepfake detection tools that rely on AI models to spot fakes and alert humans. Companies and governments around the world are investing in these detection mechanisms to protect sensitive information and prevent the misuse of synthetic media.
Cyberbullying: AI as a shield
Cyberbullying is another growing concern in today’s hyper-connected society. Unlike traditional forms of bullying, cyberbullying exploits anonymity and reach, often leaving victims feeling powerless and exposed. The emotional toll on victims is profound, leading to anxiety, depression and, in extreme cases, self-harm.
With the evolution of AI, we have entered an era where you may be intimidated by AI language models themselves. They can mimic the behavior of bullies to coerce you into submitting to their influence. Combined with deepfakes of police officers or government agencies, they can psychologically wear you down until their goal is achieved. AI tools that detect synthetic activity in your IT systems are a prerequisite for sanity.
India has already seen several cases of "digital arrest" scams, in which deepfakes posing as income tax officials or police officers intimidate victims and extort money by threatening to implicate them in fabricated legal cases or drug deals. In many cases, elaborate stories about victims' loved ones were developed to make the threat credible. These digital interactions, carried on the shoulders of deepfakes of real civil servants, are expected to multiply, and businesses and governments need AI safeguards to counter them. If a man could run a fake court in Gujarat for years without getting caught, as was discovered in Ahmedabad, cyberbullying of this kind is not a far-fetched threat.
Identity fraud: a persistent cyber threat
Identity fraud is a long-standing cybersecurity problem, exacerbated by the digital shift of personal and financial data. Fraudsters exploit stolen credentials to impersonate individuals, access sensitive information, or carry out unauthorized transactions. Traditional methods of combating identity theft, such as password protection and manual verification, are proving increasingly insufficient in the face of sophisticated techniques such as phishing and data breaches.
Does your Aadhaar card photo even look like you? Probably not: the image quality often makes it look more like your gorilla cousin. In computer systems that use facial recognition or facial verification, your laptop’s camera may provide similarly poor output. Now, what if AI were used to create better images, audio and videos to open fake bank accounts in your business's name and carry out benami transactions? Not only may the deepfaked faces of a company’s top executives look more convincing than the genuine article, but the company will also have a hard time convincing a human banker that it was not you. This risk is several times higher when the company is a financial institution where video KYC is part of the routine customer onboarding process. The only viable mitigation is to build AI protections that detect deepfakes and raise alerts, rather than removing video KYC altogether.
Conclusion
A few years ago, an AI craze swept the world when internet users first discovered the marvel called ChatGPT. Since then, big tech has been scrambling to build innovative ecosystems to stand out in the AI arms race. Advances in AI, while transformative, have also introduced significant cybersecurity challenges, as malicious actors increasingly exploit the accessibility and capabilities of these technologies. Hitherto unknown threats and new modes of cyberattack are expected to emerge as AI grows more sophisticated. As the AI conundrum deepens, it is encouraging to see that IT decision-makers are rightly focusing on cybersecurity and AI when making investment decisions.
(The author is Ankush Tiwari, founder and CEO of pi-labs.ai, and the opinions expressed in this article are his own)