Transformative technologies now threaten to rewrite truth at the touch of a keyboard. Not long ago, a video surfaced purporting to show a Malaysian political aide in a compromising situation with a minister. This deepfake not only prompted an investigation into allegations of corruption, but also shook the very foundations of the country’s government. The repercussions were immediate and profound: a coalition government found itself on the brink of collapse.
In another corner of the world, deepfake technology was turned against a UK-based energy company, convincing it to part with almost £200,000 on the strength of nothing more than a voice: a falsified echo of its CEO’s. Earlier this year, deepfakes of various political leaders surfaced online, fueling outrage and sparking heated debates in the run-up to the US primaries. These incidents pierce the abstract veil of cyber threats, demonstrating their brutal impact on real-world stability and trust.
Kiran Sharma Panchangam Nivarthi has witnessed the escalation of these cyber threats. With over sixteen years of experience in cybersecurity, he has stood at the forefront of tackling the complexities introduced by artificial intelligence (AI) and machine learning (ML). Nivarthi’s technical background is also closely tied to the nuances of cybersecurity law, earning him accolades such as the CSO50, Eminent Fellow of the Academic and Scientific Society of Researchers, and the title of Fellow of Information Privacy.
As an author of essential security articles, Nivarthi understands the nuances of AI and ML in cyber defense and offense. His point is clear: these tools hold immense power to detect data patterns and vulnerabilities that are beyond human capabilities. But they also introduce unprecedented risks. Hackers can now leverage AI to orchestrate more sophisticated attacks, even going so far as to manipulate the very data that AI defenses are trained on.
“AI and ML models are far more effective than any human being at analyzing data, whether it is traffic between two servers or personal data extracted from social media sites,” he observes in a research paper in the International Journal of Trends in Scientific Research and Engineering. “They can identify patterns in data that hackers and other human malicious agents don’t even think to look for, opening the door to new attack vectors. This inherent strength of AI/ML can be leveraged to detect system vulnerabilities and target the most significant vulnerability in cyber defense, i.e., humans.”
Nivarthi advocates a proactive stance. “Our defense systems,” he asserts, “must evolve faster than threats.” A glance at another article by Nivarthi reveals his concerns that AI perpetuates bias and threatens the right to privacy.
“As AI systems become more advanced and complex, there are growing concerns about the risk of algorithmic discrimination and the erosion of privacy rights,” he posits in his article, Rights in the Age of Intelligence: Exploring the Intersection of AI and Legal Principles. The research paper explores the intersection of AI, algorithmic discrimination protections, and data privacy from a Bill of Rights perspective.
The challenge of AI-based cyberattacks is akin to an arms race, with each side continually upping the ante. As Nivarthi notes in his research paper, “Evolved/enhanced AI-based cyberattacks are a natural consequence of advances in AI and ML and easy access to powerful AI and ML models and systems.” Nivarthi’s leadership demonstrates that to stay ahead of the curve, we must harness the power of AI and ML not as mere tools but as allies, integrating them with privacy and ethics principles to build a resilient digital fortress. This is a battle of wits against an ever-changing adversary, and our strategies must keep pace to ensure our collective digital future remains secure and grounded in reality.
© 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
*This is a contributed article and this content does not necessarily represent the views of techtimes.com