(TNS) – Applications of artificial intelligence (AI) are growing rapidly and will continue to do so as the technology advances.
Today, CNHI and the Times West Virginian continue an ongoing series on AI and its potential benefits and concerns in various parts of daily life. This final installment revisits AI and its use in cybersecurity. Previous parts of the series examined the use of AI in education, healthcare, business, social media, emergency response, travel and journalism.
Artificial intelligence may constitute a new front line in the perpetual war between white-hat and black-hat hackers.
According to experts, AI has the potential to be a game-changer in digital security due to its ability to detect threats. Its ability, through algorithms and machine learning, to sift through an ocean of data to identify and neutralize threats places it far beyond human capabilities, perhaps providing an ever-alert and tireless sentinel protecting important digital fortresses.
“AI is like a double-edged sword. On the one hand, it is the vigilant guardian of the digital domain,” said Joseph Harisson, CEO of the Dallas-based IT Companies Network. “AI algorithms act like digital bloodhounds, detecting anomalies and threats with a precision that human analysts might miss.”
However, it is this tremendous power of rapid analysis of large data sets that also makes AI a powerful tool for criminals and other malicious actors.
“They use AI to create more sophisticated cyberattacks, turning the hunter into the hunted,” Harisson said. “These AI-powered threats are like chameleons, constantly evolving to blend into their digital environment, making them harder to detect and thwart. It’s a perpetual game of cat and mouse, both parties leveraging AI to outwit the other.”
Researchers are building computer networks that resemble the structure of the human brain, leading to breakthroughs in AI research. This research not only serves to strengthen cybersecurity, but also to improve real-world security. Biometric scanning, such as fingerprints and facial recognition, helps law enforcement secure important sites like airports and government buildings. Security companies also use these technologies to secure their clients’ assets. This has even reached the home sector, with companies like Ring providing home security solutions.
Katerina Goseva-Popstojanova, a professor in the Lane Department of Computer Science and Engineering at West Virginia University, said AI has been part of the cybersecurity landscape for a long time. Machine learning, an integral part of AI, has been used for various purposes in this field.
Take antivirus software, Goseva-Popstojanova said. Programs such as Norton and Kaspersky have built-in AI trained on known viruses so they can detect viruses on host machines. Spam filters work the same way. Although ChatGPT has made AI a household name, the technology itself has long been used behind the scenes.
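The detection approach Goseva-Popstojanova describes, training on known malicious samples so new ones can be recognized, can be sketched as a tiny naive Bayes spam filter. This is a minimal illustration with invented training messages, not the actual technique used by any named product:

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the higher naive Bayes log-probability."""
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label in counts:
        # Prior: fraction of training messages with this label.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((counts[label][word] + 1) / (n + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented training data for illustration only.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("quarterly report attached", "ham"),
]
counts, totals = train(training)
print(classify("free prize money", counts, totals))       # spam
print(classify("monday meeting report", counts, totals))  # ham
```

Real filters use far larger training sets and richer features, but the principle is the same: learn statistical fingerprints of known bad samples, then score new input against them.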
DEMOLISHING FORTRESSES
Aleksa Krstic, CEO of Localizely, a software-as-a-service translation platform based in Belgrade, Serbia, said AI-powered cameras can analyze video feeds in real time and identify objects or potential threats.
“AI algorithms can recognize individuals, enabling more effective access control and tracking,” he said. “AI systems can learn what ‘normal’ behavior looks like in a specific environment and trigger alerts when deviations occur.”
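The baseline-and-deviation idea Krstic describes can be sketched in a few lines: learn the statistics of normal readings, then flag anything far outside them. The traffic numbers below are invented for illustration:

```python
import statistics

def build_baseline(observations):
    """Learn the mean and spread of 'normal' readings."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# Hypothetical example: people counted per minute by a lobby camera.
normal_traffic = [12, 15, 14, 13, 16, 15, 14, 12, 13, 15]
baseline = build_baseline(normal_traffic)

print(is_anomaly(14, baseline))   # False: within the normal range
print(is_anomaly(90, baseline))   # True: large deviation triggers an alert
```

Production systems model many signals at once rather than a single count, but the alerting logic reduces to this same comparison against a learned baseline.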
However, AI can also be used to breach the cyberfortresses built by governments and corporations. Krstic said AI can automate attacks at scale, generating sophisticated phishing emails or launching automated botnets. Through fake videos generated at scale, AI can spread misinformation or manipulate public opinion for personal gain.
“From my current point of view, everything can be used for better or for worse,” Goseva-Popstojanova said. “So let’s say dynamite. You can use dynamite to build tunnels or mines or you can use dynamite to kill people. It’s the same with AI.”
Goseva-Popstojanova said generative AI like ChatGPT can be used by cybercriminals to scour the internet for publicly available information and quickly build a profile of a person. That profile can then be used to commit a crime, whether identity theft, a scam or spam. The weakest link in cybersecurity is the human element. Social engineering, or using social skills to manipulate an individual into carrying out a desired action, becomes much easier with AI tools such as deepfakes or voice spoofing.
“There’s something called phishing, or vishing if it’s done over the phone, and now it’s done over text, where someone pretends to be someone else and scams the person,” she said. “One of the reasons the MGM Hotels attack happened is that it wasn’t a sophisticated problem. It was just someone using a social engineering attack to obtain the information needed to log into their system.”
A cyberattack on MGM Resorts this fall cost the company millions of dollars in lost revenue, exposed the personal information of tens of millions of loyalty rewards guests, and disabled some on-site computer systems.
DECEPTIVE AI
In the physical world, criminals can use tactics like spoofing to fool AI. The technique can involve simple measures, such as using a photo of a person to fool facial recognition. Or, if someone wants to avoid being recognized in public, a hoodie made from a special material that reflects light differently than skin can be used to break the facial recognition algorithm. More sophisticated AI can look for signs of life to avoid being fooled by a photo; however, a video of a person’s face might do the trick. Makeup, masks and 3D masks can all be used. Finally, an attacker can hack the database itself and change the settings so that the attacker’s face or fingerprint is accepted by the system.
Adversarial machine learning is the area of research that examines how machine learning can be used to attack other AI systems. Goseva-Popstojanova said this is a huge area of research today, looking for ways in which algorithms can be fooled and classify malicious activity as non-malicious. This allows researchers to find more robust ways to secure a system. A previous version of ChatGPT could be tricked into disclosing an individual’s private information, such as their emails or home addresses, by repeatedly spamming the AI with specific words. Researchers deliberately worked on ways to break the AI to disclose this information, then reported it to OpenAI to fix the flaw.
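A toy illustration of the evasion idea behind adversarial machine learning: a naive keyword filter classifies input as malicious, and a trivially perturbed input slips past it. This is a deliberately simple sketch of the concept, not the gradient-based attacks researchers actually study:

```python
def naive_filter(text, blocklist=("free", "prize")):
    """Flag text containing any blocked keyword (exact token match)."""
    return any(word in text.lower().split() for word in blocklist)

# The unmodified message is caught.
print(naive_filter("win a free prize"))               # True

# Inserting invisible zero-width spaces changes the tokens,
# so the exact-match filter classifies the attack as benign.
print(naive_filter("win a fr\u200bee pri\u200bze"))   # False
```

Attacks on real models perturb inputs in analogous ways, making tiny changes a human would never notice so the classifier labels malicious activity as non-malicious; finding such weaknesses is how researchers harden systems.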
One thing is clear: Pandora’s box is open and AI is now part of the world, officials said. Although algorithms and machine code lie behind the veneer of everyday life, the invisible war between white and black hackers will define the lives of people around the world.
In October, FBI Director Christopher Wray spoke at a conference with leaders of the Five Eyes, a coalition of the United States, United Kingdom, Canada, Australia and New Zealand. The coalition was created in the aftermath of World War II to share intelligence and collaborate on security. The conference took aim at China, which Wray called the main threat to global innovation, and accused the country’s government of stealing AI research to further its own hacking efforts. Thus, AI extends from the individual level to the global political level.
“We are interested in the AI space from a security and cybersecurity perspective and are therefore proactively aligning our resources to collaborate with the intelligence community and our private sector partners to better understand the technology and any potential downstream impacts,” the FBI’s national press office wrote in an emailed statement. “The FBI is particularly focused on anticipating and defending against threats from those who use AI and machine learning to fuel malicious cyber activity, commit fraud, propagate violent crime, and threaten our national security. We are working to stop actors who attack or degrade AI/ML systems used for legitimate and lawful purposes.”
Dhanvin Sriram, founder of Prompt Vibes and an AI expert, said machine learning has more than proven its value in quickly analyzing data and finding patterns that might indicate risk. However, caution must be exercised when evaluating any new technology that may be paradigm-shifting.
“The real challenge is developing AI systems that not only strengthen defenses, but also thwart malicious AIs,” he said. “It is a constant game of cat and mouse where staying ahead requires continuous innovation and a careful approach to ethical considerations. In this dynamic security landscape, the conflict between AI-based defense and malicious AI highlights the need for continued advancement to ensure AI remains a force of protection, not exploitation.”
©2023 The Times West Virginian, distributed by Tribune Content Agency, LLC.