- Cybersecurity developers use AI to create code to mitigate threats like spyware on mobile apps.
- One product manager said effective AI tools should be trained on “high-quality datasets.”
- This article is part of “Built it,” a series on digital technology and innovation trends that are disrupting industries.
Mobile applications are an integral part of our lives. As their foothold in society strengthens, their vulnerability to cyberattacks also increases.
With the emergence of new application security threats, cybersecurity professionals and developers are turning to artificial intelligence to improve the development, deployment and effectiveness of security patches.
Cybersecurity vulnerabilities in mobile applications
Jake Moore, global cybersecurity advisor at ESET, told Business Insider that the biggest cybersecurity threats affecting mobile apps are data leaks, spyware and phishing attacks.
He said poor data privacy safeguards contribute to these problems and could lead to leaks of sensitive information. He added that spyware campaigns such as Pegasus often target smartphone users, luring them into opening links in emails or text messages that expose them to dangerous software and viruses.
Outdated operating systems and apps downloaded from third-party stores also pose risks. “They may contain malicious apps designed to steal data or spy on the device,” Moore said.
Jimmy Desai, a consulting commercial lawyer at Keystone Law specializing in data protection, said cybersecurity incidents can also happen when someone loses their device. “People often use their cell phones for both work and social purposes, which can pose problems from a practical and legal perspective,” he said.
How Cybersecurity Professionals Can Use AI to Strengthen Mobile App Security
As cybersecurity threats to mobile apps grow in complexity and scale, advances in AI can provide practical solutions.
Moore said advanced AI algorithms could help cybersecurity experts identify and mitigate malware, phishing attacks and other threats before they affect the user. He argued that because a trove of data fuels AI, the technology will continue to learn, improve, and ultimately make mobile devices more secure.
These AI tools, Moore said, can “detect patterns and anomalies” that indicate malicious activity and can outperform traditional security measures. “This is particularly crucial in the rapidly evolving landscape of mobile applications, where new threats are constantly evolving or emerging,” he added.
Candid Wüest, vice president of product management at Acronis, said AI could help cybersecurity professionals understand how secure an application’s lifecycle is. He told BI that coding platforms like GitHub’s Copilot tool use AI to help software developers design robust and secure code for mobile apps.
One of Copilot’s features, Wüest said, can determine whether code that a developer writes or modifies would introduce new threats to mobile devices. He added that such tools could help ensure that mobile app code is secure and constantly tested for cybersecurity vulnerabilities.
Wüest said AI could also help detect anomalies in user activities within the app and identify fraud. “For example, if a loyalty app user logs in 100 times an hour and goes straight to a contest’s ‘submit’ page and submits their ID to win a prize, then it’s likely a bot trying to cheat and win,” he said.
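Wüest’s loyalty-app example boils down to a simple rate-based anomaly rule. As a minimal sketch only (the event format, function name, and 100-logins-per-hour threshold are illustrative assumptions, not taken from any specific product), such a check might look like:

```python
from collections import defaultdict

# Assumed threshold: more than 100 logins in a single hour is treated as bot-like.
MAX_LOGINS_PER_HOUR = 100

def flag_suspected_bots(events, max_per_hour=MAX_LOGINS_PER_HOUR):
    """events: iterable of (user_id, hour_bucket) login records.

    Returns the set of user IDs whose login count in any one hour
    exceeds the allowed limit.
    """
    counts = defaultdict(int)
    for user_id, hour in events:
        counts[(user_id, hour)] += 1
    return {user for (user, _), n in counts.items() if n > max_per_hour}

# A normal user logs in twice; a bot logs in 150 times in the same hour.
events = [("alice", 14)] * 2 + [("bot-1", 14)] * 150
print(flag_suspected_bots(events))  # → {'bot-1'}
```

Production systems would typically replace the fixed threshold with a model learned from historical traffic, which is where the AI-driven anomaly detection Wüest describes comes in.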
Christian Schläger, co-founder and CEO of app protection service Build38, said AI also helps mobile app developers implement countermeasures to maximize security “while minimizing impact on user experience.”
Wüest said his biggest recommendations for effectively using AI in mobile cybersecurity would be to collect and use “high-quality datasets for training,” continually update AI models to adapt to new threats, and integrate AI into security tools to strengthen their defenses.
The pitfalls of using AI in mobile app cybersecurity
Moore said that AI is not without its flaws and that there is a “relatively high” chance that it will make mistakes. AI algorithms, he said, could generate false positives due to discrepancies in the data sets used to train them, leading developers to mistake “legitimate activities for threats.”
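The false-positive problem Moore describes is usually measured as a rate: the share of legitimate activity a detector wrongly flags as malicious. As a quick illustration (the counts below are invented, not from any real system):

```python
def false_positive_rate(false_positives, true_negatives):
    """FPR = FP / (FP + TN): the fraction of legitimate events flagged as threats."""
    return false_positives / (false_positives + true_negatives)

# Invented example: out of 1,000 legitimate app actions, the model flags 50.
fpr = false_positive_rate(false_positives=50, true_negatives=950)
print(f"{fpr:.1%}")  # → 5.0%
```

Even a seemingly small rate like this can swamp developers with spurious alerts at mobile-app scale, which is why Moore stresses training on representative data.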
He added that cybersecurity professionals should train AI algorithms on “comprehensive, diverse, unbiased and up-to-date data sets” and merge AI with existing security infrastructure in a way that “complements and enhances their current tools and processes rather than replacing them.”
Schläger said another major problem is that cybercriminals are using AI to reverse-engineer mobile apps to understand how they work and “develop new attack scenarios.”
This can exacerbate privacy violations. But Wüest told BI that developers could mitigate the damage through data anonymization, more diverse training datasets, and persistent data monitoring. He added that developers should design AI tools to learn continuously, as well as “develop ethical guidelines and verify the local laws to combat the use of AI in cybersecurity.”
How users can improve the security of their applications
Moore said people could improve security on their phones by securing accounts with unique passwords, setting up multi-factor authentication, backing up data and regularly updating software.
Moore and Schläger said using private WiFi networks, as opposed to public hotspots, also provides good protection, especially when conducting sensitive business and transactions.
“Awareness and vigilance are essential to protect personal information from hackers and cybersecurity threats,” Schläger said.