Hundreds of cybersecurity professionals, analysts and decision-makers gathered earlier this month at ESET World 2024, a conference that showcased the company’s vision and technological advancements and featured a number of in-depth discussions on the latest trends in cybersecurity and beyond.
The topics ran the gamut, but it’s safe to say that those that resonated the most included ESET’s cutting-edge technologies, threat research, and perspectives on artificial intelligence (AI). Now let’s take a brief look at a few sessions that touched on the topic that’s on everyone’s lips these days: AI.
Back to basics
First, Juraj Malcho, Chief Technology Officer (CTO) at ESET, provided an overview of the field, offering his perspective on the key challenges and opportunities presented by AI. He didn’t stop there, however, going on to seek answers to some of the fundamental questions surrounding AI, including: “Is it as revolutionary as everyone claims?”
Current iterations of AI technology mostly come in the form of large language models (LLMs) and various digital assistants that make the technology feel very real. However, they are still quite limited, and we need to thoroughly define how we want to use the technology to strengthen our own processes, including its uses in cybersecurity.
For example, AI can simplify cyber defense by deconstructing complex attacks and reducing resource demands. In this way, it enhances the security capabilities of understaffed IT operations.
Demystifying AI
Juraj Jánošík, Director of Artificial Intelligence at ESET, and Filip Mazán, Senior Manager of Advanced Threat Detection and AI at ESET, then presented a comprehensive view of the world of AI and machine learning, exploring their roots and distinguishing characteristics.
Mr. Mazán demonstrated how these are fundamentally based on human biology, with AI networks mimicking certain aspects of how biological neurons function in order to create artificial neural networks with variable parameters. The more complex the network, the greater its predictive power, leading to the advances seen in digital assistants like Alexa and LLMs like ChatGPT or Claude.
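To make that idea concrete, here is a minimal NumPy sketch of an artificial neural network; this is purely illustrative and does not depict ESET’s (or any vendor’s) actual models. Each “neuron” computes a weighted sum of its inputs, and every weight and bias is one of the variable parameters whose count grows as the network becomes more complex:

```python
import numpy as np

rng = np.random.default_rng(42)

# Three input "signals", analogous to the stimuli a biological neuron receives.
x = rng.normal(size=3)

# Layer 1: 3 inputs -> 4 hidden units (3*4 weights + 4 biases = 16 parameters).
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
# Layer 2: 4 hidden units -> 1 output (4 weights + 1 bias = 5 parameters).
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)

hidden = np.tanh(x @ W1 + b1)   # each unit: weighted sum of inputs + activation
output = hidden @ W2 + b2       # the network's prediction

n_params = W1.size + b1.size + W2.size + b2.size
print(f"prediction: {output[0]:+.3f}, trainable parameters: {n_params}")
```

Adding layers or units multiplies the parameter count, which is what gives larger networks their greater predictive power.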
Later, Mr. Mazán pointed out that as AI models become more complex, their usefulness can diminish. As we move closer to recreating the human brain, the growing number of parameters necessitates extensive refinement, a process that requires human oversight to continually monitor and tune how the models operate.
Indeed, lighter models are sometimes better. Mr. Mazán described how ESET’s strict use of in-house AI capabilities enables faster and more accurate threat detection, meeting the need for rapid and precise responses to all manner of threats.
He also echoed Mr. Malcho in highlighting some of the limitations of large language models. These models work on the basis of prediction and involve linking meanings together, which can easily become muddled and give rise to hallucinations. In other words, the usefulness of these models only goes so far.
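A toy illustration of that point (deliberately simplified; real LLMs condition on far longer contexts, but they too predict statistically likely tokens rather than verified facts): a model that merely continues text with the likeliest next word produces fluent output with no regard for truth.

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the model learns which word tends to follow which.
corpus = ("the capital of france is paris . "
          "the capital of australia is canberra .").split()

bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Greedily pick the most frequent follower observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

# Conditioned only on the previous word "is", the model answers "paris" even
# though the prompt asked about Australia: fluent, confident, and wrong.
prompt = "the capital of australia is".split()
print(" ".join(prompt), predict_next(prompt[-1]))
```

The confident-but-false completion is the same failure mode, in miniature, as an LLM hallucination.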
Other limitations of current AI technology
Mr. Jánošík went on to address other limitations of contemporary AI:
- Explainability: Current models are made up of complex parameters, which makes their decision-making processes difficult to understand. Unlike the human brain, which operates based on causal explanations, these models operate through statistical correlations, which are not intuitive to humans.
- Transparency: High-end models are proprietary (walled gardens), without visibility into their internal functioning. This lack of transparency means there is no accountability for how these models are set up or the results they produce.
- Hallucinations: Generative AI chatbots often generate plausible-sounding but incorrect information. These models can exude great confidence while serving up false information, leading to incidents and even legal problems, such as when Air Canada’s chatbot presented false information about a discount to a passenger.
Fortunately, limits also apply to the misuse of AI technology for malicious purposes. While chatbots can easily formulate plausible-sounding messages to facilitate spearphishing or business email compromise attacks, they are not that well equipped to create dangerous malware. This limitation stems from their propensity for “hallucinations” – producing plausible but incorrect or illogical outputs – and their underlying weaknesses in generating logically connected and functional code. As a result, creating effective new malware usually requires a genuine expert to correct and refine the code, making the process more difficult than some might think.
Finally, as Mr. Jánošík pointed out, AI is just one tool among many that we need to understand and use responsibly.
The rise of the clones
During the next session, Jake Moore, Global Cybersecurity Advisor at ESET, gave a taste of what is currently possible with the right tools, from cloning RFID cards and hacking CCTV cameras to creating convincing deepfakes – and how this can put company data and finances at risk.
In particular, he showed how easy it is to compromise a company’s premises by using a well-known hacking gadget to copy employee entry cards, or to hack (with permission!) a social media account belonging to the company’s CEO. He then used a tool to clone the CEO’s likeness, both face and voice, to create a convincing deepfake video that he then posted to one of the CEO’s social media accounts.
The video – in which the would-be CEO announced a “challenge” to cycle from the UK to Australia, and which racked up more than 5,000 views – was so convincing that people started offering sponsorships. Indeed, even the company’s CFO was fooled by the video, asking the CEO about his future whereabouts. Only one person wasn’t fooled: the CEO’s 14-year-old daughter.
In just a few steps, Mr. Moore demonstrated the danger posed by the rapid spread of deepfakes. Indeed, seeing is no longer believing – businesses, and individuals themselves, must carefully scrutinize everything they come across online. And with the arrival of AI tools like Sora, capable of creating video from just a few lines of input, dangerous times could be ahead.
The final touch
The final session dedicated to the nature of AI was a panel featuring Mr. Jánošík, Mr. Mazán, and Mr. Moore, moderated by Ms. Pavlova. It started with a question about the current state of AI, with the panelists agreeing that the latest models are crammed with many parameters and need further refinement.
The discussion then moved to the immediate dangers and concerns for businesses. Mr. Moore pointed out that a significant number of people are unaware of the capabilities of AI, which bad actors can exploit. Although the panelists agreed that sophisticated AI-generated malware does not currently pose an imminent threat, other dangers, such as improved phishing email generation and deepfakes created using publicly available models, are very real.
Furthermore, as Mr. Jánošík pointed out, the biggest danger lies in the data privacy aspect of AI, given the amount of data these models receive from users. In the EU, for example, the GDPR and the AI Act have established some frameworks for data protection, but these are not enough because they are not global acts.
Mr. Moore added that businesses should ensure their data stays in-house. Enterprise versions of generative models can do the trick, avoiding the “need” to rely on (free) versions that store data on external servers, potentially putting sensitive corporate data at risk.
To address data privacy concerns, Mr. Mazán suggested that companies start from the bottom up, relying on open-source models that can handle simpler use cases, such as generating summaries. Only if these prove inadequate should companies turn to cloud-powered solutions from third parties.
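As a minimal sketch of that bottom-up approach – assuming the open-source Hugging Face transformers library and the publicly available sshleifer/distilbart-cnn-12-6 summarization checkpoint, neither of which was named at the conference – a summary can be generated entirely on local hardware so the text never leaves the company’s machines:

```python
from transformers import pipeline

# The checkpoint is downloaded once; inference then runs entirely locally,
# so the text being summarized is never sent to a third-party API.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

internal_report = (
    "The incident response team investigated unusual outbound traffic from a "
    "finance workstation. The activity was traced to a misconfigured backup "
    "agent rather than malware, and the configuration has since been corrected."
)

result = summarizer(internal_report, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Any local summarization model would do; the point is that the sensitive input stays on infrastructure the company controls.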
Mr. Jánošík concluded by saying that businesses often overlook the downsides of using AI – guidelines for the secure use of AI are indeed necessary, but even common sense goes a long way toward keeping their data safe. As Mr. Moore summed up in his answer on how AI should be regulated, there is an urgent need to raise awareness of AI’s potential, including its potential for harm. Encouraging critical thinking is crucial to ensuring safety in our increasingly AI-driven world.