When it comes to cybersecurity, is AI already something everyone can use?
Talk of AI and its use in the workplace often makes me wonder how many companies are actually deploying it, and what it takes to ensure it is used effectively.
Recent research, for example, found that 91% of organizations are prioritizing AI to improve their security posture and are leveraging it for proactive threat prevention. And a visit to any conference will reveal vendor booths adorned with AI terminology and promises of what it can achieve.
Last week, SC UK was invited to the Kaspersky Next conference in Athens, where this point was raised. Marco Preuss, deputy director of the company’s Global Research and Analysis Team, said AI “is for the few, not the many.”
Specifically, Preuss said that “AI promises a lot, and it is not so accessible,” because it sits mainly in the hands of the few rather than the many.
In particular, only a few data centers are running it: doing AI requires “very expensive hardware” that is owned by a handful of companies, and while there is hardware that can be run at home, at the moment it remains “a very elite selection” of technology.
Jump on the bandwagon?
We had the opportunity to ask him to expand on this point, including why free tools are used and whether there is a risk of “jumping on the bandwagon” so as not to miss out on the trend. He gave the example of horse ownership: in the past everyone owned a horse, then the car came along and nobody could imagine using a horse for transportation; horses became something owned for fun, and generally by those with the highest incomes.
“Usually the technology starts with a very small elite group and then breaks down into a classic pyramid,” he said, while emphasizing that ChatGPT belongs “to the few, not the many.”
“So small businesses will use ChatGPT, but they don’t own it, and in most cases there is even a concern that the AI won’t improve,” he said. “So who benefits the most? The small businesses using the Large Language Model (LLM), or the owner of the LLM – because it is essentially being trained for free with highly relevant, closed data. That’s what I mean by the few.”
Essentially, I took him to mean that these technologies are commandeered by larger companies, and that the value of what is produced from our requests flows to whoever gathers the largest amount of training data. At the end of the day, are we all just users of a company’s product?
Preuss said that after this question of ownership come complexity and cost: “So it’s a matter of evolution until you get to the point where it could be used by the many rather than the few.” He explained that for now the owners are few, and that the more people are involved, the better the basis for making decisions “in terms of regulation, in terms of safety, to bring the whole world on board.”
A possible direction
Also speaking at the event, AI language expert Lilian Balatsou said of the models in use: “What we see is not the end state, it’s a direction: it is shaped and developed by and for specific targets and a specific group of people.”
Those people may be “data hungry,” and they therefore determine the shape of the AI model. If instead “we think generationally and develop all types of technologies to meet the needs of others, or to provide opportunities and solutions for other use cases,” then the technology can work for us, and it will be more democratizing.
All of this concerns the use of AI, which is distinct from the training of AI, and different types of AI can produce different results depending on the training performed.
Kaspersky research from earlier this year found that AI is already used by 54% of companies, and that a third of the 560 IT security leaders surveyed plan to adopt it within two years.
Is that because AI is expensive or unreliable, or because it is built and trained by certain companies looking for a specific outcome from their AI research? Everyone who uses AI is also a product of it, and these web tools may be used by the many, but controlled by the few.
Written by
Dan Raywood is a seasoned B2B journalist with over 20 years of experience, specializing in cybersecurity for 15 years. He has extensively covered topics ranging from advanced persistent threats and nation-state hackers to major data breaches and regulatory changes. Outside of work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats and enjoying craft beers.