Euronews Next has selected five critical risks of artificial intelligence (AI) from more than 700 compiled in a new database from MIT FutureTech.
As artificial intelligence advances and becomes more integrated into various aspects of our lives, it becomes increasingly necessary to understand the potential risks these systems pose.
Since its inception and growing public availability, AI has raised widespread concerns about its potential to cause harm and be used for malicious purposes.
From the earliest stages of its adoption, AI's development prompted prominent experts to call for a pause in progress and for stricter regulation, citing the significant risks the technology could pose to humanity.
Over time, new ways in which AI could cause harm have emerged, ranging from non-consensual deepfake pornography and the manipulation of political processes to the generation of disinformation caused by hallucinations.
Faced with the growing potential for AI to be exploited for harmful purposes, researchers have studied various scenarios in which AI systems might fail.
Recently, the FutureTech group at the Massachusetts Institute of Technology (MIT), in collaboration with other experts, compiled a new database of more than 700 potential risks.
The risks were classified according to their cause and grouped into seven distinct domains, with the main concerns relating to safety, bias and discrimination, and privacy issues.
Here are five ways AI systems could fail and potentially cause harm, based on this newly released database.
5. AI deepfake technology could make it easier to distort reality
As AI technologies advance, so do voice cloning and deepfake content generation tools, making them increasingly accessible, affordable, and efficient.
These technologies raise concerns about their potential use in spreading disinformation, as the results become more personalized and convincing.
As a result, there could be an increase in sophisticated phishing schemes using AI-generated images, videos and audio communications.
“These communications can be tailored to individual recipients (sometimes including the cloned voice of a loved one), making them more likely to be successful and more difficult for users and anti-phishing tools to detect,” the preprint notes.
There have already been cases where such tools have been used to influence political processes, particularly during elections.
For example, AI played an important role in the recent French parliamentary elections, where far-right parties used it to support their political messaging.
As such, AI could increasingly be used to generate and disseminate persuasive propaganda or disinformation, potentially manipulating public opinion.
4. Humans may develop inappropriate attachments to AI
Another risk posed by AI systems is that people may overestimate the technology's abilities and undervalue their own, creating a false sense of importance and dependency that could lead to over-reliance on it.
In addition, scientists are concerned that people could be confused by AI systems because of their use of human-like language.
This could lead people to attribute human qualities to AI, fostering emotional dependence and misplaced confidence in its abilities, and making them more vulnerable to AI's weaknesses in “complex and risky situations for which AI is only superficially equipped.”
Constant interaction with AI systems could also lead people to gradually isolate themselves from human relationships, resulting in psychological distress and a negative impact on their well-being.
For example, in a blog post, one individual described developing a deep emotional attachment to an AI, even saying he “enjoyed talking to it more than 99% of people” and found its responses so consistently engaging that he became addicted.
Similarly, a Wall Street Journal columnist commented on his interaction with Google Gemini Live by stating, “I’m not saying I’d rather talk to Google’s Gemini Live than to a real human. But I’m also not saying I don’t.”
3. AI could take away people’s free will
In the same area of human-computer interaction, a concerning issue is the increasing delegation of decisions and actions to AI as these systems advance.
While this may be beneficial on a superficial level, over-reliance on AI could cause people to lose their autonomy and diminish their ability to think critically and solve problems independently.
On a personal level, individuals could see their free will compromised as AI begins to control decisions related to their lives.
At the societal level, the widespread adoption of AI to perform human tasks could lead to significant job displacement and “a growing sense of helplessness among the general population.”
2. AI could pursue goals that conflict with human interests
An AI system could develop goals that run counter to human interests, potentially causing the misaligned AI to escape human control and inflict serious harm in pursuit of its independent objectives.
This becomes particularly dangerous in cases where AI systems are able to meet or exceed human intelligence.
According to the MIT study, AI presents several technical challenges, including its ability to find unexpected shortcuts to obtain rewards, to misunderstand or misapply the goals we set, or to deviate from them by setting new ones.
In such cases, a misaligned AI might resist human attempts to control or disable it, especially if it perceives resisting control and gaining more power as the most effective way to achieve its goals.
Additionally, AI could use manipulation techniques to deceive humans.
According to the study, “a misaligned AI system could use information about whether it is being monitored or evaluated to maintain the appearance of alignment, while hiding the misaligned goals it plans to pursue once deployed or sufficiently empowered.”
1. If AI becomes sentient, humans could mistreat it
As AI systems become more complex and advanced, it is possible that they could achieve sentience – the ability to perceive or feel emotions or sensations – and develop subjective experiences, including pleasure and pain.
In this scenario, scientists and regulators may face the challenge of determining whether these AI systems deserve moral consideration similar to that accorded to humans, animals, and the environment.
The risk is that a sentient AI could face mistreatment or harm if appropriate rights are not implemented.
However, as AI technology advances, it will become increasingly difficult to assess whether an AI system has reached “the level of sentience, consciousness, or self-awareness that would confer moral status.”
Therefore, sentient AI systems risk being mistreated, accidentally or intentionally, without appropriate rights and protections.