Is it right for an AI to appeal to a user’s emotions? Yes, says Tangent’s Łukasz Mądrzak-Wecke – it just has to be done right. Advertisers, after all, have been doing it for years…
As AI technology advances, the idea that it will become more humanized in order to create deeper connections with users is becoming more prevalent. You may have noticed the increase in AI chatbots with human names, characteristics, and sometimes even faces.
Companies see this as a golden opportunity to increase engagement and ultimately increase profits. However, this path is not without ethical considerations.
In recent years, we have seen a rise in the number of virtual assistants and chatbots designed specifically to simulate human interactions. Companies are leveraging these technologies to offer services ranging from simple fashion advice to, where appropriate, emotional support.
The idea is simple: the more human-like the AI is, the stronger the bond it can create with the user. This bond can lead to increased engagement and loyalty, which in turn can generate revenue. At least, that’s the theory.
Ethical considerations
This approach raises important ethical questions. When users develop deep emotional connections with AI, just as with other humans, they can experience real harm and emotional distress if that connection is broken. So what happens to the end user if their favorite AI service is changed, rebranded, or even abandoned?
The ethical question arises because companies bear responsibility for the emotional connections their AI services create. Should companies be allowed to try to create these connections without oversight, or should regulations be put in place to protect users?
I think humanization shouldn’t be completely dismissed – after all, exploiting emotions has been a part of advertising and marketing campaigns for decades. Why shouldn’t AI and technology take the same approach?
But how does humanization happen? In most cases, there are two paths: conscious or unconscious. Conscious humanization happens when companies deliberately design AI to build deep personal connections. This is common in services like virtual coaching or therapeutic robots, where the AI gets to know the user and maintains a consistent interaction.
Unconscious humanization, on the other hand, occurs when users themselves assign human characteristics to AI. Even simple chatbots can be given names and personalities when users project their emotions and thoughts onto them. This unintentional humanization can lead to ethical issues, as users form connections that developers did not intend.
Balancing Business
It is critical that every company building AI-based solutions thinks about the ethical implications of the solutions they produce. This means releasing them responsibly, tracking their use and impacts, and then iterating accordingly. As our understanding improves, we will be better equipped to iteratively develop guidelines and rules for building AI solutions that generate value for businesses in a responsible and ethical manner.
For businesses today, the challenge is to balance the desire for increased engagement with the ethical implications of their AI designs. That means being transparent about what your AI service is supposed to do. Whether it’s providing fashion advice or emotional support, make sure users understand the scope and limitations of AI, and articulate that clearly whenever possible.
It is also important to avoid creating an overly generalized AI that users could rely on for a wide range of personal issues. An AI with narrowly focused usage goals can mitigate the risk of users forming inappropriate connections.
Companies must also comply with regulations that protect users. The European Union’s AI Act, for example, requires AI systems to identify themselves as non-human, which can help manage user expectations and prevent unwarranted emotional attachments. As mentioned above, there is no universal guideline on this yet, and any that emerges will be subject to near-constant evolution. Human oversight of your AI products will always be necessary.
Finally, companies must also ensure that users have control over their data and interactions with AI. This not only builds trust, but also aligns with ethical best practices.
Too human?
We are at a critical juncture as we watch this amazing technology take off. It is not unreasonable to imagine a future in which everyone has their own AI assistant, similar to the chatbots we know today. It would listen to us, represent our interests, help us navigate today’s complex world, and protect us from malicious behavior, including from other AIs.
As AI technology evolves, the lines between human and machine interactions will continue to blur. The goal should be to harness the power of AI to create value for users and preserve their emotional well-being. Over time, companies can gain valuable insights that will inform the design of future AI technologies.
This future requires a deep understanding of what benevolent and effective AI looks like, whether humanized or not. But we can’t get there by thinking in a vacuum. We need to bring the solutions to market. Yes, it’s important to be careful and thoughtful – but we shouldn’t shy away from the risks either.