(DT) — A technology researcher wrote this week that artificial intelligence chatbots are invading online communities intended for human connections.
And some of the early results have been shocking, she said.
Casey Fiesler, a professor of information science at the University of Colorado Boulder, said companies should take more time to determine whether AI is actually useful before deploying it on a growing number of platforms.
“Right now, many companies are using generative AI like a hammer, and as a result, everything looks like a nail,” she wrote in an article for The Conversation.
Fiesler focused in particular on Meta's integration of its AI chatbot into social media.
Meta has integrated AI response systems into Facebook, Instagram, WhatsApp and Messenger.
The company touts that Meta AI is built on a large, powerful language model and that people can use it “in feeds, chats, search and more in our apps to get things done and access information in real time, without having to exit the application you are using.”
But Fiesler cited recent examples of Facebook chatbots impersonating other human users.
In one example, reported by The Associated Press and others, Fiesler said an AI chatbot told a mother seeking advice in a Facebook group that it, too, had a gifted and disabled child.
In another case, a Facebook chatbot offered nonexistent items to a user, telling someone in a Facebook group it had a “gently used” camera and an “almost new” air conditioner to give away.
“The concerns I already had about the role of AI in online communities were simply confirmed by the example of the parenting Facebook group,” she said via email on Tuesday. “I hope this negative attention will encourage Meta and others to be much more careful about how chatbots are deployed in the future, but if they don’t, I think we’ll see many more examples like this.”
Andrew Selepak, a social media expert who teaches at the University of Florida, said there is nothing inherently wrong with chatbots on social media. They could be useful for answering simple, factual user questions for a number of brands and at any time of the day.
What are your hours?
When is your next sale?
But Selepak said AI posing as a human with compassionate responses in a Facebook group is a different story.
“In many ways it almost seems wrong, because we as humans want an empathetic response,” he said. “We want to connect with others, and that’s part of who we are as a species. And that doesn’t come from AI, especially AI posing as a human.”
The Facebook moms group chatbot, for example, reportedly answered: “I have a child who is also a 2nd grader (twice exceptional) and was a part of the NYC G&T program. We have had a positive experience with the citywide program, particularly the Anderson School program.”
Selepak said such AI responses erode users’ trust in the legitimacy of any post.
And they go against the original intent of social media, he said.
“If we look at what social media — you know Jack Dorsey, Mark Zuckerberg, the early days of social media — it was about human connection,” Selepak said. “And the AI is not a human. AI does not provide human connection.”
Meta says its AI chatbot will respond in a group when someone tags it (@MetaAI), or after a user’s post goes unanswered for an hour.
Selepak said it’s about generating engagement, not connections.
“It’s like, if we let you know someone responded to your post, you’ll go back and look at it,” he said.
Anton Dahbura, an AI expert and co-director of the Johns Hopkins Institute for Assured Autonomy, said there is a “mad race” to deploy AI-based tools, often without considering the well-being of users or customers.
Even a large tech company like Meta can fall victim to the lure of AI, he said.
“It seems appealing that AI can be used as an enabler” of engagement, Dahbura said.
But companies are skipping steps in the process of determining whether people want this solution.
“We kind of jump right into it without really understanding all the implications,” Dahbura said.
Facebook group administrators can disable the chatbot functionality, Meta said.