Tech giant Google has announced upgrades to its artificial intelligence technologies, just a day after rival OpenAI announced similar changes to its offerings, as both companies attempt to dominate a rapidly emerging market: one where people can ask computer systems questions and get answers in a natural, human-like style.
This is part of an effort to make AI systems like ChatGPT not only faster, but also more comprehensive in their initial responses, so that users don't have to ask multiple follow-up questions.
On Tuesday, Google demonstrated how AI responses would be merged with some results from its influential search engine. As part of its annual developers conference, Google promised it would start using AI to provide summaries in response to search queries, with at least some of them labeled as AI-generated at the top of the results page.
Google’s AI-generated summaries, which are written in conversational language, are currently only available in the United States.
Meanwhile, OpenAI’s recently announced GPT-4o system will be capable of conversational responses with a more human-like voice.
It drew attention Monday for its ability to interact with users in natural conversation with very little delay, at least in demo mode. OpenAI researchers demonstrated new capabilities of ChatGPT’s voice assistant, including using its new visual and voice features to talk a researcher through a math equation written on a sheet of paper.
At one point, an OpenAI researcher told the chatbot that he was in a good mood because it demonstrated “how helpful and amazing you are.”
ChatGPT replied: “Oh, stop that! You’re making me blush!”
“It feels like AI from the movies,” OpenAI CEO Sam Altman wrote in a blog post. “Talking to a computer has never felt really natural to me; now it does.”
AI responses are not always correct
But researchers in the technology and artificial intelligence sector warn that as people get information from AI systems in more user-friendly ways, they also need to watch out for inaccurate or misleading answers to their queries.
And since AI systems often don’t reveal how they reached a conclusion, because companies want to protect the trade secrets behind their operation, they also tend to show less of the raw results and source data that traditional search engines display.
This means, according to Richard Lachman, that they may be more likely to provide answers that sound confident even when they are incorrect.
The associate professor of digital media at the RTA School of Media at Toronto Metropolitan University says these changes respond to what consumers demand from a search engine: a quick, definitive answer when they need information.
“We’re not necessarily looking at 10 websites; we want an answer to one question. And this can do that,” Lachman said.
However, he points out that when AI gives an answer to a question, it can be wrong.
Unlike more traditional search results, where multiple links and sources are displayed in a long list, it is very difficult to trace the source of an answer given by an AI system such as ChatGPT.
Lachman’s point is that people might find it easier to trust a response from an AI chatbot when it convincingly plays a human role, making jokes or simulating emotions that create a feeling of comfort.
“It makes you maybe more comfortable than you should be with the quality of the responses you’re getting,” he said.
Businesses see a boom in AI
Here in Canada, at least one company working in the field of artificial intelligence is excited about a more human interface for AI systems like Google’s or OpenAI’s.
“Make no mistake, we are in a competitive arms race here when it comes to generative AI, and there is a tremendous amount of capital and innovation,” said Duncan Mundell of Alberta-based AltaML.
“It just opens the door to additional capabilities that we can leverage,” he said of artificial intelligence in a general sense, mentioning the products his company creates with AI, such as software capable of predicting the movement of forest fires.
He emphasized that while the technological improvements are not revolutionary in his view, they are moving artificial intelligence in a direction he welcomes.
“What OpenAI has done with this release brings us even closer to human cognition, right?” Mundell said.
Researcher calls sentient AI ‘nonsense’
Upgrades to Google’s or OpenAI’s AI systems could remind science fiction fans of the highly conversational computer on Star Trek: The Next Generation, but a Western University researcher says he sees the new improvements as decorative, rather than a real change in how information is processed.
“A lot of the notable features of these new versions are, I guess you could say, bells and whistles,” said Luke Stark, assistant professor in the Faculty of Information and Media Studies at Western University.
“In terms of these systems being able to go beyond what they’ve been able to do so far… it’s not a big step forward,” said Stark, who called the idea that sentient artificial intelligence could exist with today’s technology “a kind of nonsense.”
Companies pushing AI innovations make it difficult to clarify “what these systems are good at and not so good at,” he said.
It’s a position echoed by Lachman, who says the lack of clarity will force users to be aware of what they read online in a new way.
“Right now, when you and I talk, I usually assume that anything that sounds like a person is a person,” he said, noting that human users tend to assume that anything that sounds like another human will share the same basic understanding of how the world works.
But even if a computer sounds like a human, it won’t have that knowledge, he says.
“There isn’t this sense of common understanding of the fundamental rules of society. But it seems like there is.”