In their race to do more with AI, big tech companies have moved quickly, only to roll back big advances.
Microsoft became the latest to scale back an artificial intelligence feature within a month of its announcement, following backlash.
On Thursday, Microsoft announced that it is pulling back an AI tool from its new line of computers, Copilot+ PCs. The feature will now be available only to a small group of people in its Windows Insider program, rather than rolling out broadly to Copilot+ PC users on June 18.
The AI feature, called Recall, acts like the computer’s “photographic memory”: it takes screenshots of everything the user looks at on their PC and helps them quickly find it again later using a conversational prompt.
But privacy advocates raised the alarm almost immediately over the Recall feature. They were put off by the idea that the device would take screenshots of users’ activity every few seconds.
Microsoft, for its part, said that users can disable the feature and that the images are stored only locally on the device.
“We are adjusting Recall’s release model to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security,” the company wrote in a blog post Thursday.
Microsoft did not respond to a request for comment from Business Insider sent outside of normal business hours.
Microsoft is not alone: big tech companies seem to be diving headlong into deploying AI features, then backtracking when things get complicated.
Take, for example, recent events at Google, Adobe, and OpenAI. Granted, each company gave its own reasons for pulling back, but all three had to reexamine their deployments after release.
In May, Google scaled back its AI-generated answers in search results, called AI Overviews, after the feature made some alarming errors, including advising users to put glue in their pizza sauce. Google also paused AI-generated images of people in February after its Gemini tool created images riddled with historical inaccuracies.
“We have already made more than a dozen technical updates to our systems and are committed to continuing to improve when and how we display AI Overviews,” a Google representative told BI.
Also in May, OpenAI rolled out a voice option, Sky, that uncannily resembled Scarlett Johansson, angering the actress. The ChatGPT maker said it wasn’t Johansson’s voice, apologized, and then pulled the voice from its platform.
Earlier this week, Adobe joined the club. It prompted users to accept updated “Terms of Use,” which led some people to believe its AI would be trained on their art and content. Some Adobe employees questioned how the company communicated the change, and Adobe has since delayed rolling out the updated terms.
“This got us thinking about the language we use in our terms and the opportunity we have to be more clear and address concerns raised by the community,” Adobe wrote in a blog post Monday.
Representatives for Adobe and OpenAI did not respond to BI’s requests for comment sent outside of normal business hours.