As part of its efforts to improve its search engine, Google has announced new AI-powered updates to Google Search. According to an announcement from VP of Search Elizabeth Reid, the company has introduced two major updates that make it easier for users to get answers to their queries: Circle to Search and an AI-powered multisearch experience.
She says this is part of the company’s approach to harnessing generative AI’s ability to understand natural language, making it possible to ask questions on Google Search in a more natural and intuitive way. Previous milestones include voice search and the ability to search with your camera using Lens.
Here’s a closer look at what that entails:
Circle (or highlight or scribble) to search
When something captures your interest (like these adorable dog glasses!), it can be cumbersome to stop what you’re doing and switch to another app or browser to start searching for information.
Circle to Search is a new way to search for anything on your Android phone screen without having to switch apps. All you have to do is select images, text or videos in the way that comes naturally to you (like circling, highlighting, scribbling or tapping) and find the information you need right where you are.
So now, whether you’re texting friends, scrolling through social media, or watching a video, you can search what’s on your screen the moment your curiosity strikes. As with other Google search options, ads will appear in dedicated ad spaces on the results page.
Circle to Search launches globally on select premium Android smartphones on January 31, starting with the Pixel 8, Pixel 8 Pro, and the new Galaxy S24 series.
Related: Here are 9 ways Google optimizes its products with AI
Point your camera, ask a question, get help from AI
How many times have you tried to find the perfect piece of clothing, a tutorial for recreating nail art, or even instructions on how to care for a plant someone gave you, but didn’t have the words to describe what you were looking for?
Eleven months ago, Google unveiled multisearch in Lens as a new way to search multimodally, with both images and text. With multisearch, users can ask questions about an object in front of them by taking a photo, or refine a search by color, brand, or other attributes. The feature is powered by the latest computer vision and language-understanding techniques.
Now, thanks to recent advances in generative AI, Google is making it easier to explore the world with multisearch.
Starting today, when you point your camera (or upload a photo or screenshot) and ask a question in the Google app, the new multisearch experience will show results with AI-powered insights that go beyond simple visual matches. This gives you the ability to ask more complex or nuanced questions about what you see, and to quickly find and understand key information.
For example, imagine you’re at a garage sale and come across an unfamiliar board game. There’s no box or instructions, so a few questions immediately come to mind: what is this game, and how is it played?
Here’s how to use the new multisearch feature:
Simply take a photo of the game, add your question (“How do you play this?”) and you’ll get an AI-powered overview that brings together the most relevant information from around the web. That way you can quickly find out what the game is called and how to win. And with the AI-powered overview, it’s easy to dig deeper via supporting links and get all the details.
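Google hasn’t published how multisearch works under the hood, but the photo-plus-question pattern it describes can be sketched with the publicly available Gemini API (the google-generativeai Python SDK). The model name, file name, and API key below are illustrative assumptions, not details from the announcement:

```python
# Minimal sketch of a photo-plus-question multimodal query using the
# public Gemini SDK -- an illustration of the pattern multisearch
# describes, NOT Google Search's internal implementation.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder; assumed for illustration

# A vision-capable model accepts a list mixing images and text.
model = genai.GenerativeModel("gemini-pro-vision")
photo = PIL.Image.open("board_game.jpg")  # hypothetical garage-sale photo
response = model.generate_content([photo, "How do you play this game?"])
print(response.text)  # a text answer synthesized from the image and the question
```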
AI-powered overviews in multisearch results are launching this week in English in the US for everyone, with no Search Labs sign-up required. If you live outside the United States and have opted in to SGE, you can preview the new experience in the Google app.
To get started, simply look for the Lens camera icon in the Google app for Android or iOS.
Read also: Google rolls out generative AI search experience for users in Africa
Continuing to boldly experiment with generative AI in Search
Reid explained that this week’s launch of AI-powered insights for multisearch is the result of testing that began last year to see how generative AI can make Search radically more useful. Recall that two months ago, Google rolled out its Search Generative Experience (SGE) to users in Sub-Saharan Africa as an opt-in experience in Search Labs.
With SGE, users in Africa now see an AI-powered overview of key information at the top of their Google search results, with links to explore further. For anyone who has ever been overwhelmed by the amount of information online, this will help them find answers faster.
An AI-powered snapshot provides key insights to consider and links to dig deeper.
In addition, the context will be carried over from one question to another, to help users continue their exploration more naturally. Just below the snapshot, you’ll see the option to “ask a follow-up question” or select a suggested next step.
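SGE itself exposes no public API, but the carry-the-context behavior can be sketched with the same Gemini SDK: a chat session retains prior turns, so a follow-up question is interpreted against the earlier one. The model name and the example questions below are illustrative assumptions:

```python
# Sketch of context carrying over between questions via a chat session,
# using the public Gemini SDK -- an analogy for SGE's follow-ups, not SGE itself.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; assumed for illustration
model = genai.GenerativeModel("gemini-pro")

chat = model.start_chat()  # the session object accumulates conversation history
first = chat.send_message("What is the board game mancala?")
# "it" in the follow-up is resolved from the previous turn's context:
followup = chat.send_message("How long does a typical game of it take?")
print(followup.text)
```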
Recent announcements show that Google is working to make AI useful to everyone, not just early adopters. Reid expressed the company’s commitment to continuing to experiment, to discover which applications of generative AI are most useful, and to keep bringing them to Search more broadly.
“Today’s updates will make searching even more natural and intuitive, but we’ve only scratched the surface of what’s possible. We received a lot of helpful feedback from the people who chose to participate in this experiment, and we will continue to offer SGE in Labs as a test bed for bold new ideas,” she added.
The race to AI in 2024 is off to an exciting start.