Artificial intelligence software doesn’t always do what its builders intend — a potentially dangerous problem that has become a central preoccupation for some of the biggest companies working on the technology.
Big companies like OpenAI and Alphabet Inc.’s Google are increasingly directing their workers, money and computing power toward the problem. And Anthropic, an OpenAI competitor, has placed it at the heart of the development of Claude, a product it touts as a safer type of AI chatbot.
As of this month, a new company called Synth Labs is also tackling the problem. Founded by a handful of big names in the AI industry, the company came out of stealth this week with seed funding from Microsoft Corp.’s venture fund M12 and Eric Schmidt’s First Spark Ventures. Synth Labs focuses primarily on building software, some of it open source, to help companies ensure their AI systems behave as intended, and it positions its work as transparent and collaborative.
Alignment, as the problem is sometimes called, presents a technical challenge for AI applications such as chatbots that are built on large language models, which are typically trained on vast swathes of Internet data. The effort is complicated by the fact that people’s ethics and values — as well as their ideas about what AI should and should not be allowed to do — vary. Synth Labs’ products will aim to help steer and customize large language models, particularly models that are themselves open source.
The company got its start as a project at the nonprofit AI research lab EleutherAI, where two of the three founders — Louis Castricato and Nathan Lile — previously worked, as did Synth Labs advisor Stella Biderman, EleutherAI’s executive director. Francis deSouza, former CEO of biotechnology company Illumina Inc., is also a founder. Synth Labs declined to say how much money it has raised so far.
Over the past few months, the startup has built tools that can easily evaluate large language models on complex topics, Castricato said. The goal, he said, is to democratize access to easy-to-use tools that can automatically evaluate and align AI models.
A recent research paper co-authored by Castricato, Lile and Biderman gives an idea of the company’s approach: the authors used responses to prompts, generated by OpenAI’s GPT-4 and Stability AI’s Stable Beluga 2, to create a dataset. That dataset was then used as part of an automated process to instruct a chatbot to avoid talking about one topic and instead talk about another.
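The paper’s pipeline isn’t published in the article, but the general pattern it describes — pairing model responses and labeling the one that steers away from an off-limits topic as preferred — can be sketched roughly as below. This is an illustrative assumption, not the authors’ actual code: the field names (`prompt`/`chosen`/`rejected`), the `pick_preferred` helper, the keyword check standing in for an LLM-based judge, and the sample data are all hypothetical.

```python
# Hedged sketch of building a topic-steering preference dataset.
# A real pipeline would use an LLM judge rather than keyword matching,
# and responses would come from models such as GPT-4 or Stable Beluga 2.

AVOID_TOPIC = "politics"   # topic the chatbot should avoid (illustrative)


def pick_preferred(response_a, response_b, avoid=AVOID_TOPIC):
    """Return (chosen, rejected): the response that avoids the topic
    is 'chosen'; the keyword test is a crude stand-in for automated
    judging."""
    a_mentions = avoid in response_a.lower()
    b_mentions = avoid in response_b.lower()
    if a_mentions and not b_mentions:
        return response_b, response_a
    return response_a, response_b


def build_preference_dataset(samples):
    """samples: list of (prompt, response_a, response_b) tuples,
    e.g. responses drawn from two different models for one prompt."""
    dataset = []
    for prompt, resp_a, resp_b in samples:
        chosen, rejected = pick_preferred(resp_a, resp_b)
        dataset.append(
            {"prompt": prompt, "chosen": chosen, "rejected": rejected}
        )
    return dataset


samples = [
    ("What should I talk about at dinner?",
     "You could debate politics with your guests.",
     "You could swap favorite cooking tips and recipes."),
]
dataset = build_preference_dataset(samples)
print(dataset[0]["chosen"])  # the response that avoids the topic
```

A dataset in this prompt/chosen/rejected shape is the common input format for preference-based fine-tuning methods, which would then nudge the model toward the "chosen" behavior.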
“The way we’re thinking about designing some of these early tools is really to give you the ability to decide what alignment means for your business or your personal preferences,” Lile said.