So much for putting safety first.
Safety third
Anthropic, the AI company that presents itself as the safety-first alternative to other AI outfits like OpenAI – from which it has poached many executives – has partnered with shadowy defense contractor Palantir.
The AI company is also partnering with Amazon Web Services to offer its AI chatbot Claude to U.S. intelligence and defense agencies, an alliance that appears at odds with Anthropic’s professed commitment to putting safety first.
According to a press release, the partnership will support the U.S. military-industrial complex by “rapidly processing large amounts of complex data, improving data-driven insights, more effectively identifying patterns and trends, streamlining document review and preparation, and helping U.S. officials make more informed decisions in time-sensitive situations.”
The situation is particularly peculiar given that AI chatbots have long had a reputation for divulging sensitive information and “hallucinating” facts.
“Palantir is proud to be the first industry partner to bring Claude models into classified environments,” Palantir CTO Shyam Sankar said in a statement.
“This will dramatically improve intelligence analysis and support officials in their decision-making, streamline resource-intensive tasks, and boost operational efficiency across departments,” added Kate Earle Jensen, Anthropic’s head of sales.
Unlimited access
Anthropic technically allows its AI tools to be used to “identify covert influence or sabotage campaigns” or “provide advance warning of potential military activities,” according to its recently expanded terms of service.
Since June, the terms of use have conveniently carved out contractual exceptions for military and intelligence use, as TechCrunch points out.
The latest partnership gives Claude access to information classified “secret” under Palantir’s Impact Level 6 (IL6) accreditation, one step below “top secret” in the Department of Defense’s classification scheme, according to TechCrunch. Anything at IL6 can contain data critical to national security.
In other words, Anthropic and Palantir may not have handed the nuclear codes to the AI chatbot, but it will now have access to some spicy information.
This also puts Anthropic in ethically murky company. Case in point: earlier this year, Palantir landed a $480 million contract from the U.S. Army to build an AI-powered target identification system called Maven Smart System. The broader Maven project has already proven incredibly controversial in the tech sector.
However, it remains to be seen how a hallucination-prone AI chatbot will fit into all of this. Is Anthropic simply following the money as it prepares to raise funding at a rumored valuation of $40 billion?
It’s a disconcerting partnership that deepens the AI industry’s growing ties to the U.S. military-industrial complex, a worrying trend that should set off all kinds of alarm bells given the technology’s many inherent flaws – and even more so when lives could be at stake.
Learn more about Anthropic: Anthropic now lets Claude take control of your entire computer