The promises and dangers of advanced artificial intelligence technologies were on display this week at a conclave organized by the Pentagon to examine future military uses of AI. Government and industry officials discussed how tools such as large language models, or LLMs, could help maintain the U.S. government’s strategic lead over rivals, particularly China.
In addition to OpenAI, Amazon and Microsoft were among the companies that presented their technologies.
Not all of the issues raised were positive. Some speakers called for caution in deploying systems that researchers are still working to fully understand.
“There is imminent concern about potential catastrophic accidents due to AI malfunction, and a risk of significant damage from adversarial attacks targeting AI,” South Korean Army Lt. Col. Kangmin Kim said at the symposium. “Therefore, it is essential that we meticulously evaluate AI weapon systems from the development stage.”
He told Pentagon officials they needed to address the issue of “liability for accidents.”
Craig Martell, head of the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO), told reporters Thursday he was aware of these concerns.
“I would say we’re moving too fast if we’re shipping products that we don’t know how to evaluate,” he said. “I don’t think we should ship things that we don’t know how to evaluate.”
Although LLMs like ChatGPT are known to the public as chatbots, industry experts say chat likely won’t be how the military would use them. They are more likely to be used to complete tasks that would take too long or be too complicated for humans, which means they would likely be operated by skilled practitioners using them to harness powerful computers.
“Chat is a dead end,” said Shyam Sankar, chief technology officer at Palantir Technologies, a Pentagon contractor. “Instead, we are reimagining LLMs and prompts as being for developers, not end users. … It even changes why you would use them.”
In the background of the symposium loomed the United States’ technology race against China, which increasingly echoes the Cold War. The United States remains firmly ahead in AI, researchers say, with Washington having hampered Beijing’s progress through a series of sanctions. But U.S. officials fear that China has already achieved sufficient mastery of AI to bolster its military and intelligence-gathering capabilities.
Pentagon leaders were reluctant to discuss China’s AI capabilities when asked about them several times by audience members this week, but some of the industry experts invited to speak were willing to delve into the issue.
Alexandr Wang, CEO of San Francisco-based Scale AI, which works with the Pentagon on AI, said Thursday that China was far behind the United States in LLMs just a few years ago but has closed much of that gap through billions of dollars of investment. He said the United States appears poised to stay ahead unless it makes unforced errors, such as underinvesting in AI applications or deploying LLMs in the wrong scenarios.
“This is an area where we, the United States, should win,” Wang said. “If we try to use technology in scenarios where it is not suitable, we will fail. We’re going to shoot ourselves in the foot.”
Some researchers have warned against the temptation to commercialize emerging AI applications before they are ready, simply for fear of China catching up.
“What we are seeing is concern about falling behind. It’s the same dynamic that drove the development of nuclear weapons and later the hydrogen bomb,” said Jon Wolfsthal, director of global risk at the Federation of American Scientists, who did not attend the symposium. “Perhaps these dynamics are inevitable, but we are not – neither in government nor in the AI development community – sufficiently aware of these risks, and we do not take them into account in decisions about how far to integrate these new capabilities into some of our most sensitive systems.”
Rachael Martin, director of the Pentagon’s Maven program, which analyzes drone surveillance video, high-resolution satellite images and other visual information, said the program’s experts are turning to LLMs to help them sift through “millions, even billions” of units of video and imagery – “a scale which, in my opinion, is probably unprecedented in the public sector.” The Maven program is managed by the National Geospatial-Intelligence Agency and the CDAO.
Martin said it remained unclear whether commercial LLMs, trained on public Internet data, would be best suited to Maven’s work.
“There’s a big difference between cat photos on the Internet and satellite images,” she said. “We don’t know how useful models trained on these types of Internet images will be to us.”
Interest was particularly high for Knight’s presentation on ChatGPT. OpenAI removed restrictions against military applications from its usage policy last month, and the company has begun working with the U.S. Department of Defense’s Defense Advanced Research Projects Agency, or DARPA.
Knight said LLMs were well suited to conducting sophisticated research in multiple languages, identifying source code vulnerabilities and performing needle-in-a-haystack searches that were too laborious for humans. “Language models don’t get tired,” he said. “They could do this all day.”
Knight also said LLMs could be useful for “disinformation actions” by generating sock puppets, or fake social media accounts, filled with a “kind of baseball card biography of a person.” He noted that this is a time-consuming task when done by humans.
“Once you have sock puppets, you can simulate them arguing,” Knight said, showing a mock-up of right-wing and left-wing fake personas debating.
U.S. Navy Capt. M. Xavier Lugo, head of the CDAO’s Generative AI Working Group, said on stage that the Pentagon would not use a company’s LLM against its will.
“If someone doesn’t want their foundation model used by the DoD, then it won’t be used,” Lugo said.
The office chairing this week’s symposium, the CDAO, was formed in June 2022 when the Pentagon merged four data analysis and AI-related units. Margaret Palmieri, deputy director of the CDAO, said centralizing AI resources in a single office reflected the Pentagon’s interest in not only experimenting with these technologies, but also deploying them at scale.
“We’re looking at the mission from a different perspective, and that goal is one of scale,” she said.