The company has removed language from its usage policy that expressly prohibited the use of its technology for military purposes, The Intercept reported.
“Our goal was to create a set of universal principles that are easy to remember and apply, especially as our tools are now used globally by everyday users who can now also build GPTs,” said an OpenAI spokesperson.
“A principle such as ‘Do no harm to others’ is broad but easy to understand and relevant in many contexts.
Additionally, we specifically cited weapons and inflicting harm on others as clear examples,” the spokesperson added.
The actual consequences of the policy change are, however, unclear. According to the report, there are several “adjacent tasks” that a large language model (LLM) like ChatGPT could augment, such as writing code or processing purchases.
OpenAI’s platforms could be of great use to Army engineers looking to summarize decades of documentation on a region’s water infrastructure, TechCrunch reports. In short, OpenAI has softened its stance on military use, but it still prohibits the use of its AI for weapons development.