OpenAI's adversarial threats report should be a prelude to more robust data sharing in the future. When it comes to AI, independent researchers have begun to build abuse databases, such as the AI Incident Database and the Political Deepfakes Incidents Database, which allow researchers to compare different types of misuse and track how misuse changes over time. But abuse is often hard to detect from the outside. As AI tools become more capable and ubiquitous, it is important that policymakers considering regulation understand how the tools are being used and abused. While OpenAI's first report offered high-level summaries and select examples, expanding data-sharing relationships with researchers to provide greater visibility into violating content or behavior is an important next step.
When it comes to combating influence operations and AI abuse, internet users also have a role to play. After all, this content has an impact only if people see it, believe it, and share it further. In one of the cases OpenAI disclosed, online users called out fake accounts that were posting AI-generated text.
In our own research, we have seen Facebook user communities proactively call out AI-generated images posted by spammers and scammers, helping those who are less tech-savvy avoid falling prey to deception. A healthy dose of skepticism is increasingly useful: pausing to check whether content is real and whether people are who they claim to be, and helping friends and family members become more aware of the growing prevalence of generated content, can help social media users resist deception by propagandists and crooks alike.
The OpenAI blog post announcing the takedown put it succinctly: "Threat actors work across the internet." So do we. As we enter a new era of AI-driven influence operations, we must address shared challenges through transparency, data sharing, and collaborative vigilance if we hope to build a more resilient digital ecosystem.
Josh A. Goldstein is a research fellow at Georgetown University's Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies Into Reality.