Although the proliferation of artificial intelligence may seem unprecedented, humans have faced disruptive technological change for thousands of years and have adapted to it successfully. Until now, though, the societal consequences of those innovations have always emerged slowly enough for society to innovate its way out of them – something that is far from guaranteed with AI.
“What happens when we get to the point where the time frame for consequences to emerge is much shorter than the time it takes to innovate our way out of them?” Andrew Maynard asked the audience at a recent seminar hosted by Arizona State University’s Consortium for Science, Policy & Outcomes.
The seminar, “Responsible Artificial Intelligence: Policy Pathways to a Positive AI Future,” was presented at the ASU Barrett & O’Connor Washington Center to an in-person audience and broadcast live. In it, Maynard, a professor in ASU’s School for the Future of Innovation in Society and director of its Risk Innovation Lab, argued for a new risk management and innovation framework in response to the accelerating development of transformational AI.
Maynard, who writes regularly about AI and has testified about its ramifications before Congress, highlighted AI’s rapid iteration. Calling ChatGPT “the tip of a very large AI iceberg,” he noted that international regulators are turning their attention to much larger, trainable “foundation models,” applicable across many fields, as well as “frontier models” – foundation models with potentially disruptive capabilities.
Maynard advocated a framework for an “advanced technological transition” – one that differs from transitions spurred by past innovations but can still be navigated. Drawing on his own research, he pointed to nanotechnology as a field marked by large-scale engagement from stakeholders such as academics, technologists and philosophers, and by a responsible innovation framework built on anticipation, reflexivity, inclusion and responsiveness.
This framework-based approach differs, he noted, both from AI development to date, in which computer scientists have played an outsized role without adequately understanding the social consequences of their work, and from policymakers’ recent interest in “responsible AI,” which is defined differently from responsible innovation.
“We need a framework for thinking and guiding decisions that recognizes this complexity,” Maynard said.
Taking a risk innovation perspective, which focuses on threats to existing and future value (the things people hold important and believe in), Maynard identified a set of “orphan risks” that go unaddressed because of their complexity: social and ethical factors, the unintended consequences of emerging technologies, and organizational and systemic problems.
Additionally, Maynard highlighted the need for “agile regulation” that moves at the pace of innovation in the sector and incorporates lessons learned from ever-evolving technologies.
“We cannot prevent the emergence of transformative AI,” Maynard said. “All we can do is direct it. … (We) need to think about how we direct the inevitable toward desirable outcomes.”
To watch the seminar in full, visit the Consortium for Science, Policy & Outcomes’ website.