The current debate over whether open or closed advanced AI models are safer or better is a distraction. Rather than focusing on one business model over another, we need to adopt a more holistic definition of what open AI means. This means shifting the debate to focus on the need for open science, transparency and fairness if we are to build AI that works for and in the public interest.
Open science is the foundation of technological progress. We need more ideas, and more diverse ideas, that are more widely available, not fewer. The organization I lead, Partnership on AI, is itself a mission-driven open innovation experiment, bringing together academics, civil society, industry partners and policymakers to work on one of the world’s most difficult problems: ensuring that the benefits of technology reach the many, not the few.
In the debate over open models, we cannot forget the influential upstream roles played by public funding of science and the open publication of academic research.
National science and innovation policy is essential to an open ecosystem. In her book The Entrepreneurial State, economist Mariana Mazzucato notes that public funding of research has planted some of the intellectual property seeds that have grown into U.S.-based technology companies. From the internet to the iPhone to Google’s AdWords algorithm, much of today’s AI technology received a boost from early government funding for novel and applied research.
Likewise, the open publication of research, peer-reviewed with ethics review, is crucial to scientific progress. ChatGPT, for example, would not have been possible without access to researchers’ openly published work on transformer models. It is worrying to read, as reported in Stanford’s AI Index, that the number of AI PhD graduates taking jobs in academia has declined over the past decade, while the number going into industry has risen, more than doubling in 2021.
It’s also important to remember that open does not mean transparent. And while transparency is not an end in itself, it is essential to accountability.
Transparency requires timely disclosure, clear communication to relevant audiences, and explicit documentation standards. As PAI’s Guidance for Safe Foundation Model Deployment shows, actions taken throughout a model’s lifecycle can enable greater external scrutiny and auditability while protecting competitiveness. This includes transparency regarding the types of training data, testing and evaluations, incident reporting, labor sourcing, human rights due diligence and environmental impacts. Developing documentation and disclosure standards is essential to ensuring the safety and accountability of advanced AI.
Finally, as our research has shown, it is easy to recognize the need to be open and to create space for a diversity of perspectives in charting the future of AI – but it is much harder to actually do so. It is true that with fewer barriers to entry, an open ecosystem includes more players from backgrounds not traditionally seen in Silicon Valley. It is also true that instead of further concentrating power and wealth, an open ecosystem paves the way for more actors to share in the economic benefits of AI.
But we must do more than just prepare the ground.
We must invest to ensure that communities that are disproportionately impacted by algorithmic harm, as well as those from historically marginalized groups, are able to fully participate in the development and deployment of AI that works for them while protecting their data and their privacy. This means focusing on skills and education, but also rethinking who develops AI systems and how they are evaluated. Today, through private and public sandboxes and labs, citizen-led AI innovations are being tested around the world.
Ensuring safety is not about choosing between open and closed models. Rather, it is about building national systems of open research and innovation that advance a resilient field of scientific innovation and integrity. It’s about creating space for a competitive marketplace of ideas to advance prosperity. It’s about ensuring that policymakers and the public have visibility into the development of these new technologies so they can better interrogate their possibilities and their dangers. It’s about recognizing that clear rules of the road help us all move faster and more safely. Most importantly, if AI is to deliver on its promise, we must find sustainable, respectful and effective ways to bring new and different voices into the AI conversation.
Rebecca Finlay is the CEO of Partnership on AI.
The opinions expressed in commentary pieces on Fortune.com are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.