In a rapidly changing landscape, organizations of all types – public, private, commercial, nonprofit – are embarking on an exhilarating journey into the world of generative artificial intelligence (AI). Although some companies had already developed traditional AI systems (pre-generative AI), relying primarily on machine learning predictions powered by structured data, many are now navigating uncharted AI territory, and the possibilities are endless.
We’ve been fascinated by emerging technologies before, from the blockchain revolution and the allure of the metaverse to the frenzy surrounding NFTs, but generative AI is a new pinnacle of technological innovation. The broad range of applications is truly breathtaking, from automating business processes to unlocking AI for profound societal benefits.
However, amidst this great promise lies great peril. Much has been written about the risks of AI, from the possibility of discriminatory outcomes to more existential threats to humanity. The question is: who will bear the essential responsibility of protecting humanity in the age of AI?
Inaction
While several NGOs and civil society organizations are diligently analyzing AI risks, the technology's vast economic and commercial potential makes it difficult for nonprofits to restrain AI development. Policymakers are also taking steps to address this AI tidal wave, as evidenced by the European Parliament's recent adoption of the EU AI Act.
However, global AI regulation today is immature, and the prospect of heavy worldwide regulation remains uncertain.
Sometimes government leaders choose not to regulate innovative technologies. In 1997, the Clinton administration released a seminal report titled A Framework for Global Electronic Commerce, advocating industry self-regulation without undue restrictions on e-commerce. A year later, President Clinton signed the Internet Tax Freedom Act, promoting innovation by allowing the internet to flourish with limited taxation.
Similarly, the Chinese government strategically decided not to heavily regulate FinTech innovation and the rise of super-apps during a significant period of technological development.
If NGOs and governments do not take on the role of rigorous AI stewards, then we must look to businesses and their boards to rise to the occasion.
Boards play a central governance role in overseeing a wide range of organizational risks, including the complex AI risk landscape. The Delaware Chancery Court's landmark 1996 Caremark decision established a foundational (albeit minimal) legal standard for board oversight. But can we rely on the legal system to ensure strong governance of companies' AI programs? Boards of directors have considerable latitude in carrying out their oversight responsibilities: they must implement a reporting system, monitor it, and respond to any red flags that arise.
Establish ethical tolerance limits
Given the particular ethical challenges posed by AI, board accountability should transcend minimum legal requirements and encompass deep ethical accountability through pioneering AI governance. Boards can adopt ethical AI practices that protect the interests of stakeholders and humanity at large while upholding their fiduciary duty to shareholders. Ethical behavior not only improves a company's reputation and trustworthiness; it also lays the foundation for lasting success. I am encouraged that many boards go even further, promoting ethical AI actions that benefit society even without direct evidence of a positive connection to their company.
The board of directors must establish the limits of ethical tolerance within the company, setting the boundaries of what is morally acceptable in the company's AI initiatives. Ethics should serve as a common thread informing every facet of a company's AI strategy. Here are five actions boards should consider to promote ethical AI governance:
Advance the board's technological expertise: Fill an open board seat with a technologist or AI expert. Embrace AI by directly engaging with this transformative technology; experiment with generative AI tools to improve your board work. Invite AI experts to bring new perspectives.
Go beyond legal compliance: Remember that Caremark represents only the minimum standard of oversight. Go further and elevate AI governance beyond legal compliance, whether as a core element of corporate social responsibility, an expression of ESG principles, or another manifestation of a duty to society. Your company's AI applications must not only comply with positive law but also defend fundamental human rights, recognizing our duty to natural law.
Form an ethics council: Establish an ethics council composed of experts in ethics, anthropology, technology, data, law, and human rights. Leverage this multidisciplinary council to rigorously evaluate enterprise AI applications, providing a fresh perspective on ethical considerations. Diversity is the key to sound ethical analysis of AI.
Establish a technology committee or board advisory council: A recent EY study finds that 13% of S&P 500 companies have instituted some form of board-level technology committee. These committees have proven invaluable in managing technology risks and driving a technology-powered innovation and growth agenda.
Foster collaboration within the AI ecosystem: Boards must ensure that the company engages effectively within the AI ecosystem, working with industry stakeholders, policymakers, and ethicists to collectively establish ethical standards for AI that reflect societal values.
Boards are at the forefront of ethical AI governance. They are uniquely positioned to protect humanity by ensuring that AI is designed, developed, and deployed ethically and responsibly. Boards of directors hold the compass that can guide us toward a future where AI is harnessed not only for its vast potential but also with integrity. They bear the heavy responsibility of ensuring that AI is pursued not just for economic gain, but applied ethically, responsibly, and with unwavering dedication to our shared values.
As we navigate the complex waters of the AI revolution, let us not only meet our legal obligations, but transcend them. Let’s infuse ethics into every line of software code, every decision and every action. Let us cultivate a legacy of responsible innovation that serves as a beacon for generations to come and protects the dignity and rights of all. Let’s embark on this noble journey together, because the future of AI, and of course humanity, depends on the choices we make today.
Jeffrey Saviano is an AI ethicist. He holds a position at the Edmond & Lily Safra Center for Ethics at Harvard University, where he is a member of the GETTING Plurality research network and collaborates with the Harvard community to study the ethics of AI. Jeffrey is also a senior fellow and affiliated researcher at MIT Connection Science, a lecturer at Boston University School of Law, an EY emerging technology strategy and governance leader, and an AI leader at the EY Center for Board Matters. The views reflected in this article are those of the author and do not necessarily reflect those of Ernst & Young LLP or other members of the global EY organization.
The opinions expressed in comments on Fortune.com are solely the opinions of the authors and do not necessarily reflect the opinions and beliefs of Fortune.