Eric Loeb, executive vice president of global government affairs and public policy, Salesforce.
Historians will remember 2023 as the year AI went mainstream: sophisticated applications of generative AI went from being a novelty for tech experts to an everyday consumer productivity tool. Across all industries and business functions, people are finding new and innovative ways to deploy AI tools to better serve customers, improve efficiency, drive positive change, and solve challenges of all kinds.
With so much promise, it’s easy to see why generative AI tools are among the fastest-growing apps in history. But as we know, there are also risks, and a lack of confidence remains. To ensure technology is used ethically tomorrow, we must ensure responsible innovation and deployment today.
A global push to establish governance and safeguards
To address the growing challenge of building trust in technology, interest in establishing policy frameworks governing AI and its uses has accelerated almost as fast as the technology itself. Across the world, governments, civil society and industry leaders are collaborating to develop policy frameworks that will enable their economies to benefit from the promise of AI while protecting citizens from risks.
This movement accelerated further when the UN Security Council met in July to discuss the urgent need to ensure the safety and effectiveness of AI by adopting policy frameworks focused on ethical and responsible technology. G7 countries have demonstrated their intention to cooperate on AI governance frameworks, leading to the October 30 announcement of international guiding principles on AI and a voluntary code of conduct for AI developers.
Additionally, President Biden welcomed EU and European Commission leaders in October to discuss a coordinated approach to governing AI systems. This momentum culminated in the UK AI Safety Summit at Bletchley Park, which brought together AI experts from government, industry, academia and civil society to exchange ideas on responsible and ethical AI practices. This fall, we saw the results of this collaboration in the G7 principles, the White House executive order and the UK Safety Summit.
A tailored, risk-based approach
While establishing guardrails is fundamental to trustworthy AI, a one-size-fits-all approach could be almost as harmful to society as no rules at all. To fully benefit from AI, balancing safety and innovation is essential: a heavy-handed approach to regulation can hinder innovation, disrupt healthy competition, and delay adoption of an emerging technology that consumers and businesspeople around the world are just starting to use to boost productivity.
The United States and the EU have each adopted a risk-based approach that advances trustworthy and responsible AI. The groundbreaking EU AI Act, which took a major step forward in political negotiations in December 2023, sets a global standard for the responsible, risk-based development of AI, combining innovation with strong safeguards against misuse. The EU AI Act focuses protections on high-impact applications while ensuring appropriate mitigation measures for potential risks, making the EU an important pioneer in ethical and trustworthy AI.
Responsibly shaping the future of AI
As Executive Vice President of Global Government Affairs and Public Policy at Salesforce, I understand that trust is earned, not given, and requires continued investment in responsible practices and transparency. In the rapidly evolving AI landscape, navigating the path to responsible innovation requires a multi-step approach.
Here are some of the steps technology companies should consider taking to gain trust and develop an ethical AI framework:
1. Build trust: Even as regulatory discussions progress, legal requirements should serve only as a baseline. Organizations must take responsible action before regulation requires it, and must exceed customer expectations for privacy, transparency, security and trust.
2. Protect privacy: Because AI relies on data, ensuring that data is responsibly collected and protected, backed by comprehensive privacy legislation, is essential to building trust and paving the way for further AI legislation.
3. Prioritize transparency: People need to know when they interact with AI systems and have access to information about how AI-based decisions are made.
4. Actively participate in policy discussions: Public-private collaboration is the key to effective safeguards that protect both people and innovation. Find ways to ensure you have a seat at the table, and engage a diverse group of global stakeholders to enrich these discussions.
By adopting these principles, organizations can navigate, shape and anticipate the regulatory landscape, positioning themselves as leaders in the development of responsible, safe and reliable AI.
Strong momentum in 2024 and beyond
These rapid developments demonstrate how seriously governing bodies are taking transformative technologies. Ensuring these systems are trustworthy is fundamental to operating ethically and safely.
Making policy frameworks globally interoperable is an exciting goal for 2024, as we aspire to make the development and diffusion of AI tools inclusive and globally available. I’m encouraged by the commitment we’ve seen in 2023 from leaders around the world to act quickly, collaboratively and with resolve to get this done, even though that effort will require iteration and learning as we go.
As we look to 2024 and beyond, the technology industry must continue to collaborate with governments, academia and civil society from all regions and backgrounds to build a solid foundation for responsible progress in AI.