Sonita Lontoh is an award-winning Fortune 100 executive and independent public company board director.
As artificial intelligence, particularly generative AI, becomes more advanced, many companies and boards are looking to strike the right balance between exploring how best to leverage the benefits of AI and putting in place appropriate safeguards to ensure that AI is used responsibly.
AI is still in its infancy, so we don’t really know what it’s capable of or where it’s going to go. This makes it difficult for boards to create the necessary safeguards for companies to use AI responsibly. What does accountability look like? What types of problems do we want to avoid? These are the questions plaguing leaders as they attempt to use AI to its full advantage while avoiding business and ethical pitfalls.
How Businesses Are Using Generative AI
Generative AI as we know it is evolving at lightning speed, and businesses are experimenting with it in multiple ways to streamline operations, automate simple tasks and much more. Uses of generative AI are growing rapidly, with an estimated $19.4 billion spent globally on generative AI solutions in 2023, according to International Data Corporation (IDC).
According to McKinsey, most of the potential uses of generative AI fall under the "4C" framework:
• Content summarization: summarizing and interpreting data sources
• Coding: generating and testing code
• Content generation: creating content such as contracts, plans and communications
• Customer engagement: powering chatbots and collecting customer data
Potential use cases for generative AI in business include smart mirrors in retail stores; drafting presentations and reports; research, because AI can process vast amounts of data in seconds; medical research and drug discovery; product design and more. Opportunities to use AI are growing, and businesses are deploying it to increase efficiency and reduce costs while creating new opportunities for more personalized and faster customer experiences.
AI Risks Boards Need To Consider
Creating content, conducting research, and talking to customers are business-critical tasks and should not be taken lightly. The risks posed by using AI for these actions must be recognized by boards and considered when developing guidelines on how to use AI responsibly. These risks include:
AI may not be reliable for decision making. Despite its name, AI does not think like a human. AI relies heavily on data and algorithms, which may contain biases or inaccuracies and lead to unintended consequences.
For example, the data that AI learns from may be biased due to how the information is obtained or the assumptions used by the machine learning process. So while AI can help increase efficiency, it should not be used to make material decisions that affect people and society as a whole. It is reasonable to assume that most AI use cases today should focus primarily on increasing productivity and efficiency.
AI creates both cybersecurity risks and opportunities. One of the major drawbacks of AI is that hackers can use it to launch more sophisticated attacks. Another risk is alert fatigue: machine learning algorithms can generate more false positives while still missing threats that do not fit pre-existing patterns.
AI also presents great opportunities for cyber defense. In addition to enabling much faster detection and response to threats, AI can also help identify vulnerabilities and reduce risks earlier by proactively spotting new patterns, trends and information.
AI affects the workforce and society. AI has the potential to both displace and improve our workforce. This can create both risks and opportunities for businesses and workers. Which jobs or roles will be most affected by AI? Which human-machine interactions are most optimal with AI? How should leaders address the positive and negative ramifications of AI on our workforce and in society? Without careful consideration of these fundamental questions, AI can pose major risks for businesses and societies.
A Framework For Human-Centered AI Board Governance
Recognizing the risks above and taking other business and ethical factors into consideration, boards should create an AI governance framework that allows their companies to experiment with AI use cases while also putting safeguards in place to ensure that their companies practice ethical AI and comply with relevant standards, laws and regulations.
1. Integrate AI into your overall business strategy. Boards and management need to understand how AI can impact their business operations internally, their customers and partners externally, their industries in general and in particular how AI can disrupt the relevance of their economic models. Boards and management should not view AI in isolation as a technology, but rather fully integrate it into the overall way the business manages significant risks and opportunities to create long-term value.
2. Balance risks and opportunities. There is always a tension between over-indexing on risk and chasing opportunities or shiny new objects with wild abandon. Boards and management must find the sweet spot where risks and rewards are optimal based on where the company is on its AI journey. Boards tend to over-index on risk mitigation, but given the nascent and exponential nature of AI, boards need to ensure their risk management framework allows for rapid experimentation to find new ways to use AI responsibly. Businesses should not be afraid to experiment while considering ethical use cases for AI and compliance with applicable laws and regulations.
3. Create a code of conduct to ensure ethical use of AI. While it is impossible for anyone to accurately predict how AI might affect businesses, workers, customers, and society in the future, we should ensure that AI enhances rather than destroys humanity. Boards can enable AI experimentation while ensuring ethical use by creating a code of conduct that sets the rules governing the standards, responsibilities and guiding principles of their company’s approaches to AI.
Like Any Tool, AI Can Build Or Destroy
The explosion of AI around the world has changed the way we operate and has generated much excitement and concern. AI presents great potential and great risks. Businesses would do well to prepare for the evolution of their AI journey by creating ethical and human-centered guiding principles using the strategies above.