Artificial intelligence (AI) technologies are now widely used by organizations. In fact, 72% of organizations surveyed by the consulting firm McKinsey at the start of the year had adopted AI in at least one business function. Additionally, 65% of respondents indicated that their organizations were regularly using generative AI (genAI) technologies, while three-quarters predicted that genAI would drive significant or disruptive change in their industries in the coming years.
AI can be applied to many business use cases that improve efficiency, productivity, and competitiveness, from automating routine tasks to performing sophisticated analysis of large data sets. At the same time, however, the technology poses significant ethical risks related to bias, intellectual property, and privacy. So how can leaders help their organizations manage the ethical risks associated with AI?
1. Exercise sound human judgment
“Developing and deploying new technologies ethically and responsibly will depend on human judgment,” says Rob Hayward, chief strategy officer at Main, an organizational ethics consultancy. “AI is not simply good or bad, but each new AI solution will involve thousands of micro-decisions on the ground throughout the innovation lifecycle. These decisions will depend not only on legal and regulatory parameters, but also on individual and collective judgment about what is the right thing to do.”
Hayward says that exercising informed judgment will require people at all levels of the organization, from designers and engineers to marketers and product managers, to “identify and reflect on the ethical challenges presented by new technologies, and understand their decisions through the lens of the impact they will have on the world.”
The most important thing leaders can do is engage their employees in an open and honest dialogue about how AI can help or hinder the organization’s ethical aspirations and commitments, Hayward notes. To foster ethical AI, he also argues that leaders must strengthen the organization’s systems, policies, and governance mechanisms.
2. Promote a culture of responsible innovation
Culture is key to success in almost every organizational endeavor. So with AI, fostering a culture of responsible innovation is critical, says Nell Watson, an AI expert, ethicist, and author of Taming the Machine: Ethically Harnessing the Power of AI. “Conduct regular audits to detect bias and unintended consequences,” she says, “especially in high-stakes areas like hiring and performance reviews. Prioritize data privacy and security with robust access protections and explicit consent protocols.”
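For teams wondering what such an audit might look like in practice, here is a minimal sketch of one widely used screening check, the “four-fifths rule,” which flags any group whose selection rate falls below 80% of the highest group’s rate. The data, group labels, and threshold are illustrative assumptions for a hypothetical hiring pipeline, not a method prescribed by Watson.

```python
# Minimal sketch of a "four-fifths rule" adverse-impact check on hiring data.
# All data, group labels, and the 0.8 threshold are hypothetical/illustrative.
from collections import defaultdict

def selection_rates(records):
    """Compute the hiring selection rate per group from (group, hired) records."""
    totals, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        totals[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` of the best group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

if __name__ == "__main__":
    # Hypothetical applicant records: (demographic group, hired?)
    applicants = [("A", True), ("A", True), ("A", False),
                  ("B", True), ("B", False), ("B", False), ("B", False)]
    rates = selection_rates(applicants)
    print("Selection rates:", rates)
    print("Groups flagged for review:", four_fifths_check(rates))
```

A failed check like this is a signal to investigate the process, not proof of discrimination; regular audits of this kind are one concrete way to act on Watson’s advice in high-stakes areas such as hiring.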
Watson recommends that leaders establish clear oversight of AI decisions, ensure accountability, and preserve the right to challenge algorithmic outcomes. They should also consider the long-term implications of AI deployment, including potential job losses, and proactively invest in reskilling initiatives to future-proof the workforce.
“Remember that ethical AI is a journey, not a destination,” Watson says. “Encourage open dialogue with stakeholders, including employees and the public, to address reasonable concerns and build trust. By balancing the effectiveness of innovation with ethical considerations, leaders can harness AI’s incredible potential without causing scandal or imposing unfair burdens on others.”
3. Have a valid business case for using AI
AI is just one technology among many that make up the fourth industrial revolution, or Industry 4.0. Other technologies in the mix include advanced analytics, blockchain, and cloud computing. “These emerging digital technologies all involve complex tradeoffs around ethics, sustainability and ‘technology for good,’” says Richard Markoff, professor of supply chain management at ESCP Business School in Paris and co-author of The Digital Supply Chain Challenge.
Markoff says that, as with other technologies, any deployment of AI must be driven by real business motivations, have a solid business case, and be subject to “careful implementation with deep commitment from the top.” Citing the example of driverless vehicles, he says that while much of the discussion has focused on passenger cars and taxis, the most likely application in the near term may be “driverless trucks moving freight in most companies’ supply chains.”
4. See AI as a force for good
The controversy surrounding AI is so great that it’s easy to forget that the technology is there to help, not take over. “AI can handle the mundane tasks so you can focus on what really matters,” says Chris Griffiths, co-author of The Focus Fix: Finding Clarity, Creativity, and Resilience in an Overwhelming World. “By handing over repetitive tasks to AI, you free up your team’s brain capacity for strategic and creative thinking.”
Griffiths believes we should view AI as our ally, using it to lighten our cognitive load while ensuring our approach is ethical. “In this way, not only do we improve our productivity, but we also find more clarity, creativity and joy in our daily work,” he says.
Training employees on the ethical use of AI is critical. “It’s not just about teaching people how to push the right buttons or use the right AI model,” Griffiths says. “It’s about understanding how to harness the full potential of AI in an ethical way. Leaders need to create an environment where teams see AI as a tool for good, a way to increase productivity without sacrificing mental well-being.”