The rapid development of AI has ushered in an era of unprecedented technological capability for businesses around the world. Today, AI systems are augmenting and automating decision-making in areas ranging from healthcare to education to marketing. But while this AI revolution holds incredible potential to drive innovation and improve lives, the dramatic increase in the use of this technology also raises important ethical challenges.
As AI algorithms gain increasing autonomy and authority to make judgments and choices that impact human lives, it is essential to ensure that these systems operate reliably and align with moral principles. Just like the humans who train them, large language models can exhibit biases and make costly mistakes if they are not equipped with the right values and governance.
Additionally, integrating ethics into AI initiatives is not only a moral imperative but also a business one. Ethical AI practices help businesses avoid reputational damage, legal issues and financial losses. Consumers are increasingly aware of and concerned about how companies use AI, with many preferring to engage with companies that demonstrate a commitment to ethical practices. With a Salesforce study showing that three-quarters of respondents worry about the unethical use of technology, organizations must act now to ensure AI is used safely.
Ensuring ethical and responsible use of AI
For businesses and society to take full advantage of the opportunities of AI, appropriate safeguards must be put in place. Highlighting the continued importance of integrating ethics into the future development of AI, the European Parliament approved the world’s first comprehensive framework for regulating AI in March 2024.
The EU AI Act, the world’s first comprehensive legislation on AI technology, aims to ensure that AI development remains consistent with the protection of fundamental human rights. It places the EU at the forefront of global efforts to address the risks associated with AI in a rapidly evolving digital landscape.
With more such regulations coming – and similar laws planned in the US, UK and China – Douglas Dick, head of emerging technology risk at KPMG, explains why organizations need to act now to ensure responsible and ethical use of AI.
While it’s important to keep regulations in mind, he says organizations should already be striving to use technology ethically. “This involves putting in place appropriate governance and controls to use emerging technologies and mitigate potential risks,” he explains. “Failing to implement effective governance and control frameworks from the outset can have significant reputational, financial and operational consequences, even in the early stages of AI development.”
Organizations should also be aware that any technology they develop or use will need to be compliant when the regulations come into force. Otherwise, the investment could prove wasted and the company risks scrutiny from regulators, as well as from customers and the media.
“Businesses are thinking about how AI can complement and enrich the customer journey and experience, increasing empathy and insight, while removing mundane day-to-day work from people. Therefore, it will have a greater ethical impact on employees and on the society these organizations serve.”
Building a team dedicated to AI ethics and governance
As AI becomes more ingrained in business operations, creating teams dedicated to AI ethics and governance becomes increasingly important. Although Google, Twitch and Microsoft are among the tech companies that have already reduced their ethical AI teams, these teams play a crucial role in guiding the ethical use of AI and ensuring that AI practices meet both ethical standards and regulatory requirements.
“Hiring a team dedicated to AI ethics and governance will be a challenge due to the general lack of AI skills; however, organizations would benefit greatly from including an AI ethicist and upskilling colleagues in the Three Lines of Defense risk governance framework from the outset,” he says.
“The role of the team should be to continuously monitor and guide AI practices in accordance with regulatory requirements, define and implement ethical principles, review the organization’s risk management strategies to consider potential issues related to data and AI, and strengthen privacy impact assessments. As technology evolves, it’s important to regularly review the team’s responsibilities.”
Cultivating an internal culture that embraces ethical AI practices
With KPMG’s 2023 CEO Outlook survey revealing that CEOs around the world cite ethical challenges as their top concern when implementing generative AI, it is also critical to build a culture that values ethical AI within an organization. As AI systems become more complex and their decisions more impactful, it is important that every employee understands and considers ethical aspects such as fairness, transparency and privacy in their work with AI.
“Educate employees on how their jobs might change to alleviate their anxieties, and start now,” advises Douglas. “Talking about technology as your ‘new AI colleague’ can help dispel the myth that AI will replace humans in certain roles.
“If you have the resources, hiring a dedicated person or team to train AI models and monitor for bias is extremely valuable. Alternatively, have critical models independently assessed if there is a risk that their output will have a negative impact on the public.”