As artificial intelligence (AI) becomes increasingly integrated into business operations, it brings with it numerous ethical and legal challenges. Businesses must carefully manage these complexities to harness the potential of AI while protecting themselves from risk. Before rushing to implement emerging AI tools and technologies, companies should explore the ethical and legal risks associated with AI implementation, with a particular focus on the impact on customer and employee experience, especially in customer service and contact center environments.
Understand the risks
AI systems, while powerful and transformative, are not without pitfalls. The risks fall into three main areas: legal, ethical, and reputational.
- Legal risks arise from non-compliance with AI regulations and legislation.
- Ethical risks relate to the broader societal and moral implications of using AI. They often extend beyond legal compliance to include fairness, transparency, and the potential for AI to perpetuate or exacerbate existing inequalities.
- Reputational risks involve potential harm resulting from perceived or actual misuse of AI. Negative public perception can lead to a loss of customer trust and ultimately impact a company’s bottom line.
Legal risks in implementing AI
Learning and navigating the regulatory landscape should be non-negotiable for any company implementing AI. While AI technology is being adopted across all aspects of business at an unprecedented pace, the regulatory landscape is constantly evolving, with significant differences from region to region.
In Europe, the EU AI Act builds on the already comprehensive data privacy legislation set out in the GDPR. It classifies AI models and their use cases based on the risk they pose to society, and it imposes significant penalties on companies that operate “high-risk” AI systems without complying with mandatory controls, such as regular self-reporting. It also introduces outright prohibitions, including on the use of AI to infer employees’ emotions and on certain processing of biometric data.
In the United States, a more fragmented, state-by-state approach is developing. For example, New York City’s Local Law 144 mandates annual audits of AI systems used in recruiting to ensure they are free of bias. At the federal level, the Executive Order on safe, secure, and trustworthy AI and the subsequent key AI actions announced by the Biden-Harris administration set the direction. It is imperative that businesses stay up to date with regulatory developments to avoid hefty fines and legal repercussions.
In customer service, this means ensuring that AI systems used for customer interactions comply with both data privacy regulations and emerging AI laws. For example, AI chatbots must handle customer data responsibly, ensuring it is stored securely and that data subject rights, such as the right to be forgotten in the EU, can be honored.
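To make the erasure requirement concrete, here is a minimal, hypothetical sketch of honoring a “right to be forgotten” request against a chatbot’s conversation store. The in-memory store, class, and field names are illustrative assumptions, not a description of any particular product; a real system would also need to purge backups, logs, and downstream copies.

```python
# Illustrative sketch only: a toy conversation store that can honor an
# erasure ("right to be forgotten") request for a single data subject.

class ConversationStore:
    def __init__(self):
        self._records = {}  # customer_id -> list of transcript entries

    def save(self, customer_id, message):
        """Append one transcript entry for a customer."""
        self._records.setdefault(customer_id, []).append(message)

    def erase_customer(self, customer_id):
        """Delete all stored data for one data subject; return what was removed."""
        return self._records.pop(customer_id, [])

    def has_data_for(self, customer_id):
        return customer_id in self._records

store = ConversationStore()
store.save("cust-42", "I need help with my invoice.")
removed = store.erase_customer("cust-42")  # erasure request received
```

The key design point is that deletion is a first-class operation on the store, not an afterthought, so an erasure request can be fulfilled completely and verifiably.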
Ethical risks and their implications
The ethical risks of AI can be identified by considering two areas of ethical importance: harms and rights. When AI has the potential to cause, aggravate, or perpetuate harm, we must take steps to understand, remediate, or completely avoid that harm.
A key example of this type of ethical risk is the harm caused to individuals by AI systems that unfairly or mistakenly make high-consequence decisions. For example, in 2015, Amazon implemented an AI system to facilitate the initial screening of candidate resumes. Despite attempts to avoid gender discrimination by removing any mention of gender from documents, the tool unintentionally favored male applicants over female applicants due to bias in the training data. Female candidates were thus repeatedly disadvantaged by this process and suffered the harm of indirect discrimination.
Other ethical risks arise where AI could undermine human rights, or where its ubiquity highlights the need for a new category of human rights. For example, by prohibiting certain AI-driven biometric processing in the workplace, the EU AI Act seeks to address the ethical risk of a person’s right to privacy being compromised by AI.
To mitigate these risks, companies should consider adopting or expanding comprehensive ethical frameworks. These frameworks should include:
- Bias detection and mitigation: Implement robust methods to detect and mitigate bias in training data and AI algorithms. This may involve regular audits and the inclusion of various datasets to train the AI systems.
- Transparency and explainability: Ensure AI systems are transparent to avoid potential deception, with decision-making processes that can be explained. Customers and employees should be able to tell when AI is being used, understand how AI decisions are made, and have a means to challenge or appeal those decisions.
- Fairness and equity: Implement measures to ensure that the benefits of AI are distributed equitably among all stakeholders. In customer service, for example, AI should improve the experience for all customers, regardless of their background or demographics.
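The bias-detection practice above can be sketched with a simple audit: compare selection rates across demographic groups and flag large gaps. This is a minimal, hypothetical illustration using the common “four-fifths rule” heuristic; the group labels, data, and threshold are assumptions for the example, not a prescribed audit methodology.

```python
# Illustrative bias audit sketch: selection rates per group and the
# disparate-impact ratio (four-fifths rule heuristic).

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Toy screening outcomes for two hypothetical groups.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
ratio = disparate_impact(rates)
flagged = ratio < 0.8  # common heuristic: ratios below 0.8 warrant review
```

Running such a check regularly, on real decision logs rather than toy data, is one concrete way to operationalize the “regular audits” the framework calls for.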
Reputation risks and proactive management
Reputational risks are closely linked to legal and ethical risks. Companies that fail to adequately address these issues can suffer significant reputational damage, often with tangible negative impacts on their business. For example, a data breach involving AI systems can erode customer trust, trigger public backlash, and ultimately result in lost customer loyalty and sales.
To manage reputational risks, Avaya believes that businesses should:
- Adopt responsible AI practices: Adhere to best practices and guidelines for AI implementation. This includes being transparent about how AI is used and ensuring that it complies with ethical standards.
- Communicate clearly with stakeholders: Keep customers and employees informed about how AI systems are used and what measures are in place to protect their interests. This level of transparency builds trust and often mitigates potential negative reactions.
- Implement a solid governance framework: Establish an AI governance program to oversee AI implementation and ensure compliance with ethical and legal standards. This program should include representatives from various business units and have clear processes for monitoring regulatory guidelines and evaluating AI projects. To fulfill this role at Avaya, we created an executive-sponsored AI Enablement Committee.
The ethical and legal risks associated with implementing AI are significant, but manageable with the appropriate strategies and frameworks. By understanding these risks and taking proactive steps, businesses can harness the power of AI to improve customer and employee experiences while protecting their business from potential pitfalls.
To learn more about Avaya’s AI capabilities across its solutions portfolio, click here.