Director of Information at TELUS International, a global customer experience provider powered by next-generation digital solutions.
Imagine a world where generative AI (GenAI) creates personalized customer experiences based on consumers’ unique preferences, needs and emotions. Amazing, right? But what would happen if, to achieve this, companies collected, analyzed and used their customers’ personal data without their knowledge or consent? What if the results discriminated against certain groups or led to inaccurate or biased decisions? These questions are no longer hypothetical.
This is why US President Biden’s October 2023 Executive Order on Artificial Intelligence is so important. It establishes standards for the ethical and responsible development of AI that help protect society against fraud and deception. The order also challenges companies developing GenAI solutions to ensure they respect consumer privacy, promote fairness and provide transparency.
Although the actions President Biden detailed in the order have not yet become law, businesses harnessing the power of AI to improve their customer experience (CX) through personalization should proactively integrate its best practices into their AI governance frameworks. In doing so, they take steps to mitigate harm while building consumer confidence.
Maintaining Trust in the Age of GenAI
In the digital age, trust has become imperative as AI, and GenAI in particular, presents both unprecedented opportunities and new, complex challenges. Today’s GenAI-based chatbots can create convincing fake content, including text, images, videos, voice clips and “deepfakes” that appear real but can be used to spread misinformation and tarnish reputations.
Recent deepfakes that have made headlines involve financial scams and celebrity impersonations. With the upcoming US elections, there are already fears that adversaries could use GenAI to undermine a fair political process. Deepfakes are powerful tools for making disinformation campaigns more credible, compromising the reliability of visual content. Without proper oversight, these tools could erode voter confidence by spreading harmful lies and propaganda.
The large language models (LLMs) that power GenAI also raise copyright issues. These models are typically trained on content scraped from the internet, which means they often use copyrighted works without permission. Recent debates around AI-generated art and literature, for example, highlight the need for robust mechanisms to protect intellectual property.
It’s no surprise that if consumers believe a company’s chatbot is plagiarizing content, trust can erode quickly. This risk can be mitigated in several ways, including properly licensing all copyrighted training data, checking outputs for plagiarism and allowing users to report AI-generated content that violates these policies so that it can be quickly removed. As the technology advances, the risk of copyright infringement raises complex issues, requiring a proactive approach to protect artists and content creators.
Furthermore, the problem of hallucinations continues to plague GenAI platforms, whereby a model confidently manufactures content that does not correspond to reality. These hallucinations often arise from insufficient or biased training data, a lack of common sense, or prioritizing a smooth response over a truthful one. Even when they seem plausible, these fabricated answers harm consumer confidence. Ongoing monitoring and refinement are essential to reduce hallucinations and build client confidence.
A Solid Governance Framework
To ensure that GenAI systems adhere to truth and trustworthiness, companies must create a comprehensive framework for ethical AI governance. This framework should cover the entire lifecycle of GenAI, from data collection and analysis to content generation, delivery and evaluation. In addition to aligning with a company’s values, it must also follow the ethical principles and best practices outlined in the AI executive order and other relevant global standards and regulations.
Key elements of an ethical AI governance framework include:
• Privacy-by-design principles and practices that embed privacy into GenAI systems through data minimization, anonymization and other privacy-enhancing techniques (a minimal sketch follows this list).
• Investment in privacy-enhancing technologies that enable GenAI systems to analyze data and generate content without compromising customer privacy.
• Compliance with data regulations, supported by ongoing monitoring, third-party audits and reviews of data practices to detect violations.
• Reinforcement learning from human feedback (RLHF), which uses human ratings, reviews and corrections as reward signals to improve the quality, fairness and accountability of GenAI (a second sketch follows this list).
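To make the privacy-by-design item concrete, here is a minimal Python sketch of data minimization and pseudonymization, assuming a hypothetical customer-support pipeline. The field names, allow-list and salt handling are illustrative assumptions, not a production design.

```python
# Minimal sketch of privacy by design: keep only the fields a GenAI prompt
# actually needs, and pseudonymize direct identifiers before any record
# reaches the model. Field names and the allow-list are hypothetical.
import hashlib

ALLOWED_FIELDS = {"product", "issue_summary", "preferred_language"}  # data minimization


def pseudonymize(value: str, salt: str = "rotate-this-salt") -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]


def minimize_record(raw: dict) -> dict:
    """Strip the record to what the prompt needs; no name, email or address leaves the system."""
    slim = {key: value for key, value in raw.items() if key in ALLOWED_FIELDS}
    slim["customer_ref"] = pseudonymize(raw["email"])  # pseudonymous join key only
    return slim


raw_record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "address": "123 Main St",
    "product": "wireless router",
    "issue_summary": "intermittent connection drops",
    "preferred_language": "en",
}
print(minimize_record(raw_record))  # only minimized, pseudonymized data goes to the model
```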
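To illustrate the RLHF item, the second sketch trains a toy reward model on pairwise human preferences using a Bradley-Terry-style loss, which is one common way human ratings become a reward signal. The embeddings, dimensions and training loop are placeholder assumptions; in practice the inputs would come from rater comparisons of real model responses.

```python
# Toy reward model trained from pairwise human preferences (illustrative only).
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.score(features).squeeze(-1)  # one scalar reward per response


model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "response embeddings": in a real pipeline these would represent
# pairs of candidate answers where human raters preferred the first over the second.
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for _ in range(100):
    reward_gap = model(preferred) - model(rejected)
    # Pairwise loss: push human-preferred responses to score higher than rejected ones.
    loss = -torch.nn.functional.logsigmoid(reward_gap).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model can then score new responses and steer further
# fine-tuning (e.g., via PPO) toward outputs humans rate as higher quality and fairer.
```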
The Impact of Transparency on Trust
Transparency is key to developing GenAI that promotes ethical practices, builds consumer trust and strengthens brand reputation, especially when applied to CX. A recent survey of 1,000 Americans familiar with GenAI conducted by my company, TELUS International, found that nearly three in four (71%) expect companies to be transparent about how they use GenAI. By providing clear and reliable information, businesses can establish credibility, responsibility and accountability with their customers, thereby strengthening their brand reputation and trust.
Businesses should educate consumers about the presence and purpose of GenAI on their platforms and how they use it to improve the customer experience. They must also disclose how they collect, use and protect customer data, as well as the benefits and risks of data sharing. Clear, easy-to-understand privacy policies explaining customers’ rights and choices regarding their data should also be communicated. Additionally, brands should make it easy for customers to access and review their personal data and correct any inaccuracies, allowing them to have some control over how their data is collected, stored and used.
The era of GenAI offers unprecedented opportunities to create a personalized customer experience that generates loyalty and connections with consumers. By adopting proactive and responsible AI principles and diligent governance, following best practices, and working with industry experts to stay ahead of evolving standards and regulations, businesses can ensure they act thoughtfully and operate GenAI ethically, thereby building lasting consumer trust.