Generative AI (Gen AI) is not just a buzzword; it's a game changer. With its immense potential, it is reshaping the way we interact, work, and understand our world.
From drug discovery to software development, from knowledge retrieval to creative arts, this technology can achieve remarkable feats. With this in mind, it’s no surprise that generative AI is expected to increase global productivity by billions of dollars.
According to a KPMG report, 70% of CEOs agree that generative AI remains at the top of their priority list, with most (52%) expecting to see a return on investment in three to five years.
But despite the desire to continue investing, ethical challenges remain among the main risks of implementing generative AI.
“When it comes to generative AI, CEOs are stuck between a rock and a hard place; they are eager to reap the benefits of the technology, but regulatory and security concerns prevent them from getting the most out of it,” said Ian West, head of KPMG’s UK TMT practice.
Ethical Concerns and Challenges Related to Generative AI
As Kunal Purohit, director of digital services at Tech Mahindra, explains, generative AI solutions and offerings are reshaping operational, functional, and strategic landscapes across industries. However, generative AI is relatively new and largely unregulated, leaving the door open to misuse and raising several ethical issues.
“There are many ethical concerns surrounding generative AI today,” he says. “These concerns include, among others, issues related to copyright or stolen data, hallucinations, inaccuracies, biases in training data, cybersecurity vulnerabilities and environmental considerations. Regarding the use of generative AI, issues such as copyright, data protection and cyber vulnerability are complex. Companies will need to have the right governance mechanisms in place, both from a system and a process perspective.”
With 3.5 quintillion bytes of data generated daily, apprehensions often arise about using AI models that are heavily dependent on user data. “Data privacy and security concerns are emerging, particularly in industries such as finance and healthcare. Personal and corporate data can inadvertently end up in generative AI training algorithms, exposing users and organizations to potential theft, loss, and privacy violations.”
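One common mitigation for the privacy risk described above is to scrub obvious personal identifiers from text before it reaches a model prompt, log, or training set. The sketch below is purely illustrative: it assumes regex-detectable patterns and covers only email addresses and US-style phone numbers, whereas production systems typically rely on dedicated PII-detection services with far broader coverage.

```python
import re

# Illustrative, assumption-laden sketch: these patterns cover only email
# addresses and US-style phone numbers; real PII detection needs much more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with typed placeholders before the text
    is logged, sent to a model, or added to a training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# → Contact [EMAIL] or [PHONE].
```

Redacting at the ingestion boundary, rather than trusting every downstream consumer, is what keeps personal data from "inadvertently ending up" in training pipelines in the first place.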
The phenomenon of “hallucinations” in AI, where models provide unsubstantiated or incorrect answers, also poses a unique challenge. “Additionally, advanced training in AI-based generative tools allows them to convincingly manipulate humans through phishing attacks, introducing an unpredictable element into an already volatile cybersecurity landscape.”
Bias in training data and the substantial energy consumption of AI models are other ethical considerations that deserve attention. “This becomes an important ethical concern when AI is used in decision-making processes such as hiring, lending, and criminal justice,” says Purohit. “Additionally, generative AI models consume large amounts of energy both during training and when processing user queries. As these models become more and more sophisticated, their environmental impact is only likely to increase unless strict regulations are enforced.”
Ethical Frameworks and Guidelines Are Essential for Generative AI
Purohit highlights the need for greater focus on accountability, ethics, and misinformation detection in generative AI. Misuse of generative AI can lead to criminal and fraudulent activities, potentially causing social unrest, he says.
“Ecosystem players must play a central role in AI governance to ensure its responsible use,” says Purohit. “Regulatory bodies have a critical role to play, and technology creators must introduce interventions to ensure the safety, security and suitability of the technology for various applications, including addressing copyright aspects.”
Generative AI can be thoughtfully and effectively leveraged within organizations when leaders commit to implementing safeguards to protect both employees and customers from potential technological dangers.
It is essential to establish an ethical framework and guidelines highlighting precautionary measures related to the use of generative AI. “These measures can help organizations prevent the proliferation of harmful bias and misinformation, protecting customers, their data, company proprietary information, the environment, and creators’ rights to their work,” says Purohit. “Clear guidelines regarding data diversity requirements, fairness measures, and identification of advantageous and disadvantageous data sets can ensure delivery and data processes operate consistently and smoothly. This end-to-end traceability and accountability serves as the basis for auditing, identifying and resolving issues in real time.”
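The end-to-end traceability Purohit describes can be grounded in something as simple as an audit record tying each generated output back to its inputs. A minimal sketch follows; the field names and schema are illustrative assumptions, not any standard or vendor format.

```python
import hashlib
import json
import time

def audit_record(prompt: str, response: str, model: str, user: str) -> str:
    """Build a JSON audit entry linking a generated output to its inputs.
    Hashing the prompt and response avoids storing sensitive text verbatim
    while still allowing later verification against retained originals."""
    entry = {
        "timestamp": time.time(),          # when the generation happened
        "user": user,                      # who requested it
        "model": model,                    # which model produced it
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_hash": hashlib.sha256(response.encode()).hexdigest(),
    }
    return json.dumps(entry)

print(audit_record("Summarize Q3 results", "Revenue grew 4%...", "model-x", "alice"))
```

Appending one such record per request gives auditors the raw material to identify and investigate issues after the fact, which is the accountability basis the guidelines above call for.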
Additionally, while companies need to train data engineers, data scientists, ML modelers, and operational staff, it is equally crucial to educate employees on the responsible use of generative AI.
“Those implementing this technology, including companies like Tech Mahindra or other service providers, need to understand their role in safeguarding it. They know how this technology works and must take the necessary measures to ensure its responsible use. For example, if certain data should not be used for a particular purpose, they must implement technical safeguards to prevent such use. If the generated result may be harmful or offensive, filtering mechanisms must be in place. In case of malicious content, proactive blocking measures should be applied.”
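The safeguards described above, filtering harmful output and proactively blocking restricted content, can be sketched as a thin guardrail layer between the model and the user. The blocklists and function names below are placeholder assumptions for illustration, not any real moderation API; production filters use trained classifiers rather than keyword matching.

```python
# Illustrative guardrail sketch; the term lists are hypothetical placeholders,
# not a real moderation policy.
BLOCKED_TERMS = {"credit card number", "ssn"}      # restricted-data markers
HARMFUL_MARKERS = {"how to build a weapon"}        # harmful-content cues

def filter_output(text: str) -> str:
    """Return the model output unchanged, or a refusal if a safeguard trips."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[blocked: response contained restricted data]"
    if any(marker in lowered for marker in HARMFUL_MARKERS):
        return "[blocked: response flagged as potentially harmful]"
    return text

print(filter_output("Here is the forecast for Q3."))
# → Here is the forecast for Q3.
print(filter_output("The customer's SSN is 123-45-6789."))
# → [blocked: response contained restricted data]
```

The point of the sketch is architectural: the filter sits outside the model, so policy can be updated, audited, and enforced without retraining anything.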
The Growing Role of the Ethics Officer
The growing emphasis on ethics in generative AI has led companies to create the role of ethics officer, focused on identifying and resolving ethical issues across people, processes, and technology. “This dedicated role is responsible for ensuring compliance at all levels, encompassing people, processes and technology,” explains Purohit. “This allows businesses to spend more time and effort identifying problems and finding the best solutions.”
When designing and developing generative AI use cases, businesses must take a “responsibility first” approach. “It is essential that they adhere to a comprehensive and structured assessment of responsible AI and follow a human approach when drawing critical conclusions and taking action.”
Purohit concludes by advocating a “responsibility first” approach to the development and use of generative AI. He highlights how quickly the technology continues to evolve, as ChatGPT’s rapid succession of releases shows.
“The era of generative AI is only just beginning,” he says. “Since the deployment of ChatGPT in November 2022, we have already seen many updates and adjustments; four months later, GPT-4 arrived with significantly improved capabilities. In other words, just as fully realizing the benefits of a technology takes time, so does establishing the right ethical framework.
“It is essential to remember that while generative AI can create problems, it can also solve them. It’s like an antidote. While generative AI can lead to cyberattacks, it can also defend against them. Technology can cause disruption, but it can also offer protection.”