When a generative AI tool spreads misinformation, violates copyright law or perpetuates hateful stereotypes, it is the people using the technology who bear the blame.
After all, a large language model (LLM) generating text or an image “doesn’t use its own brain” to understand the implications of what it generates, said Paul Pallath, vice president of applied AI at Searce, a cloud consulting company founded in 2004 that provides AI services such as assessing AI “maturity” or readiness and identifying use cases.
“We’re a long way from machines doing everything for us,” said Pallath, who held senior data science and analytics roles at SAP, Intuit, Vodafone and Levi Strauss & Company before joining Searce last year. (He also has a PhD in machine learning.)
Humans cannot entrust their ethical problems to algorithms and programs. Instead, we need to “ground ourselves in empathy,” Pallath said, and develop responsible machine learning practices and generative AI applications.
Searce, for example, works with its clients to go beyond the abstract. It guides companies in implementing generative AI and helps them establish frameworks for the ethical and responsible use of AI.
Pallath spoke with AdExchanger about some hypothetical – but very possible – ethical scenarios a marketer might face.
If a generative AI tool produces information that is factually inaccurate or misleading, what should a marketer do?
PAUL PALLATH: Understand, verify and fill in the gaps in everything that comes out. There will be a lot of content created by LLMs that appears to be true but is not. Don’t assume anything. Fact checking is very important.
What should a marketer do if they’re unsure whether an LLM was trained on copyrighted material?
Avoid using it unless you have the rights and explicit permission from the author who owns the copyright. Otherwise, you’re creating significant exposure for your business.
The LLM should also spit out the references from which the content was generated. You need to check each reference. Go back and read the original content. I’ve seen LLMs create a reference, and the reference doesn’t exist. It just concocted information.
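That spot check can be partly mechanized. Below is a minimal sketch, assuming the LLM returns its references as URLs; `reference_resolves` is a hypothetical helper that only confirms a link responds, so a human still has to read the source and verify it supports the claim.

```python
# A minimal sketch of a citation spot check, assuming references are URLs.
# It only confirms each link resolves; it cannot judge whether the source
# actually supports the generated claim.
import requests

def reference_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL responds without an error status."""
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        return False

# Placeholder URLs for illustration only.
citations = [
    "https://example.com/real-source",
    "https://example.com/possibly-invented",
]

for url in citations:
    status = "resolves" if reference_resolves(url) else "may be fabricated or dead"
    print(f"{url}: {status}")
```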
Say a marketer is looking for advertising images and an LLM keeps returning people with lighter skin. How can they avoid reinforcing and amplifying harmful biases?
It’s about how you design your prompts. You need to have governance in terms of prompt engineering – a review of the different types of prompts you should use, generally – so that the content published isn’t biased.
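One way to picture that governance layer is a simple pre-publication check on draft prompts. The sketch below is hypothetical: the `REQUIRED_GUIDANCE` terms and the `audit_prompt` helper are illustrative stand-ins for whatever review rules a company actually adopts, and they complement rather than replace human review.

```python
# A minimal sketch of a prompt-governance check with hypothetical rules.
# It flags draft prompts that omit basic diversity or review guardrails
# before they go to a human reviewer for approval.
REQUIRED_GUIDANCE = [
    "diverse",   # e.g., "show a diverse range of skin tones and ages"
    "review",    # e.g., "route output for human review before publishing"
]

def audit_prompt(prompt: str) -> list[str]:
    """Return the guardrail keywords missing from a draft prompt."""
    lowered = prompt.lower()
    return [term for term in REQUIRED_GUIDANCE if term not in lowered]

draft = "Generate a lifestyle photo of a person wearing our new jacket."
missing = audit_prompt(draft)
if missing:
    print(f"Prompt needs revision; missing guardrails: {missing}")
else:
    print("Prompt passes the basic governance checklist.")
```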
If you have a repository of approved images, the LLM can place them in a different environment, change the colors, clothing or lighting, or turn them into high-resolution digital images.
For retail businesses, if they are allowed to use a person’s image, they can place different clothing items on top of (existing images) so those images become part of their marketing messages. They can have brand-approved ambassadors who don’t need to come in for hours of photo and video shoots.
Should companies pay these brand-approved ambassadors for variations of their AI-generated images?
Yes. You would compensate them for each digital artifact you create using different models. Companies will start working out different compensation mechanisms.
LLMs are trained on what is online, so they often favor “standard” forms of dominant languages, such as English. How can marketers mitigate language bias?
LLMs are maturing from a translation perspective, but there are variations even within the same language. What region the content comes from, who verified it, whether it’s culturally accurate, whether it matches the belief system of that country – that’s not knowledge LLMs have.
You need a human in the loop who performs a rigorous review of the generated content before it’s published. Have cultural ambassadors within your company who understand the nuances of a culture and how the content will resonate.
Is generative AI morally questionable from a sustainability perspective, given the energy consumption required to run LLMs?
A significant amount of computing power goes into training these models.
Large companies’ commitments to becoming carbon neutral in the next five to ten years should be fundamental parameters in choosing vendors, so those vendors don’t add to their carbon emissions. They must consider the energy their data centers consume when making these choices.
How can we prevent exploitation – like the use of prisoners or very poorly paid workers to train LLMs – and other bad behavior on the part of LLM creators?
You need to have data governance and data lineage – in terms of who created the data, who touched the data, even before the data actually gets into the algorithms – and (a log of) decisions that were made (along the way). Data lineage gives you transparency and allows you to audit algorithms.
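As a rough illustration of what such a lineage log could look like, here is a minimal sketch with hypothetical field names; it simply records who created and touched the data and the decision made at each step, so those records can be audited later.

```python
# A minimal sketch of a data-lineage record with hypothetical field names:
# track who created or touched the data and log the decisions made along
# the way, before the data ever reaches a training algorithm.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    actor: str        # who created or touched the data
    action: str       # e.g., "collected", "labeled", "filtered"
    decision: str     # the decision made at this step, kept for audits
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class DatasetLineage:
    dataset_id: str
    events: list[LineageEvent] = field(default_factory=list)

    def log(self, actor: str, action: str, decision: str) -> None:
        self.events.append(LineageEvent(actor, action, decision))

lineage = DatasetLineage("marketing-images-2024")
lineage.log("vendor-a", "collected", "sourced only from licensed stock libraries")
lineage.log("labeling-team", "labeled", "workers paid at or above local living wage")
for event in lineage.events:
    print(event.timestamp.isoformat(), event.actor, event.action, "-", event.decision)
```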
Today, this auditability does not exist.
Transparency is necessary to eliminate unethical elements. But we depend on the big companies that created these models to provide transparency measures.
This interview has been edited and condensed.