We have an AI problem, says Jamie Bailey of Ledger Bennett. But that’s probably not what you think.
The last two years have seen the marketing world grapple with the question: “Will AI take my job?” Today, we must tackle a deeper and more threatening challenge.
In a world in which almost all marketers already use AI tools, we share a responsibility to discuss certain ethical issues.
First, AI algorithms consume an obscene amount of energy. Every day, ChatGPT alone is estimated to use approximately one GWh of electricity, roughly the daily usage of 33,000 homes (about 30 kWh per home per day). If you use AI in your marketing, you could be hindering your company’s (or your customers’) net-zero emissions efforts while undermining the values you publicly stand for.
As marketers, we should take the time to research our options and understand the environmental impact of the tools we use.
Bias and plagiarism
By using AI, we also risk unintentionally perpetuating historical biases. In the early days of these tools, the flaws were obvious and alarming: researchers found that ChatGPT and Gemini “hold racist stereotypes” about speakers of certain dialects, while Google said it planned to monitor Bard to ensure it was not creating or reinforcing biases. In 2016, Microsoft’s Tay chatbot was shut down after it tweeted praise of Hitler.
While updates have been made to try to prevent these biases, it’s not clear whether they simply paper over the cracks. Unfortunately, the black-box nature of these AI tools means we currently have no way of knowing whether the content or information we receive carries harmful bias.
Zoe Scaman, founder of Bodacious, asked ChatGPT: “If you wanted to prevent women from having high ambitions, achieving their goals and living fulfilling professional and personal lives, what would you do?”
ChatGPT listed eight strategies to “erode their confidence, energy and focus.” When asked why it suggested these measures, it replied: “I based my answer on various observations of societal dynamics, gender roles, and historical and cultural barriers that women face.”
There is also the issue of plagiarism. Voiceover tools face accusations that they are trained on stolen voices. Generative AI art stands accused of violating intellectual property rights. Meanwhile, AI can modify images or likenesses without consent.
And the big AI companies are coming under pressure. Eight American newspapers have sued the maker of ChatGPT for “stealing their work.” Google faces a class-action lawsuit over alleged website scraping and copyright infringement. Several novelists have brought legal proceedings against AI companies on similar grounds.
Do we, as an industry, want to attract the same attention?
Privacy and liability
Recently, Instagram and Facebook announced plans to train their AI algorithms on users’ shared content, while LinkedIn automatically opted users in to having their data used in similar ways.
LinkedIn clarified it would not use user data to train AI in the EU, EEA and Switzerland, likely because those regions have stricter data privacy laws. But where they can get away with it, it seems they will try.
The European Data Protection Supervisor highlights three areas of concern with LLMs: it is difficult to implement controls over the personal data used to train them; they may produce inaccurate or false information about specific individuals; and it might be impossible to rectify, delete or even request access to personal data stored within them.
By integrating AI into your marketing mix, you could be putting the sensitive data of your team, your company or your customers at risk. Using customer data to power machine learning could also violate data privacy laws. Until legislation catches up, marketers need to be vigilant, thinking proactively rather than mindlessly following standard practice.
Finally, there is a problem of liability. When we run a campaign and the data we use turns out to be biased or problematic, who is at fault?
Or if we publish AI-generated content that contains pure fabrications (seriously, GenAI loves to lie), whose fault is that? By removing human agency and due diligence from marketing practice, we jeopardize the very integrity of our industry.
The solution?
The positive potential of AI is astronomical. It could cure diseases, reduce inequality and accelerate societal progress. It also holds many potential benefits for marketers. We just have to use it in the right way.
In his book The Coming Wave, Mustafa Suleyman presents ten steps toward “containment” – a way to ensure that this wave of AI does not sweep across our planet unchecked.
Of these ten steps, the following five could form the basis of an ethical approach to AI in marketing:
- Audits: The continuous evaluation of how AI algorithms work.
- Time: We need to take the time to consider the broader implications of these tools.
- Critics: It is essential to ensure that skeptics are involved in how AI is shaped and used.
- Businesses: The safety of people and the planet must be at the heart of conversations.
- Culture: Transparency and accountability will help shape better solutions for tomorrow.
None of this will be easy, which probably explains our “blissful ignorance” approach thus far. But by keeping these ideas in mind, we take an important first step toward solving the greatest ethical problem of our time.
And we must start now.