Many marketers are eager to see what generative AI products can do for their brands.
But before everyone gets too concerned with the “could,” Brandtech Group believes it’s time to consider the “should.”
On Wednesday, the marketing technology holding company announced a series of initiatives aimed at establishing guidelines for the ethical use of generative AI.
Initiatives include a free ethics policy blueprint for businesses and the launch of “Bias Breaker” in Pencil, The Brandtech Group’s AI-powered generative ad creation platform acquired last year.
Brandtech Group also plans to offer its clients six-week AI ethics sprints through several of its consulting firms. During these development periods, companies will be guided to work on brand-specific policies.
Combating bias in artificial intelligence
Previously, AI-curious customers approached Brandtech Group with concerns about copyright infringement, the origin of training data, and data privacy, Rebecca Sykes, head of emerging technologies, told AdExchanger.
Legal issues remain a concern, but brands now seem more concerned about the court of public opinion, citing fears of getting it wrong and suffering reputational damage.
According to Sykes, weaker brands that already struggle to stand out in a “sea of sameness” are particularly vulnerable to the negative consequences of AI tools. Because these tools are inherently designed to reflect what already exists in their training data, they reproduce any human biases or prejudices present there.
“The biases are so ingrained and the models are so opaque,” Sykes said, that if your brand doesn’t have a strong stance around DEI and ethics to begin with, you may not notice the impact AI-generated material can have on your business as it evolves.
Brandtech Group also recommends that its clients adopt a formal position on transparency and disclosure, and consider what their “hard no’s” will be regarding the use of certain techniques or models.
“Brands will have to take a different position and perspective,” Sykes said of the policy-making process. “But it has brought them to some clarity, to a much deeper level of thinking about what they will do, what they won’t do, but most importantly, how they will get there.”
Roll the dice of representation
When developing the “Bias Breaker” tool, The Brandtech Group worked with Pencil to generate thousands of AI images from text prompts and identify trends in what the images depicted.
“I think images, and particularly images of people, were the starting point because they created the most friction and tension within the companies we talked to,” Sykes said. “They were very divided on whether or not they should develop synthetic people.”
They found that simple occupational prompts tended to match existing stereotypes about who does those jobs: typing “CEO,” for example, typically conjures images of middle-aged white men, while “nurse” produces an attractive young white woman in uniform.
To correct this trend, Pencil’s new Bias Breaker tool uses random probability to inject more inclusive descriptive language into these prompts, creating a broader range of representative figures across age, gender, race, body type, and even religion.
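The article doesn’t disclose Pencil’s implementation, but the mechanism described, randomly sampling inclusive descriptors and injecting them into the prompt, can be sketched in a few lines. The descriptor pools and the `break_bias` function below are illustrative assumptions, not Bias Breaker’s actual code:

```python
import random

# Hypothetical descriptor pools -- illustrative only, not Pencil's actual lists.
AGES = ["young", "middle-aged", "elderly"]
GENDERS = ["male", "female", "non-binary"]
ETHNICITIES = ["Black", "East Asian", "South Asian", "Hispanic", "white"]
BODY_TYPES = ["slim", "average-build", "plus-size"]

def break_bias(prompt, rng=None):
    """Prepend one randomly sampled descriptor from each pool to an image prompt."""
    rng = rng or random.Random()
    descriptors = [rng.choice(pool) for pool in (AGES, GENDERS, ETHNICITIES, BODY_TYPES)]
    return f"a {' '.join(descriptors)} {prompt}"

# The same base prompt yields varied demographics across calls.
print(break_bias("CEO in a modern office"))
print(break_bias("nurse in uniform"))
```

Because the sampling is random rather than deterministic, repeated generations from the same occupational prompt spread representation across the pools instead of defaulting to the training data’s dominant stereotype.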
That said, Bias Breaker isn’t meant to be a complete solution yet, Sykes admitted. For one thing, it doesn’t yet account for the fact that AI’s representations of marginalized identities can often slide into cultural stereotypes.
Brandtech Group policy requires that content targeted to a particular group include members of that group in the creation process. For example, content related to Pride Month celebrations in June must include contributions from human members of the LGBTQ+ community. And most importantly, companies should not substitute AI-generated content for the real photoshoots they previously paid community members to appear in.
“If you care enough about supporting Pride Month to create content about it, make that content about real people and about real community,” Sykes said.
Pencil, The Brandtech Group’s existing AI product, also protects against copyright infringement. The platform’s image-generation software doesn’t allow proper names of any kind, making it impossible to generate content “in the style” of a particular artist, and it offers models trained entirely on licensed, royalty-free datasets, such as Adobe Firefly and Getty Images’ generative tools, for its more copyright-conscious clients.
A more ethical future
Generative AI remains an ethical minefield in many other ways. And Brandtech Group has no easy answers to complex issues, such as AI’s reliance on exploitation of labor or its negative environmental impact.
“I wouldn’t want anyone to think we’ve solved the problem and we’re going to walk away,” Sykes said. “This is the first step, and then we want to continue to build and move forward.”
Meanwhile, the Brandtech Group’s internal ethics policy follows an important guiding principle: a computer can never be responsible for making decisions on its own.
“We made a conscious decision not to automate anything 100 percent,” Sykes said. “We’re going to push automation to the point where it makes your life significantly easier, but not to the point where we have autonomous decision-making.”