What do you get when you mix AI, marketing and ethics?
Confusion. Frustration. Potentially questionable decisions. And a lot of thrown-up hands (and not because “we just don’t care”).
We asked Arizona attorney Ruth Carter, Evil Genius of Geek Law Firm, for some tips to help you navigate the new AI reality during their recent MarketingProfs presentation, “Ethical AI in Marketing: A Marketer’s Guide to Privacy, Policy, and Regulatory Compliance.”
Please keep in mind that this is general legal information, not legal advice. Speak to your legal team or hire a legal professional if you need specific advice. If you are not paying Ruth, then Ruth is not your lawyer.
Read the fine print
“Understand what (AI vendors) might do with what you…put into the ‘AI machine’ or what the ‘AI machine’ creates for you,” Ruth emphasizes.
Yes, this means the entire terms and conditions of each AI platform.
In particular, carefully review each platform’s policies on retaining your data and training on it. Don’t just blindly click “accept” (as we all do with social network and iOS updates).
If those terms don’t sit well with you, ask whether that changes your AI use case or whether you should choose a different platform. And use them to inform your internal AI practices, including “what absolutely, under no circumstances, can be integrated with AI,” Ruth emphasizes.
Protect the confidentiality of internal and customer data
Unless you are using a closed system or know that the plan you purchased keeps your data sequestered and private, assume that the AI will use whatever you upload as public training data.
In most cases, that’s a dealbreaker, and it certainly is when personally identifiable information (PII) is involved. GDPR, CCPA, and other privacy regulations apply to AI, too.
You should also consider confidentiality and non-disclosure agreements. They can limit how you use information such as financial data and other confidential details, especially when they relate to private companies. Don’t risk legal action or ruined relationships by breaking your promises.
Ruth suggests avoiding real customer information and instead using pseudonyms and anonymized data.
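If you want to put that advice into practice programmatically, here is a minimal sketch in Python. The field names and record shape are hypothetical (adapt them to your own data model); the idea is to swap PII for opaque tokens before anything reaches an AI tool, while the token-to-value mapping stays on your side so you can re-identify the output later.

```python
import uuid

# Minimal pseudonymization sketch. Field names are hypothetical;
# real projects should also catch indirect identifiers (rare job
# titles, small-town locations) and store the mapping securely.
PII_FIELDS = {"name", "email", "phone"}

def pseudonymize(record: dict) -> tuple[dict, dict]:
    """Swap PII values for opaque tokens; return the cleaned record
    plus a token->value mapping that never leaves your systems."""
    cleaned, mapping = {}, {}
    for key, value in record.items():
        if key in PII_FIELDS:
            token = f"<{key}_{uuid.uuid4().hex[:8]}>"
            mapping[token] = value
            cleaned[key] = token
        else:
            cleaned[key] = value
    return cleaned, mapping

customer = {"name": "Jane Doe", "email": "jane@example.com",
            "plan": "enterprise", "churn_risk": "high"}
safe_record, lookup = pseudonymize(customer)
print(safe_record)  # safe to share with the AI tool
# Use `lookup` internally to map tokens back to real customers.
```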
Understand AI security measures and manage your risk
Hacking. AI is not immune, and a breach can happen in two ways:
- First, “Could an AI be hacked to sabotage the outcome of what it creates, what it produces for the people who use it? Yes.”
- Second, “Could an AI be hacked and the data stolen?” Whatever data has been fed into the AI, the AI company now has “copies of it stored somewhere. Could all of this be obtained by a hacker? Yes.”
So look at the service’s security measures. Ask yourself: Are they sufficient? What is the worst-case scenario if the AI were hacked? What would happen to you, what would the AI company do about it, and what would you do about it?
Make your AI policies clear to employees, contractors, and vendors, and ensure everyone follows the rules
“Have an ethics statement regarding AI… (including) how the company will and will not use AI, and why they made that decision,” and publish it in your company handbook or as a stand-alone document, Ruth says.
Check out this excerpt from the presentation to learn who and what to consider when creating your policy.
But don’t overdo it with restrictions so tight that they keep your team from growing and staying competitive.
Your policy “serves multiple purposes,” but above all “it allows your team to embrace the ‘this is how we work’ philosophy,” Ruth emphasizes. So think of it as a way to ensure everyone is working legally, ethically, and efficiently.
Verify all AI-generated content, every time
It’s no secret that AI can “hallucinate,” i.e., produce false results. So always double-check that AI-generated content is accurate. Demand the same from your team.
Ruth says: “I would expect a corporate AI policy to require you to fact-check all the data before using (AI output) in a client’s content, so as not to spread misinformation.”
Spreading misinformation is not only unethical, it’s embarrassing. Don’t become the next viral cautionary tale.
Be transparent with your customers
You also need to discuss your AI policy (not just what and how, but why) with your customers.
Ruth insists: “You want to be transparent with your customers about how you’re using AI and what benefits it brings to them”:
- They need to know how AI improves your work and your products, and that they still benefit from your human expertise.
- And if you integrate AI into your products, whether built in-house or through a third-party LLM integration, they need to know that it genuinely enhances what you sell.
Publishing a corresponding ethics statement on your website and sharing it with your customers can help ensure transparency and build customer trust.
Add legal protection to your customer contracts
Now is the time to update those contracts, confidentiality agreements, and force majeure clauses. We’re still in the Wild West of AI technology, so protect yourself from unforeseen problems.
First, ask your clients to agree to provide only legally obtained data and content for AI use, and spell out that you will follow their instructions: “The contract should state that you only use their content in accordance with their instructions, so you can’t use it for any other purpose, and you rely on them to provide the instructions,” says Ruth.
Second, include an indemnity clause to protect yourself if something goes wrong. Ruth suggests stating that “in the event that you are accused of doing something wrong because you followed your client’s instructions, they will indemnify you and reimburse you for your legal fees and any damages awarded against you.”
Third, add a disclaimer. “Marketers are not psychics. You can’t make any guarantees about the results of the (AI-generated) content you create for them,” Ruth says.
Just say no to unethical behavior
Finally, don’t be afraid to say goodbye to a client if you think they’re using AI to engage in unethical or illegal activities, or if they’re asking you to do so on their behalf. “You set the rules for who can work with you,” Ruth emphasizes.
That said, you don’t have to cut ties the moment you spot a problem. If you think it’s just an ill-informed mistake, “I look at it as a learning opportunity,” Ruth says.
But that doesn’t mean you should let clients (and vendors) off the hook if you foresee a long-term problem. “If they’re not willing to learn, I’ll walk away, because it’s easier to prevent problems than to fix them later,” Ruth suggests.
Want to know more about what Ruth has shared? Check out their recent AI for Demand Generation Marketers session.
Other resources in the AI for Demand Gen Marketers series
Can AI Save You From Marketing Hell?
Using AI across the customer journey requires alignment across teams
Use AI to Create Your Profiles: Don’t Lose Sight of Your Real-World Buyers
AI Can’t Write Thought Leadership (But It Can Do Other Things)
Your AI needs a human reviewer
AI can do hard things for you (like predict your future success)
Seven Steps to Deploying AI in Your Demand Generation Programs