In the 1970s science-fiction classic Colossus: The Forbin Project, the uncontrolled growth of AI leads to a dystopian world where human agency is supplanted by cold machine logic. Although we are far from these extremes, the rapid rise of modern AI raises similar concerns about control, ethics, and power dynamics. From imitating human voices to appropriating personal likenesses, AI tools are pushing the boundaries of privacy, fairness and individual rights.
The lawsuits emerging today – from the estate of George Carlin challenging an AI-generated comedy special in Main Sequence, Ltd., to a small business battling a tech giant over branding in Gemini Data v. Google – highlight the imbalance of power and the need for a legal framework to ensure the ethical development of AI. As AI increasingly influences industries as diverse as agritech, healthcare, and entertainment, the ethical dilemmas surrounding its use become ever more pressing.
1. Right of publicity: protection of personal identity
AI’s ability to reproduce voices and appearances without consent has sparked significant controversy. In Main Sequence, Ltd., the estate of George Carlin challenged the use of the late comedian’s voice and persona in an AI-generated comedy special. The estate argued that this unauthorized use violated Carlin’s right of publicity and exploited his legacy for commercial purposes.
This case highlights the ethical dilemmas posed by AI’s ability to imitate human identity. Without clear consent, such practices violate personal rights and raise concerns about the exploitation of individuals, living or dead.
2. False statements and misleading advertising
AI companies often market their tools with claims that blur ethical lines. In Lehrman et al. v. LOVO, Inc., voiceover actors alleged that LOVO falsely implied it had obtained actors’ consent to “clone any voice”. Such claims not only mislead customers but also undermine trust in AI-based services, reflecting a growing need for transparency in how AI tools are marketed and used.
3. Ethical and legal responsibility in AI model training
Many lawsuits accuse AI companies of training their models on data scraped from the internet without permission. For example, in Zhang et al. v. Google LLC, visual artists claimed that Google used their copyrighted works to train its AI image generator, Imagen. The ethical question here is whether companies should be allowed to use publicly available data to train AI models without compensating the original creators.
4. Imbalance of power and legal costs
A recurring theme in these lawsuits is the disparity between the plaintiffs – often smaller entities or individual creators – and the tech giants they face. In Gemini Data v. Google, the smaller company accused Google of exploiting its dominant position to appropriate the “Gemini” brand. This imbalance highlights the difficulty of holding large companies accountable, particularly when smaller plaintiffs lack the resources to fight protracted legal battles.
5. Calls for new legal frameworks
These cases reveal significant gaps in existing laws. Current intellectual property and privacy laws are often inadequate to address the complexities of AI technologies. Plaintiffs in cases like Basbanes v. Microsoft Corp. implicitly call for regulatory reform, urging lawmakers to create clearer frameworks for AI accountability. These frameworks must balance innovation with ethical practices and ensure fair compensation for creators.

The rapid development of AI raises ethical and systemic questions that go beyond intellectual property. From protecting individual rights to addressing power imbalances, these lawsuits highlight the need for transparency, fairness and accountability. As courts grapple with these challenges, the outcomes will shape not only the future of AI, but the digital economy as a whole.
The content of this article is intended to provide a general guide on the subject. Specialist advice should be sought regarding your specific situation.