Today, the term AI generally refers to a complex algorithm fine-tuned via machine learning, trained on a massive database of publicly available data, and wrapped in a large language model. There is no real intelligence implied; the models don’t actively train from scratch, and everything is based on datasets from real humans.
That's all well and good (apart from the misleading terminology). Despite my annoyed complaints, what we call AI is promising and could have a future, improving accessibility for people with disabilities, efficiency for professionals, and convenience for the general consumer.
But instead of AI changing the world for the better, consumers are increasingly fed up with buzzwords, scams, and false promises. The idea that AI (especially generative AI) is theft is no longer limited to us alarmist naysayers; it has entered public discourse.
AI’s persistent legal and ethical hurdles
And the latest attempts to capture regulation by industry
Don’t just take my cynical word for it. There is an ever-increasing glut of lawsuits and blatant public criticism over companies’ theft of not only aggregated text and images from social media, but also purpose-made data such as voice actor recordings.
- Adobe insisted that its recently proposed Creative Cloud terms of service, which would have given it unlimited access to your work, were not intended to allow AI scraping – even though it was caught hypocritically training its Firefly AI on Midjourney content. At the very least, Adobe changed course after a wave of bad press.
- San Francisco-based “AI Voice Generator” Lovo faces accusations of illegally appropriating actors’ voices to develop its products. Is anyone surprised?
- Five of the first 20 winners of the High Art competition were eventually disqualified for explicitly flouting the rules by submitting AI-generated works.
- Sony has begun an opt-out process to prevent its content from being scraped by hundreds of AI companies. When artists side with Sony, of all multinationals, you know something is brewing.
- After Reddit sold its entire website to OpenAI as training material (how could that go wrong?), a new service called ReplyGuy openly advertises poisoning those training sets with product promotions.
- Microsoft’s troubled MSN news service openly defamed an Irish public figure with AI-generated content based entirely on lies.
- As if that weren’t enough, Microsoft has decided that storing plain-text snapshots of sensitive Windows data for local AI purposes is, somehow, exactly what consumers are demanding. What could go wrong?
- California legislators introduced a “kill switch” bill which, while relatively toothless, nevertheless has AI companies fretting that – how awful – regulators might stem the uncontrolled flow of potentially copyright-infringing, defamatory material.
- An internal OpenAI team meant to control advanced algorithms and guide them toward what’s good for consumers was, unsurprisingly, quietly given the ax.
- Current and former OpenAI and DeepMind employees openly warn of the dangers of giving AI companies carte blanche to scrape and analyze user data across the web.
- The world’s biggest tech giants are shamelessly skirting the edges of a barely regulated domain to corner the entire data market before lawmakers have a chance to protect consumers.
If all that isn’t enough, you’re invited to keep up with the latest AI-related lawsuits and complaints via ChatGPT Is Eating the World, an in-depth chronicle of artists’ and companies’ struggle for control of their own work.
How did we get here?
Copyright law is woefully outdated
Don’t bother arguing that scraping public databases (like social media or Stack Overflow) legally falls under fair-use provisions. One decision by the United States Court of Appeals for the 9th Circuit holds that even copying a program into RAM constitutes infringement. It is abundantly clear that current US intellectual property law is completely inadequate to handle data scraping and machine learning training.
Indeed, AI giant OpenAI argued before the British Parliament this year that forcibly limiting AI training to public-domain works would harm progress – primarily because companies have already begun misappropriating other people’s creations en masse. Its lawyers then explained how requiring private negotiations for access to intellectual property would crush the company’s business model, which says plenty about what the legal team actually thinks. Meanwhile, AI’s widespread adoption and massive training datasets make it harder and harder for the courts to navigate.
The already bleak future of AI
Recursive training, poisoned datasets, and model collapse
Publicly available datasets used for AI training have effectively been compromised since the massive surge in popularity of ChatGPT 3.5. It’s no secret that training AI on AI-generated data leads to regression, and yet the industry is moving full steam ahead, with no slowdown in sight.
How many IP eggs will researchers break to prepare a smart, predictive omelet?
The paper in question acknowledges various ways to mitigate this regression, so a complete model collapse is unlikely. After all, investors are pouring nearly unlimited funds into ensuring companies can still turn a profit and control the flow of information by lifting comments, visual art, code, and other content from their creators. But it’s unlikely that model performance will keep improving at the same rate.
And, as AI enthusiasts rightly point out, there is currently no legal framework that deems these actions explicitly illegal – hence wave after wave of lawsuits, investigations, corporate controversies, and objections from outraged artists. If you support human creativity, or the humanities in general, you should hope such a framework shows up soon.
Complex code trained on stolen data is more than just a paintbrush
I’ve said it before: AI models don’t resemble the human mind. Researchers have yet to identify the part of the brain responsible for consciousness and creativity, but what we do know is that wherever it resides, it isn’t proprietary OpenAI code.
In human hands, a paintbrush demands a lifetime of skill and talent to create consistently satisfying art. Poetry, prose, and song lyrics demand a human element, and stealing that humanity from uncredited sources goes against the very concept of creativity. And no matter how interesting and convenient your new software tool is, if it’s a tool for stealing other people’s creative ideas, it has no place in the art world.
Will there be a resolution?
It’s difficult to envision a satisfactory conclusion to the ongoing controversies over data scraping and art theft, especially when human creators tend to suffer loss after loss under the weight of big-budget corporate legal departments and tech-ignorant lawmakers.
The art and technology industries have never been more deeply embroiled in this battle, with user-friendly AI software more accessible than ever. Like many of you, we’re eagerly watching to see which developments hold up and provide protections for real, tangible human contributions.
Only time will tell, and when it does, we will be there to analyze the consequences.