From the now-infamous Mother's Day photo released by Kensington Palace to the AI-cloned Tom Cruise voiceover bashing the International Olympic Committee, AI-generated content has recently made headlines for all the wrong reasons. These examples have sparked widespread controversy and paranoia, leading people to question the authenticity and origins of the content they see online.
This phenomenon affects all sectors of society, not only public figures and ordinary Internet users, but also the largest companies in the world. Chase Bank, for example, reported being fooled by a deepfake during an internal experiment. Meanwhile, a report revealed that in just one year, deepfake incidents increased by 700% in the fintech sector.
Today, there is a critical lack of transparency around AI, including whether an image, video, or voice was AI-generated or not. Effective methods for auditing AI, ones that allow for greater accountability and incentivize companies to more aggressively remove misleading content, are still being developed. These gaps combine to exacerbate the trust problem in AI, and addressing them depends on greater clarity around AI models. This is a major hurdle for companies looking to harness the tremendous value of AI tools but who fear the risks may outweigh the rewards.
Can Business Leaders Trust AI?
Right now, all eyes are on AI. But while the technology has seen historic levels of innovation and investment, trust in AI and in many of the companies behind it has steadily declined. Not only is it becoming increasingly difficult to distinguish between human-generated and AI-generated content online, but business leaders are also increasingly reluctant to invest in their own AI systems. There's a common struggle to ensure the benefits outweigh the risks, compounded by the murkiness around how the technology actually works: it's often unclear what data is used to train models, how that data impacts the results generated, and what the technology does with a company's proprietary data.
This lack of visibility presents a host of legal and security risks for business leaders. Despite the fact that AI budgets are expected to increase fivefold this year, cybersecurity concerns have reportedly led enterprises to block 18.5% of all AI or ML transactions, a whopping 577% increase in just nine months. The rate is highest (37.16%) in finance and insurance, industries with particularly stringent security and legal requirements. Finance and insurance are harbingers of what could happen elsewhere as questions around AI security and legal risk mount and companies weigh the implications of using this technology.
While businesses are eager to tap into the $15.7 trillion in value that AI could unlock by 2030, it's clear that they can't fully trust AI right now, and that barrier will only grow if these issues aren't addressed. There's an urgent need to introduce greater transparency into AI: to make it easier to determine whether content is AI-generated, to see how AI systems are using data, and to better understand how results are produced. The big question is how to get there. Transparency and the loss of trust in AI are complex problems with no single, irrefutable solution, and progress will take collaboration from sectors around the world.
Meeting a complex technical challenge
Fortunately, we have already seen signs that governments and tech leaders are committed to addressing this issue. The recent EU AI Act is an important first step in establishing guidelines and regulatory requirements for the responsible deployment of AI. In the US, states like California have taken steps to introduce their own legislation.
While these laws are useful in that they outline risks specific to industry use cases, they provide standards to follow, not solutions to implement. The lack of transparency in AI systems runs deep, extending to the data used to train models and how that data informs results, and it poses a thorny technical problem.
Blockchain is one technology emerging as a potential solution. While blockchain is largely associated with cryptocurrency, at its core it is a serialized, tamper-proof data store. For AI, it can enhance transparency and trust by providing an automated, certifiable audit trail of AI data: from the data used to train models, to inputs and outputs during use, to the impact that specific datasets had on a model's output.
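To make the idea concrete, here is a minimal sketch of a hash-chained, append-only audit log for AI events, in the spirit of a blockchain-backed audit trail. Everything here is assumed for illustration: the `AuditLog` class, the record fields, and the SHA-256 chaining scheme are not any vendor's actual implementation.

```python
import hashlib
import json
import time

class AuditLog:
    """Toy append-only log: each entry commits to the previous one,
    so any after-the-fact edit breaks the hash chain."""

    def __init__(self):
        self.entries = []

    def record(self, event_type, payload):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": time.time(),
            "event": event_type,   # e.g. "train_data", "inference"
            "payload": payload,    # e.g. a dataset hash or a prompt/response pair
            "prev_hash": prev_hash,
        }
        # Hash is computed over the body before the "hash" key is added.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Recompute every hash; returns False if any entry was tampered with."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record("train_data", {"dataset": "corpus-v1", "sha256": "ab12..."})
log.record("inference", {"prompt": "Q1 forecast?", "model": "demo-llm"})
print(log.verify())  # True; altering any stored field makes this False
```

A real deployment would anchor these hashes on a shared ledger so no single party, including the model operator, can quietly rewrite history; the chaining logic, though, is the same.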
Retrieval-Augmented Generation (RAG) has also rapidly emerged and is being adopted by AI leaders to bring transparency to these systems. RAG allows AI models to search external data sources, like the internet or a company's internal documents, in real time to inform results, meaning models can ground their answers in the most relevant and up-to-date information available. RAG also lets a model cite its sources, allowing users to verify information themselves rather than trusting it blindly.
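As a rough sketch of the RAG pattern, the example below retrieves the most relevant snippets from a tiny in-memory corpus and assembles a prompt that instructs the model to answer only from those sources and cite them by name. The corpus, the keyword-overlap scoring, and the `call_llm` stub are all assumptions for the example; production systems typically use vector embeddings and a real LLM API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Real systems embed documents with a vector model; naive keyword
# overlap is used here so the example stays self-contained.

DOCS = {
    "policy.md": "Employees may expense AI tools up to $50 per month with approval.",
    "security.md": "Proprietary data must never be pasted into public chatbots.",
    "faq.md": "The finance team reviews AI vendor contracts quarterly.",
}

def retrieve(query, k=2):
    """Rank documents by how many query words they contain."""
    words = set(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query):
    """Assemble a prompt that forces source-grounded, cited answers."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (
        "Answer using ONLY the sources below, and cite them by name.\n"
        f"{context}\n\nQuestion: {query}"
    )

def call_llm(prompt):
    # Placeholder for a real model call; echoes the grounded prompt.
    return f"(model response would go here)\n---\n{prompt}"

print(call_llm(build_prompt("Can I paste proprietary data into a chatbot?")))
```

Because the retrieved source names travel with the prompt, the model's answer can point back to `security.md` rather than asserting a claim with no provenance, which is precisely the transparency benefit described above.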
And when it comes to combating deepfakes, OpenAI said in February that it would embed C2PA provenance metadata into images generated in ChatGPT and its API so that social platforms and content distributors can more easily detect them. That same month, Meta announced a new approach to identifying and labeling AI-generated content on Facebook, Instagram, and Threads.
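Full verification of such provenance metadata requires a C2PA-capable SDK, but a crude first-pass check can at least hint at whether a manifest is present. The sketch below is only a heuristic, and the byte markers it scans for are assumptions based on how C2PA manifests are typically embedded; it does not validate signatures or parse the manifest.

```python
# Crude provenance heuristic: scan an image file for byte patterns
# commonly found in embedded C2PA manifests ("c2pa" labels, JUMBF
# "jumb" boxes). Detecting a marker does NOT prove authenticity,
# and its absence does not prove the image is human-made; metadata
# is easily stripped. Real verification needs a C2PA SDK.

MARKERS = [b"c2pa", b"jumb"]

def may_have_content_credentials(path):
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in MARKERS)

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:  # e.g. python check.py image.jpg
        found = may_have_content_credentials(path)
        print(f"{path}: {'possible C2PA manifest' if found else 'no marker found'}")
```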
These emerging regulations, governance technologies, and standards are an important first step toward greater trust in AI and responsible adoption. But much more needs to be done in the public and private sectors, especially in light of viral moments that have heightened public unease about AI, upcoming elections around the world, and growing concerns about AI safety in the enterprise.
We are at a turning point in the AI adoption trajectory, where trust in the technology has the power to tip the scales. Only with greater transparency and trust will businesses embrace AI and their customers reap its benefits in AI-powered products and experiences that delight, not discomfort.
This article was written as part of TechRadarPro’s Expert Insights channel, where we showcase the best and brightest minds in today’s tech industry. The opinions expressed here are those of the author and not necessarily those of TechRadarPro or Future plc. If you’d like to contribute, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro