AI and blockchain go hand in hand, and this match made in heaven becomes more evident as we explore the good and bad opportunities tied to the burgeoning AI-based technology. To explore the subject further, which our Founder is also passionate about, I moderated a panel on ethical AI and blockchain during the AI and Blockchain Virtual Expo 2024 on October 31. Here’s a summary of what happened in case you missed it.
Panelists covered a broad spectrum of experience, from technical research to industry associations, business, and healthcare. Their different perspectives shed fascinating light on the hot topic of AI ethics and how we can prevent and resolve the main concerns surrounding it. The group included Wei Zhang, nChain Research Director; Kristopher Klaich, policy director of the Chamber of Digital Commerce; Kerstin Eimers, Global Web3 at Deutsche Telekom; and Daniel Pietrzykoski, Technical Strategy and Product Innovation, Blockchain and Program Management at Johnson & Johnson.
The discussion began with what AI ethics means to each panelist and the concerns surrounding AI, including real-world examples.
Klaich and Eimers emphasized that we are still in the experimental stage of these future technologies, which raises issues of scams and other challenges. Klaich gave the example of AI agents operating in the blockchain world, creating their own wallets and acting as influencers to pump a token, or even offering small amounts of money to others to promote it. This behavior can manipulate the market, making it difficult to tell what is a real person and what is not.
“The opportunities are sort of terrifying in that sense and endless,” Klaich said.
Eimers highlighted apps that create “ideal” AI-generated images and portraits, which can lead to mental health issues, particularly among young women.
“I think there are a lot of areas to watch, which are great on the one hand but also bring struggles and ethical challenges at the same time,” Eimers said.
Zhang said the biggest challenge for him was regulation. He used the analogy of a criminal fleeing a scene in a fast car: the solution is not to ban cars but to ensure that the police also have fast cars.
“This leads to a kind of widespread conflict between technological progress and regulation, and then to misuse of technology,” he said.
“AI is being misused and we need regulation to catch up and create the corresponding technology to deal with the new technology. So if we blindly ban technology, we will only have police officers without cars,” Zhang pointed out.
Fortunately, Pietrzykoski, who was dealing with technical difficulties throughout the panel, was able to weigh in on this point.
“There’s a lot of uncharted territory right now, and I think a lot of people are trying to figure it out. There is a lot of apprehension and a lot of fear. Either something clarifies what we need to do, or something bad is going to happen and we’ll have to find a way to deal with it,” he said.
So how can we deal with it before something bad happens? This is where blockchain and education come into play.
Klaich spoke about the importance of data privacy, ownership, and provenance, and explained how information can be hashed to the blockchain to trace where the data for AI models comes from.
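To make the provenance idea concrete, here is a minimal sketch (using nothing beyond Python’s standard library) of hashing a dataset and building a record that could be anchored to a blockchain. The field names and the anchoring step are illustrative assumptions, not a specific product’s API.

```python
# Minimal sketch of the data-provenance idea: hash a training dataset and
# build a record that could be anchored to a blockchain. The record fields
# and the anchoring step are illustrative assumptions only.
import hashlib
import json
import time

def hash_dataset(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a dataset file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def provenance_record(path: str, owner: str) -> str:
    """Build a JSON provenance record; anchoring it on-chain is left abstract."""
    record = {
        "dataset_sha256": hash_dataset(path),
        "owner": owner,
        "timestamp": int(time.time()),
    }
    return json.dumps(record, sort_keys=True)

# Example (hypothetical file and owner): the resulting record, or its hash,
# would be committed in a blockchain transaction so the dataset's origin can
# later be verified.
# print(provenance_record("training_data.csv", "data-owner-123"))
```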
Zhang, who is engaged in research around verifiable AI on blockchain, said the key is to balance data use and data privacy, which is the goal of regulation, whether government-led or industry-led.
“What I’m trying to address is another angle, which is whether we can come up with something that fully resolves the conflict between the two so that there is no longer a conflict between privacy and data use. Of course, it’s too good to be true, but the seed is there, and it involves using blockchain and cryptography,” he said.
The idea is to make the input to the AI model, the execution of the model, and its output verifiable without compromising user privacy. The trade secrets of the AI model are not disclosed, but we can be confident that the model behaves as intended.
Use cases in this scenario include certifying that an AI model was trained on a particular dataset, proving that it is unbiased. Or a data owner could earn royalties from an AI model when their data is used, which also requires issuing a certificate.
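As a rough illustration of what such a certificate could look like, the sketch below binds a model to a training dataset with plain hash commitments. Real verifiable-AI schemes would add cryptographic proofs of the training run itself, so treat the function names and fields here as hypothetical simplifications.

```python
# Simplified sketch of the certification idea: commit to a model and the
# dataset it was trained on, so a third party can later check the pairing.
# This only shows the hash-commitment structure; proving the training run
# itself requires heavier cryptography not shown here.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def training_certificate(model_bytes: bytes, dataset_bytes: bytes) -> dict:
    """Bind a model to its training dataset via hash commitments."""
    return {
        "model_sha256": sha256_hex(model_bytes),
        "dataset_sha256": sha256_hex(dataset_bytes),
    }

def verify_certificate(cert: dict, model_bytes: bytes, dataset_bytes: bytes) -> bool:
    """Check that the published model and dataset match the certificate."""
    return (
        cert["model_sha256"] == sha256_hex(model_bytes)
        and cert["dataset_sha256"] == sha256_hex(dataset_bytes)
    )

# The certificate (or its hash) would be recorded on-chain; a royalty scheme
# could then reference the dataset commitment whenever the model is used.
```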
The ability to make blockchain-based micropayments also opens doors for users willing to share their personal data.
“With AI and applications that run AI or are based on AI, we will have the choice to interact with and potentially be compensated or receive micropayments for disclosing certain personal information to these models,” Klaich explained.
The role of education in preventing AI abuse is an area that Eimers is particularly passionate about, and she was excited to expand on this topic.
“Education really plays an important role and goes way beyond just providing materials or quick checklists for clients,” she said.
Eimers shared details of Deutsche Telekom’s awareness campaigns, including “A Message from Ella,” a deepfake of a child created to raise parents’ awareness about the privacy of their children’s data.
“We invest a lot of time and resources into getting these messages out to the mass market and using our reach in the best possible way. I think that’s really, really important, and it’s part of the responsibility and accountability a company has,” she said.
The question of who is responsible if AI malfunctions was also discussed, a hotly debated topic given that we are still in uncharted territory and nobody knows the answer yet. However, Zhang is convinced that we cannot hold the technology itself responsible and said that stopping technology is never a solution; remember, the police also need fast cars.
“Overall, I would say in terms of accountability, the first technical approach is to have identities,” he said.
“In Web3 or any future generation of the Internet, there is only one identity protocol… in this system, all of your interactions can be held accountable and auditable. By using blockchain to manage this, you will be able to track and verify the entire history. Through cryptography, we can preserve user privacy and only reveal it if a crime is committed,” he said.
“We hope to be able to have a system or at least a tool to help identify bad behavior of AI models and, therefore, their corresponding developers or companies to hold them accountable,” Zhang added.
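To illustrate the commit-now, reveal-later flavor of the identity idea Zhang outlines, here is a toy sketch using a salted hash commitment. A production identity protocol would rely on proper digital signatures and on-chain records, so everything named here is a hypothetical simplification.

```python
# Toy sketch of commit-now, reveal-later identity: publish only a salted hash
# of an identity, associate actions with that commitment, and disclose the
# underlying identity only if misbehavior must be investigated.
import hashlib
import os

def commit_identity(identity: str) -> tuple[str, bytes]:
    """Return (public commitment, secret salt) for an identity string."""
    salt = os.urandom(16)
    commitment = hashlib.sha256(salt + identity.encode()).hexdigest()
    return commitment, salt

def reveal_matches(commitment: str, identity: str, salt: bytes) -> bool:
    """Check a revealed identity and salt against the public commitment."""
    return hashlib.sha256(salt + identity.encode()).hexdigest() == commitment

# Normally only `commitment` is published (for example, on-chain alongside an
# AI agent's actions); `identity` and `salt` stay private unless
# accountability requires revealing them.
```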
For artificial intelligence (AI) to operate within the law and thrive in the face of increasing challenges, it must integrate an enterprise blockchain system that ensures the quality and ownership of data capture, allowing it to keep data secure while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging technology to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Blockchain and AI unlock possibilities