Kathy Gibson reports from Gitex Africa Morocco 2024 – When it comes to using artificial intelligence (AI), ethics is a key issue and organizations are advised to start formulating clear policies around trustworthy and responsible AI.
Rohan Patel, senior vice president: engineering at Builder.ai in the UK, says it’s important to realize that there are two completely different types of AI systems.
The first concerns systems used exclusively internally, where companies are expected to manage ethics themselves. The second concerns systems that use personal data and could have an impact outside the organization; it is these that upcoming regulation will take into account.
A common fear is that AI systems will soon start writing other systems and completely disintermediate humans from the process.
But Patel doesn’t see this as an ethical issue at all. “We already have models creating models and models creating data sets. If it works for the business, why not?”
This assumes that these models are intended for internal AI use and are not systems that could have an impact outside the organization.
Another way to keep models honest is to build “humble” AI systems, says Sebnem Erener, managing general counsel at Klarna in Sweden.
“This involves programming humility into AI systems in the sense that, mathematically, they would never assume their predictions or statements are definitive. They would thus continually update their underlying preferences and values by taking into account constant feedback from human behavior.”
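As a rough illustration of that idea, the sketch below keeps a Bayesian Beta-Bernoulli belief over a single binary human preference: it always reports an uncertainty alongside its estimate and revises the belief with every new piece of human feedback. This is a minimal, assumption-laden sketch of the concept, not Klarna’s framework; the HumbleEstimator class and its methods are hypothetical names.

```python
import math

class HumbleEstimator:
    """Toy 'humble' model: it holds a Beta(alpha, beta) belief over a
    binary human preference, never reports certainty, and revises the
    belief with every new piece of human feedback."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Uniform prior: no opinion before any human feedback arrives.
        self.alpha = alpha
        self.beta = beta

    def predict(self) -> tuple[float, float]:
        """Return (estimate, uncertainty): the belief's mean and standard
        deviation, so a prediction is never presented as definitive."""
        a, b = self.alpha, self.beta
        mean = a / (a + b)
        var = (a * b) / ((a + b) ** 2 * (a + b + 1))
        return mean, math.sqrt(var)

    def observe(self, human_approved: bool) -> None:
        """Bayesian update: each human reaction shifts the underlying
        preference estimate, as Erener describes."""
        if human_approved:
            self.alpha += 1.0
        else:
            self.beta += 1.0

# The estimate drifts with feedback but always carries an error bar.
model = HumbleEstimator()
for approved in (True, True, False, True):
    model.observe(approved)
estimate, uncertainty = model.predict()
print(f"preference estimate: {estimate:.2f} +/- {uncertainty:.2f}")
```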
Although there are no global initiatives yet to harmonize AI ethics, the EU is on the right track to put in place regulations for trustworthy AI, Erener adds. The new law, which addresses AI as well as privacy, will be enacted in June and will come into force in two years.
What is important is that views on these issues are constantly evolving and regulations must be flexible enough to accommodate these changes.
“What we protect and what we believe should be protected is changing,” Erener emphasizes. “For example, we used to be very protective of privacy and intellectual property (IP). But we realized that we shouldn’t be afraid to question this position.
“If the goal is to make systems more accurate, you can’t treat privacy and intellectual property as absolute rights, and existing legal structures will need to be updated.”
That said, organizations need to be sensitive to these AI-related issues and should start formulating policies now.
“For example, at Klarna we have started experimenting with an ethical AI framework so that we can learn from consumer needs. Then, when the regulations come into effect, we won’t be reacting to them but taking part in the conversation that defines them.”
Is it possible to program ethics into the system? Perhaps, but Dr Juergen Rahmel, a lecturer at the University of Hong Kong, says businesses shouldn’t try to tackle the biggest problems first.
“Leave the ethical questions to augmented humans,” he says. “Give the solvable problems to machines and keep the difficult and ethical problems for humans.”
“Then, as you progress, you can start to bring more of the value chain into the electronic models.”
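To make Rahmel’s division of labour concrete, here is a minimal routing sketch: work goes to the machine only when it is not flagged as ethically sensitive and the automated system is confident, while everything else stays with a human. The Task fields, the confidence threshold, and the route function are illustrative assumptions, not a design Rahmel described.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    ethically_sensitive: bool  # flagged by human-written policy, not by the model
    model_confidence: float    # 0..1, reported by the automated system

def route(task: Task, confidence_floor: float = 0.9) -> str:
    """Give the solvable problems to machines; keep the difficult
    and ethical problems for (augmented) humans."""
    if task.ethically_sensitive or task.model_confidence < confidence_floor:
        return "human"
    return "machine"

# Routine, high-confidence work is automated; ethical calls are not.
print(route(Task("classify an invoice", False, 0.97)))      # -> machine
print(route(Task("approve a loan exception", True, 0.99)))  # -> human
```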