In a shift toward the ethical use of technology, companies around the world are stepping up efforts to develop responsible artificial intelligence (AI) systems, with the aim of ensuring fairness, transparency and accountability in AI applications.
OpenAI, Salesforce and other tech companies recently signed an open letter emphasizing a “collective responsibility” to “maximize the benefits of AI and mitigate the risks” for society. It’s the latest effort by the tech industry to call for developing AI responsibly.
The fight for responsible AI
The concept of responsible AI has been attracting more and more attention since Elon Musk’s recent lawsuit against OpenAI, which accuses the creator of ChatGPT of breaking its initial promise to operate as a nonprofit, alleging breach of contract.
OpenAI responded aggressively to the lawsuit, releasing a sequence of emails between Musk and its executives that it says reveals his early support for the startup’s transition to a for-profit model. Musk’s lawsuit accuses OpenAI of violating its founding agreement through its partnership with Microsoft, which he argues runs counter to the startup’s origins as a nonprofit AI research lab. When Musk helped launch OpenAI in 2015, his goal was to create a nonprofit that could counterbalance Google’s dominance in AI, especially after Google acquired DeepMind. His concern was that the potential dangers of AI should not be handled by profit-driven giants like Google.
The AI company said in a blog post that it remains committed to its mission to “ensure that AGI (artificial general intelligence) benefits all humanity.” That mission includes building safe and beneficial AI and helping ensure its benefits are widely distributed.
What is responsible AI?
The goals of responsible AI are ambitious but vague. Mistral AI, one of the signatories of the letter, wrote that the company strives to “democratize data and AI for all organizations and users” and speaks of “…ethical use, accelerating data-driven decision-making and opening possibilities in all sectors…”.
Some observers say there is a long way to go before the goals of responsible AI are broadly achieved.
“Unfortunately, businesses will not achieve this by adopting many of the ‘responsible AI’ frameworks available today,” Kjell Carlsson, head of AI strategy at Domino Data Lab, told PYMNTS in an interview.
“Most of them use idealistic language but little else. They are often disconnected from real-world AI projects, often imperfect, and generally lacking in actionable guidance.”
Carlsson said building responsible AI involves developing and improving AI models to ensure they operate accurately and safely and comply with relevant data and AI regulations. The process includes appointing managers with responsibility for AI and training team members in ethical AI practices, including validating models, mitigating bias, and monitoring for changes.
“This involves establishing processes to govern data, models and other artifacts and ensuring that appropriate actions are taken and approved at each stage of the AI lifecycle,” he added. “Most importantly, it involves implementing the technology capabilities that enable practitioners to leverage responsible AI tools and automate the governance, monitoring and orchestration of necessary processes at scale.”
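To make one of those practices concrete: a fairness validation step of the kind Carlsson describes can be as simple as comparing a model’s outcomes across demographic groups. The sketch below is a minimal, illustrative example, not a tool from any company cited here; the metric (demographic parity), the group labels and the 0.8 threshold are all assumptions chosen for illustration.

```python
# Minimal, illustrative bias check for a binary classifier: compare
# positive-prediction ("approval") rates across demographic groups.
# The data, group labels and 0.8 threshold below are hypothetical.
from collections import defaultdict

def demographic_parity_ratio(predictions, groups):
    """Return (min rate / max rate) across groups, plus per-group rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model outputs (1 = approved) and group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio, rates = demographic_parity_ratio(preds, groups)
print(f"Per-group approval rates: {rates}")
print(f"Parity ratio: {ratio:.2f}")
if ratio < 0.8:  # "four-fifths rule," used here only as an example threshold
    print("Warning: model fails the example fairness threshold")
```

In production, a check like this would run automatically at each retraining and deployment stage, which is the kind of governance-at-scale automation Carlsson points to.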
Although the goals of responsible AI may be a little unclear, the technology can have a tangible impact on people’s lives, Kate Kalcevich of the digital accessibility company Fable emphasized in an interview with PYMNTS.
She said that if not used responsibly and ethically, AI technologies could create barriers for people with disabilities. For example, she questioned whether it would be ethical to use a video avatar that is not disabled to represent a disabled person.
“My biggest concern would be access to essential services such as health care, education and employment,” she added. “For example, if AI-based chat or phone programs are used to make medical appointments or for job interviews, people with communication disabilities could be excluded if the AI-based chat tools are not designed with access needs in mind.”