Whether AI is helpful or harmful will be the biggest challenge facing businesses in 2025. Much of the news surrounding artificial intelligence focuses on technological questions, such as how to build faster computers, to make AI programs more efficient and to make new AI tools work. with older technology.
Unless you use the technology for good purposes and guard against abuse, the level of technical sophistication of AI doesn’t matter. Harm will overshadow progress.
Let’s look at three ethical challenges that AI raises for businesses around the world. We’ll also look at how some businesses are tackling these challenges and what it all means for you and your own organization.
1. Saving lives, preventing disasters: why AI safety must come first
The most fundamental ethical principle of all is “Don’t hurt.” This applies not only to doctors and other healthcare workers, but also to leaders in the AI field and everyone.
AI security is not optional. It is an urgent necessity. Speaking at the AI Safety Summit in November 2023, Dario Amodei, CEO of Anthropicstressed the importance of continuous risk assessment and response. “We need both a way to frequently monitor these emerging risks and a protocol to respond appropriately when they occur,” he said.
Although the summit took place in 2023, Amodei’s ideas remain critical for 2025. The challenges he outlined – establishing rigorous assessments and proactive response mechanisms – are fundamental principles for managing climate-related risks. AI that continues to grow.
Examples of AI Damage
Some of the serious harms that AI can cause include:
- Prejudice and discriminationin which AI models perpetuate unfair practices in hiring, lending, and law enforcement, to name a few areas of concern
- Privacy violations, since AI-based surveillance systems can compromise individual safety
- Autonomous weapons, whereby artificial intelligence makes life and death decisions without human oversight
- Manipulation and disinformation through deepfakes and other AI-generated lies that can erode trust and incite political or social unrest
How companies manage this ethical challenge
The company Anthropic, cited above, uses the process called “red teaming” to test its AI systems. This means simulating adversarial attacks to identify weaknesses such as biased results or harmful behavior. Red teaming helps ensure that AI models are safe, reliable, and resilient before a company uses them.
By prioritizing security over speed, delaying product launches when necessary, and working with regulators to establish industry-wide security standards, companies like Anthropic are demonstrating how point rigorous testing can build confidence and avoid harmful consequences.
For reflection
How can we prioritize security in AI development without sacrificing innovation or speed?
2. Combating Chaos: Will Regulation Catch Up Over Time?
When I began my career at the West Virginia University Health Sciences Center in Morgantown, I took a seminar on time management. The instructor, law professor Forest “Jack” Bowman, told us, “If you don’t manage your time, someone else will.” »
This wise saying could be updated to: “If you don’t manage your AI systems, someone else has to.” »
At a conference sponsored by Reuters Yesterday, Elizabeth Kelly, director of the American Institute for Artificial Intelligence Security, highlighted the challenges facing policymakers in developing protective measures against AI due to the rapid evolution of technology .
Kelly noted that in areas like cybersecurity, it can be easy to circumvent AI security measures. These workarounds are known as “jailbreaks” and can be easily performed by tech-savvy people.
Remember, in David Fincher The Girl with the Dragon Tattoo, the incredulous look that Rooney Mara hacker Lisbeth Salander (Rooney Mara) gives Mikael Blomkvist (Daniel Craig) when he asks her about the difficulty of breaking into a computer system. And that was in 2011! (Written by Steven Zaillian, the film was based on the novel by Stieg Larsson.)
The European Union is one of the parts of the world that is addressing the need for government regulation of AI systems. It is Artificial Intelligence Law (AI Act), which took effect on August 1 of this year, prohibits AI that poses unacceptable risks, such as social scoring, in which individuals receive scores based on their behavior and actions. Social scoring can unfairly limit access to financial services, employment, travel, education, housing and public benefits.
How companies manage this ethical challenge
IBM has already taken proactive steps to address concerns related to European legislation through initiatives such as Precision regulation policy. This policy addresses three components of AI ethics: 1) accountability, 2) transparency, and 3) fairness.
It’s worth taking a look at this paper because it presents a model for how any company, not just IBM, can use AI in the right way and for the right reasons.
For reflection
What is your company doing to align your AI systems with emerging regulations and thus avoid potential legal or ethical risks?
3. AI and the future of work: will technology leave us behind?
Earlier, we looked at the Do No Harm ethical principle when it comes to security. This fundamental ethical imperative also applies to employment. Whatever euphemism you want to use – downsizing, downsizing – the effect is the same: letting loyal, hard-working employees go causes harm, even if there are financial benefits to the companies that do it.
Andrew Yangformer presidential candidate and founder of the Forward party, has been a staunch defender of this issue. “The IMF (International Monetary Fund) has said that around 40 percent of global jobs could be affected,” he noted earlier this year. “That represents hundreds of millions of workers around the world.”
How companies manage this ethical challenge
In response to these concerns, some companies are forming mutually beneficial relationships with nonprofit organizations. “Nonprofits can often connect companies with underrepresented talent in the knowledge workforce,” writes Kathy Diaz, Cognizant’s chief human resources officer, in an article for the World Economic Forum. “The IT Senior Management Forum is one of many nonprofit organizations leading the way in this area. »
For reflection
How can your organization ensure both technological advancement and job security when it comes to its use of AI?
Takeaways
In 2025, businesses will need to answer the crucial question: “How can we use AI wisely and prevent abuse?” If your organization takes this issue seriously, you will go a long way to ensuring that your own AI systems do not end up like HAL 9000 from 2001: A Space Odyssey and become humanity’s worst nightmare.