Over the past few years, the media has often reported on the power, promise, and transformative capabilities of artificial intelligence in nearly every discipline. From business to healthcare to cybersecurity to education, AI is everywhere. While it can offer a multitude of benefits, what happens when AI gets it wrong? This week, I’ll explore that question and how we can work to minimize the negative consequences while preserving the positive ones.
POTENTIAL AI ERRORS
As most people know by now, AI can make mistakes in many situations. First, because it works with algorithms, it can make wrong decisions that can be financially catastrophic or damage an institution’s hard-earned reputation, whether in business or education. It can be designed with unintended biases or discriminate against certain groups. It can also negatively impact data privacy while equipping cyber attackers with sophisticated tools. Researchers are now collecting data on how often AI makes mistakes. As Patrick Tucker, science and technology editor of the defense news website Defense One, explained in January, when researchers submitted the statements to ChatGPT-3, the generative AI tool “accepted incorrect statements between 4.8 and 26 percent of the time, depending on the statement category.” An error rate approaching 25 percent can be particularly troublesome for any discipline.
THE EVOLUTION OF AI IN BIG TECH COMPANIES
Concerns about AI errors have been documented over the past decade. In 2015, Google discovered a flaw in its Google Photos app, which used a combination of “advanced computer vision and machine learning techniques to help users collect, search, and categorize photos,” according to a report in The New York Times that year. Unfortunately, the app incorrectly labeled images of Black people as gorillas. As a Google representative told the Times, “There is still a lot of work to be done on automatic image labeling, and we are investigating how we can prevent these types of errors from happening in the future.”
Nine years later, in 2024, Google began restricting parts of its Gemini AI chatbot’s capabilities after the tool produced factually inaccurate representations in response to user-submitted generative AI prompts. Some feared Gemini could negatively impact elections around the world.
In 2016, Microsoft launched a Twitter bot called Tay aimed at a younger audience. Unfortunately, the AI project was quickly taken down after it began sharing extremely inappropriate tweets.
Eight years later, in 2024, Microsoft introduced Recall, a new AI-powered Copilot+ PC feature that could take screenshots of a computer’s desktop and archive the data. Cybersecurity professionals were quick to warn that a searchable archive of a person’s computer activity would be an easy target for hackers. According to a Forbes article in June, “due to public reaction, Microsoft plans to make three major updates to Recall: making Recall an opt-in experience instead of a default feature, encrypting the database, and authenticating the user via Windows Hello.” These examples illustrate the continued transformation and evolution of AI, showing its enormous potential while also revealing potential pitfalls.
CONCERNS ABOUT AI IN EDUCATION
AI in education is already being used independently by faculty and staff, as well as in institutional programs. In 2023, Turnitin, a plagiarism detection software tool, introduced a new AI detector. Unfortunately, end users began seeing student work incorrectly flagged as AI-generated. In an August 2023 public statement, Vanderbilt University said it would be one of many institutions to disable Turnitin’s AI detector because “this feature was enabled for Turnitin customers with less than 24 hours’ notice, with no ability to disable the feature at that time, and, importantly, with no idea how it would work. At the time of launch, Turnitin claimed that its detection tool had a 1% false positive rate.” The Washington Post reported in April 2023 that Turnitin claimed its detector was 98 percent accurate, while warning end users that plagiarism reports “should be treated as an indication, not an accusation.”
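A quick back-of-the-envelope calculation shows why even a seemingly small false positive rate worried institutions like Vanderbilt. In the sketch below, only the 1 percent figure comes from Turnitin’s public claim; the volume of papers is a hypothetical number chosen for illustration.

```python
# Illustrative only: the scale effect of a "1% false positive rate."
# The 0.01 rate is Turnitin's publicly claimed figure; the paper count is hypothetical.
false_positive_rate = 0.01
authentic_papers = 50_000  # hypothetical: papers written without AI at a large university each term

wrongly_flagged = false_positive_rate * authentic_papers
print(f"Expected students wrongly flagged per term: {wrongly_flagged:.0f}")
# Expected students wrongly flagged per term: 500
```

Even at 1 percent, the absolute number of students facing an unwarranted accusation grows quickly with enrollment, which helps explain why the reports were meant as “an indication, not an accusation.”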
AI AND HEALTH
AI is being studied, tested, and implemented across many disciplines, and where it has not yet been integrated into a curriculum, educational institutions have some breathing room before implementing it fully. In healthcare, the World Health Organization issued a warning in May 2023, calling for “rigorous oversight necessary to ensure technologies are used safely, effectively and ethically.”
While artificial intelligence can be a powerful tool in medicine, it can also pose risks if safeguards are not put in place. The Pew Research Center found in 2023 that 60 percent of Americans would be uncomfortable with their healthcare provider relying on AI. Yet AI could help doctors analyze diagnostic images more quickly and accurately, develop innovative drugs and therapies, and serve in an advisory role to a medical team. Once AI can deliver safe and proven healthcare regimens, it could become more useful in medical schools as well.
AI AND CYBERSECURITY
AI is a critical tool for protecting against cyberattacks through sophisticated monitoring, detection, and appropriate response. It is a proven tool for protecting both our data and our privacy, thanks to its ability to analyze massive amounts of data and detect unusual patterns while scanning networks for potential weaknesses. Unfortunately, cybercriminals are using AI tools to educate themselves and circumvent what AI is trying to protect. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, told Axios in May that AI “makes it easier for anyone to become a bad guy” and “will exacerbate the threats of cyberattacks – more sophisticated phishing, voice cloning, deepfakes, foreign malicious influence and disinformation.”
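As a rough illustration of the kind of pattern analysis described above, here is a minimal sketch of flagging unusual activity with a simple statistical threshold. The data and threshold are invented for the example; real AI-driven monitoring uses far richer features and models.

```python
# Minimal sketch: flag unusual activity with a z-score threshold.
# Hypothetical data; real AI-based monitoring is far more sophisticated.
from statistics import mean, stdev

hourly_login_failures = [3, 5, 4, 6, 2, 5, 4, 48, 3, 5]  # one suspicious spike

mu = mean(hourly_login_failures)
sigma = stdev(hourly_login_failures)

for hour, count in enumerate(hourly_login_failures):
    z = (count - mu) / sigma
    if z > 2:  # arbitrary threshold for "unusual"
        print(f"Hour {hour}: {count} failed logins looks anomalous (z = {z:.1f})")
```

The same idea, scaled up to millions of events and more sophisticated models, is what lets AI surface the “unusual patterns” defenders rely on.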
STRATEGIES TO REDUCE AI RISKS
To balance the enormous potential of AI with its risks, experts recommend specific audits to ensure that AI tools are appropriate, accurate, and free of bias. It is also important that developers build explicit ethical safeguards into the creation of AI tools and processes. Educating the public and private sectors about the potential errors and risks of AI should be part of the formula for the future, and higher education could play a central role there. In May, a trio of authors writing in the Harvard Business Review identified “Four Types of GenAI Risks and How to Mitigate Them,” summarizing them as misuse, misapplication, misrepresentation, and misadventure. These are just some of the options for minimizing the risks associated with the use of AI. The business and education sectors must work together to reduce those risks so that we can all benefit from the many positive opportunities AI offers.