Artificial intelligence is playing a growing role in higher education, and experts in the field are paying increasing attention to its ethical use. Dr. Terence Ow, WIPLI Fellow in AI and professor of information systems and analytics in the College of Business Administration, has thought extensively about how higher education institutions can ensure that artificial intelligence is used responsibly.
The past predicts the future
Ow describes artificial intelligence, especially large language models, as pattern recognition tools. AI can recognize an input or detect patterns in data, compare them to previous instances in its training data, and then predict the logical output based on that information.
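The idea of predicting the next output from patterns in past data can be illustrated with a toy sketch. Real large language models use neural networks trained on vast corpora, but this minimal bigram model (all names and the sample sentence are invented for illustration) captures the core mechanic Ow describes: recognize a pattern seen in training data and predict its most common continuation.

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows each word in the training text."""
    words = text.split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict(followers, word):
    """Predict the most frequent follower seen in training data."""
    if word not in followers:
        return None  # no matching pattern in the training data
    return followers[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" follows "the" most often in the sample
```

Note that the prediction is entirely a product of the training text: if that text is skewed, the prediction is skewed with it, which is exactly the data-quality concern raised below.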
However, an AI’s ability to do this depends on having a robust and unbiased data set.
“If your past data has any bias or distortion, your final result will be inaccurate and will need to be corrected,” Ow says. “It will take time for the people working on these topics to refine the dataset and correct the errors.”
Distinguishing facts from opinions
Large language models currently produce “hallucinations”: answers presented as fact that are nonetheless inaccurate. These phenomena reflect the limitations of artificial intelligence. For example, AI has difficulty putting its results into context.
“AI has a hard time determining whether something is a fact or an opinion, for example, and if you replicate something a million times and it’s wrong, AI will label it as a fact because that is the pattern it most often completes. That’s a big flaw right now,” Ow says.
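Ow’s point about repetition can be made concrete with a toy sketch. A system that answers by majority vote over its training data will confidently return a falsehood whenever the falsehood outnumbers the truth; the claims and their counts below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical training data: the accurate claim appears 5 times,
# a widely replicated distortion appears 95 times.
training_claims = ["the capital is A"] * 5 + ["the capital is B"] * 95

def most_common_answer(claims):
    """Answer with whichever claim appears most often -- frequency,
    not truth, decides the output."""
    return Counter(claims).most_common(1)[0][0]

print(most_common_answer(training_claims))  # the distortion wins on frequency alone
```

The sketch shows why, as Ow notes, replicating an error a million times can make a frequency-driven system treat it as fact.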
People who know how to use the right AI tool to enhance their own critical thinking skills and independent judgment will be best positioned for tomorrow’s job market.
Ethical application
While AI opens up vast possibilities for positive change, unethical actors have access to these same tools. For example, companies hoping to increase cigarette sales can more accurately target people who tend to smoke or are trying to quit. Deepfake videos allow fraudsters to imitate the faces and voices of victims’ loved ones.
In this world, it is more important than ever that students are educated on the limitations of AI and its appropriate use cases.
“We need to think about the societal impact of AI: who is this data for, what is it used for, and how do we direct people to value-creating activities,” Ow says. “The use of AI has the potential to improve your life and bring knowledge and opportunity to the individual, the community, and society. It levels the playing field and offers hope for greater social mobility; you come to Marquette because you want to use technology for those purposes.”
To learn more, join us on November 21 at Marquette University for the inaugural AI Ethics Symposium, “From Policy to Practice,” sponsored by the Northwestern Mutual Data Science Institute and the Marquette Center for Data, Ethics and Society.