The smartest business leaders know that developing a strong AI governance framework is not only the right thing to do, it's also good for the bottom line, especially as investors become more informed and regulators come knocking.
And responsible implementation of any AI system begins with thoughtful leadership at the executive level, according to Cansu Canca, director of Responsible AI Practice at Northeastern University's Institute for Experiential AI and research associate professor of philosophy.
"Leaders need to take it upon themselves and really defend this cause," says Canca. "Because without them, it won't hold up."
According to Canca, this top-down approach works across industries, whether it's a financial services company looking to reduce the time it takes to analyze thousands of data sets or a big-box retailer looking to automate its customer service operations.
“It’s not just ethicists or developers who are concerned about what can go into an AI system,” she says.
Canca led the teaching faculty for the "Training Managers in Responsible AI" course held on Northeastern's Boston campus. The two-day training was designed for current and future executives looking to pioneer responsible AI practices in their organizations.
The teaching team included Ricardo Baeza-Yates, research director at Northeastern's Institute for Experiential AI, AI ethicists Laura Haaber Ihle and Matthew Sample, researcher Annika Marie Schoene, postdoctoral researcher Muhammad Ali, and principal investigator Steve Johnson.
As CEO of Notable Systems Inc., which uses AI to help healthcare providers capture and enter data, Johnson developed his own AI framework in the form of five rules, which he often shares with business leaders.
Rule #1: Don’t believe in magic.
"If a provider says, 'Plug it in, it just works, you can go find something else to do,' that's probably not true," Johnson says.
A company can also claim its system is 99% accurate, but users need to worry about the 1% of the time the system fails, because that's when they might run into problems, he says.
Rule #2: Don’t get distracted by the words “artificial intelligence.”
"I like to replace it with the word 'software,'" he says. "Proceed as you have throughout your career with any software system. Examine its subtleties and peculiarities, its strengths and weaknesses. You always do this with any software system."
Rules #3 and #4 go hand in hand. First, trust but verify, on an ongoing basis, that your system is working properly. Second, check whether your vendor offers a way for users to verify a machine's competence at particular tasks.
Johnson used his own business as an example. Notable Systems works in data entry, and the AI systems it works with let users know when they fail to recognize a particular piece of information.
Rule #5 is to ask vendors how their system helps close the loop.
"That means: how does the system collect and learn from its output, as well as from any human intervention that occurs?" Johnson says. "Because the human element, your judgment and advice, is a golden source of truth. This is what will keep your system healthy and improving."
Andrew Grover, chief risk officer at Bangor Savings Bank, said the conference offered a useful analysis of the issue.
He found Johnson’s five rules to be an easy guide to follow. Bangor Savings Bank hopes to leverage AI to increase efficiency, he says.
"I keep quoting Jurassic Park," he says. "I think Jeff Goldblum's character made the comment, 'Just because we can, does that mean we should?' I keep saying this with AI: there are many things we can do, but we have to constantly challenge ourselves and do the right thing."