McIntosh recommends seeking out third-party resources and subject matter expertise. “This will go a long way toward accelerating the development and execution of your plan and framework,” says McIntosh. “And, based on your current program management practices, provide the same level of rigor – or more – for your AI adoption initiatives.”
Move slowly so the AI doesn’t go wild
The Laborers International Union of North America (LIUNA), which represents more than 500,000 construction workers, public sector employees, and postal workers, has begun using AI, primarily to check the accuracy and clarity of documents and to draft contracts, says IT director Matt Richard.
As LIUNA expands AI use cases in 2024, “it makes us ask the question of how we use AI ethically,” he says. The organization began testing Google Duet to automate the process of drafting and negotiating agreements with contractors.
Currently, union officials are not using AI to identify members’ wants and needs, or to review hiring data that could be sensitive and reflect bias against people depending on how the models are trained, Richard explains.
“These are the areas where I get nervous: when a model talks to me about a person. And I don’t think we’re ready to jump into that space yet, because frankly, I don’t trust publicly trained models to give me insight into who I want to hire,” he says.
Richard, however, expects a “natural evolution” in which, eventually, LIUNA may want to use AI to gain insights about its members to help the union offer them better benefits. For now, “there’s still a gray area on how we want to achieve this,” he says.
The union is also trying to increase its membership, which includes using AI to effectively identify potential members, “without identifying the same homogenous people,” says Richard. “Our organization works very hard and does a good job of empowering minorities and women, and we want to grow those groups.”
This is where Richard becomes concerned about how AI models are used, because avoiding “the rabbit hole of chasing the same demographic stereotypes” and introducing bias means humans need to be part of the process. “We don’t let the models do all the work,” he says. “You understand where you are today, and then we stop and say, ‘OK, humans need to step in here and look at what the models are telling us.'”
“You can’t let the AI run wild… without intervention. Then you perpetuate the problem,” he says, adding that organizations shouldn’t take the “easy way out” with AI and focus only on what the tools can do. “My fear is that people buy and implement an AI tool, abandon it and trust it. … We have to be careful, these tools don’t tell us what we want to hear,” he says.
To that end, Richard believes AI can be used as a helping hand, but IT leaders need to rely on their teams’ intuition “to make sure we don’t fall into the trap of just trusting flashy software tools that don’t give us the data we need,” he says.
Taking AI Ethics Personally
Like LIUNA, Czech Republic-based global consumer credit provider Home Credit is at the beginning of its AI journey, using GitHub Copilot for coding and documentation processes, says Jan Cenkr, group CIO.
“This offers a huge advantage in terms of time savings, which in turn also has a cost advantage element,” says Cenkr, who is also CEO of EmbedIT, a subsidiary of Home Credit. Ethical AI has been a priority for Cenkr since the beginning.
“As we began rolling out our AI tool pilots, we also had extensive discussions internally about creating ethical governance structures to support the use of this technology. This means we have real controls in place to ensure we do not breach our codes of conduct,” he says.
These codes are regularly updated and tested to ensure they are as robust as possible, Cenkr adds.
Data privacy is the most difficult consideration, he adds. “All information and data that we feed into our AI platforms must absolutely comply with GDPR regulations.” Since Home Credit operates in multiple jurisdictions, IT must also ensure compliance across all of these markets, some of which have different laws, adding to the complexity.
Organizations should develop their governance structures “in a way that reflects your own personal approach to ethics,” says Cenkr. “I believe that if you take the same care in developing these ethical structures as you do in the ethics you apply in your personal and daily life, those structures will be that much safer.”
Additionally, Cenkr says the IT department must be prepared to regularly update its governance policies. “AI technology is advancing daily and it’s a real challenge to keep pace with its evolution, as exciting as it is.”
Put guardrails in place
AI tools such as chatbots have been used at UST for several years, but generative AI is a whole new ball game. It fundamentally changes business models and has brought ethical AI into the debate, says Krishna Prasad, chief strategy officer and CIO of the digital transformation company, though he admits that “it’s a bit more theoretical today.”
Ethical AI “doesn’t always come up” in implementation considerations, Prasad says, “but we talk about the fact that we need responsible AI and some ability to provide transparency and trace the manner in which a recommendation was made.”
Discussions among UST leaders focus on what the company does not want to do with AI “and where we want to draw the lines as we understand them today; how to stay true to our mission without causing harm,” says Prasad.
Echoing others, Prasad says this means humans need to be part of the equation as AI becomes more deeply embedded within the organization.
One question that has arisen at UST is whether having leaders hold a conversation about employee performance while a bot listens in constitutes a compromise of privacy. “Things [like that] started to bubble up,” Prasad says, “but at this point, we’re comfortable moving forward using [Microsoft] Copilot as a way to summarize the conversations.”
Another consideration is how to protect intellectual property around a company-developed tool. “With the protections offered today by software companies, we always feel like the data is contained within our own environment, and there is no evidence of external data loss,” he says. For this reason, Prasad says he and other executives have no qualms about continuing to use certain AI tools, particularly because of the productivity gains they are seeing.
Although he thinks humans should be involved, Prasad also worries about what they bring to the process. “Ultimately, human beings are inherently biased because of the nature of the environments we are exposed to, our experiences, and how that shapes our thinking,” he explains.
He also worries about whether bad actors could gain access to certain AI tools that use customer data to develop new models.
These are areas that executives will need to worry about as the software becomes more widespread, Prasad says. In the meantime, CIOs need to lead the way and demonstrate how AI can be put to good use and what impact it will have on their business models, and bring leaders together to discuss the best way forward, he says.
“CIOs need to play a role in leading this conversation, because they can bust the myths and also execute,” he says, adding that they also need to prepare for these conversations to become very difficult at times.
For example, if a tool offers a certain functionality, “do we want it to be used as much as possible, or should we hold back because it’s the right thing to do?” says Prasad. “This is the hardest conversation,” but CIOs must make clear that a tool “might be more than you bargained for. For me, this part is still a little vague, so how can we put constraints around the model… before making the choice to offer new products and services using AI.”