The artificial intelligence industry’s remarkable growth in recent years has sparked excitement, wonder and no small amount of dread.
Promises of new efficiency and automation in the workplace go hand in hand with fears of job losses. The sheer power of generative AI platforms means that sophisticated analysis – and sophisticated plagiarism – have never been easier.
As AI-powered industries charge headlong into an unknown future, the University of Virginia’s Darden School of Business grapples with the many implications of AI’s peril and promise.
At a recent conference hosted by the LaCross Institute for Ethical Artificial Intelligence in Business, stakeholders from academia, government and industry sought to make sense of this one-of-a-kind technological marvel and its extraordinary ethical complications.
Welcoming attendees to the new institute’s first public event, Dean Scott Beardsley, whose background includes focusing on technology regulation as well as AI ethics, recalled an article he wrote in the 1990s entitled “Broadband Changes Everything”. Likewise, we are now approaching an era in which it seems AI could change everything, he said, with results similar to the widespread adoption of high-speed internet.
One of the key societal challenges will be ensuring that the adoption and growth of AI technologies develop responsibly – an outcome that is far from a given, particularly amid the many unknowns of a nascent space.
“Technology is a rapidly evolving topic, and technological advancements don’t wait for certainty,” Beardsley said. “Many of us are waiting for what is absolutely going to happen, but I believe the only thing that is certain is uncertainty. I’m incredibly optimistic in some ways and I’m also very worried.”
One of the challenges for individuals and organizations is to ensure that AI develops in a way that contributes to the well-being and flourishing of humanity, that AI is “in the service of humanity” and not “humanity serving AI,” Beardsley said.
“The Ethical AI Value Chain” served as the conference theme and a fundamental framework for leaders and academics to consider the business and ethical issues that run through every aspect of AI development and deployment in business.
From infrastructure and data, to algorithms and applications, to impacts and people, and every phase in between, business and ethical issues arise – often in tension – and must be resolved if AI is to keep its promises ethically.
“We suggest that ethical AI is an outcome, not a feature of the product,” said Marc Ruggiano (MBA ’96), director of the LaCross Institute. “This is the end result when a series of considerations across the AI value chain are taken into account and trade-offs between them are made in an ethical manner.”
The conference was organized to explore each phase of the ethical AI value chain and encourage participants to formulate an agenda to guide their ethical AI journey in the years to come.
Introducing accountability
In one of the day’s talks, Notre Dame Mendoza Professor Kirsten Martin (MBA ’99, Ph.D. ’06), a leading authority on technology ethics, called attention to the ethical AI value chain by highlighting the “value-laden” decisions made at almost every stage of AI development and deployment, an idea at odds with the objectivity claimed for much AI data.
Tech companies often claim that an algorithm is an impenetrable “black box” and try to avoid accountability by claiming the results are “objective, effective and accurate,” she said.
“The idea that everything is more efficient – this idea of efficiency, precision and objectivity – is so pervasive in our assumptions about AI that we don’t even stop to ask what’s behind it and what it hides,” Martin said, noting that the world is currently in the middle of a “hype bubble” around AI, with many in the industry positioning it as a clear benefit.
Responsible development of AI would involve real responsibility on the part of those who develop the technologies, especially when companies exercise power over stakeholders.
The development of new technologies is often accompanied by corporate attempts to disclaim responsibility for those technologies’ negative implications. Rather than treating such denials as a stopping point, they should be seen as a fairly typical feature of how new technologies develop.
“We shouldn’t take this as a stopping point when they say we’re not responsible for these negative implications, we should just view this as an ongoing part of the practice of responsibility,” Martin said. “Businesses should expect to be accountable for their decisions that affect other people.”
A topic as ubiquitous as AI shakes up traditional notions of who counts as a stakeholder, Martin said, because it involves a significant number of “enrolled stakeholders” – those who have no choice about whether to contribute to the company’s value creation.
“We have a whole body of scholarship that assumes all stakeholders are willing participants in a mutually beneficial relationship – otherwise they would leave,” Martin said. “So what happens if you have stakeholders who are most impacted by the company’s decisions, who are neither voluntary participants nor beneficiaries of those decisions?”
The implications for management, leadership and scholarship are all significant when the “fundamental assumption” of mutually beneficial value creation is shaken, Martin said.
The strategic value of AI
Darden Professors Raj Venkatesan and Tom Davenport delivered separate talks exploring different facets of AI capabilities and their implementation within businesses. Venkatesan, author of The AI Marketing Canvas, gave examples of AI-related errors as well as of companies using AI capabilities to create personalized relationships with consumers, leading to greater engagement.
Generative AI fine-tuned on the unique customer data a company holds will increasingly be a source of competitive advantage, Venkatesan said, while envisioning a challenging frontier in which companies increasingly interact with an individual’s agentic AI rather than with the person themselves.
Davenport, most recently the author of All About AI: How Smart Businesses Win Big With Artificial Intelligence, said that despite all the excitement and concern, generative AI is generally still in the experimental phase for businesses, with relatively little deployed into production. Most data leaders believe generative AI will transform their organizations, Davenport said, but generally have “yet to figure out how to get real economic value from this technology,” with questionable data quality and uncertain use cases remaining obstacles.
Smart businesses looking to implement AI do so deliberately using a process chain that progresses from strategy, to use case, to model development, then deployment, and finally, monitoring – a clear plan and the opposite of “random acts of AI”.
Partnership with AI
In addition to industry sessions on healthcare, technology and talent management, panels focused on infrastructure, data, tools, applications, management and people, with conference organizers taking a broad view of what constitutes AI in 2024.
In a session dedicated to the essential leadership skills of the future, Darden Professor Roshni Raveendhran said humans continue to excel at what she calls composite intelligence, or the ability to combine physical, emotional and perceptual intelligence in order to plan and respond in the way a particular context requires. This higher-level functioning continues to separate humans from AI, said Raveendhran, whose work often explores technology’s influence on individuals and organizations.
Raveendhran said she was particularly interested in the implications of increasing or changing human capabilities when combined with new technologies.
“It’s not that AI can replace humans, but the idea that humans with AI could potentially replace humans without AI,” Raveendhran said, adding that the adoption of AI capabilities by organizations remains controversial in some sectors, citing the example of students returning from summer internships who reported that their companies had banned the use of generative AI tools.
“This is going to change, because if these organizations are going to adapt and learn, their people are going to have to learn with AI and learn how to deploy AI instead of just shutting it down,” Raveendhran said. “It’s learning by partnering with AI.”
Darden Professor Gabrielle Adams said the rapid growth of AI will likely require changes in how individuals both learn and teach.
“We really need to think very intentionally about how we’re going to remake pedagogy and how we’re going to really change our psychology now that we have AI as a partner,” Adams said. “I don’t think we built our education system for this.”
From ideas to action
Although the conference represented the first major event under the LaCross Institute banner, the event built on decades of ethical leadership activity at UVA and Darden.
The LaCross Institute, created in 2024 following the largest gift in Darden history from David LaCross (MBA ’78) and his wife, Kathy, aims to ensure that concepts such as business ethics and responsible leadership are integrated into the extraordinary opportunity surrounding AI. Doing so will require focused and deliberate action, UVA leaders know, and Darden Professor Yael Grushka-Cockayne concluded the conference with a planning session for the university’s ethical AI agenda, taking many of the ideas from the conference and turning them into opportunities for action.
Grushka-Cockayne urged attendees to prioritize increasing their AI acumen and building their organization’s AI capabilities over the coming years.