“Our job is to create tools to help artists, broadcasters and engineers do their jobs better.
“As we build these types of tools and as we integrate this type of technology, we also have to make sure that we are ethical in what we put in place,” SMPTE President Renard Jenkins said during a recent session focused on ethics and regulation in AI.
This conversation featured representatives from the SMPTE Joint Working Group on AI and Video, each of whom shared their perspectives. You can watch the full video below, or read on for highlights.
The working group was formed in 2020. Yves Bergquist, director of the ETC AI and Neuroscience in the Media project, said the group found “both a problem and an opportunity in all the ethical and legal questions surrounding the deployment of artificial intelligence in the media industry.”
Jenkins stressed that the media industry “is a consumer of this technology.” While that in itself is a major issue in the ethics debate, he added: “We also have a great responsibility to ourselves because we are able to reach millions of people with a single program or a single piece of content.”
Bergquist, who is also CEO of Corto, said: “I like to look at artificial intelligence from the media industry perspective, because the media industry is a technology industry.”
He explained that M&E “has a strong track record of combining human creativity and technology. It’s also not a producer of artificial intelligence. It’s a consumer of artificial intelligence products.”
“So it brings a certain sobriety to the way we look at technology. And it’s also a very, very socially conscious technology industry.”
Bergquist also noted that the ubiquity of technology has had “tremendous consequences and impact on the way we live.” As a result, he said, “the ethical question must now be integrated into every conversation about technology.”
The good news
However, Bergquist said: “Practicing ethical AI is the same as practicing good, methodologically sound AI. You have to know the biases in your data. You have to have a culturally and intellectually diverse team.”
In fact, he said, “I have yet to see a requirement for ethical AI that is not also a requirement for rigorous AI practice.”
To be both ethically and intellectually rigorous, Bergquist said, “you have to understand the impact…of your models on your organization, on society as a whole.”
AMD Fellow Frederick Walls agreed, adding, “Transparency and explainability… they’re part of ensuring that your model does what it’s supposed to do.”
Understanding Bias in AI
“The issue of transparency is critical,” Bergquist said. “It’s an issue for which we have tools.”
He cited IBM researcher Kush R. Varshney’s “Trustworthy Machine Learning” (available as a free PDF download), which uses a “food labeling” model to detail important elements such as “how these models were trained, on what data they were trained, what biases were identified during training, what are the variables that contribute the most to the model.”
Bergquist also said that Google researchers have proposed “model cards” to accompany machine learning models, containing “metadata about how the model was trained, the amount of data it was trained on, its performance, the methodologies built into the model, and biases based on the data.”
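As a rough illustration only (not something presented in the session), a model card can be thought of as structured metadata that travels with a model. The minimal Python sketch below uses hypothetical field names and example values to show the kind of information Bergquist describes.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelCard:
    """Hypothetical sketch of the metadata a model card might carry."""
    model_name: str
    training_data: str           # description of the dataset(s) the model was trained on
    training_data_size: str      # rough scale of the training data
    performance: Dict[str, float]  # metric name -> score, ideally broken out by subgroup
    known_biases: List[str] = field(default_factory=list)  # biases identified during training
    intended_use: str = ""       # what the model is (and is not) meant for


# Example: documenting a hypothetical shot-classification model for a media workflow
card = ModelCard(
    model_name="shot-classifier-v2",
    training_data="Internal library of broadcast footage, 2015-2022",
    training_data_size="800k labeled clips",
    performance={"accuracy": 0.91, "accuracy_low_light": 0.78},
    known_biases=["Under-represents non-English-language programming"],
    intended_use="Assisting editors with rough-cut shot selection; not for automated publishing decisions",
)

print(card.known_biases)
```

The point of such a record is less the exact format than that the training data, performance and known biases are written down where downstream users of the model can see them.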
After all, as Jenkins pointed out, “as we know, you actually have to introduce bias into your model, because otherwise it can go off the rails. And we have to think about bias… basically from its original definition, which is to show exclusion.”
Walls added: “There are sources of bias everywhere in an AI model, and I don’t think there’s a way to really get rid of them.
“But I think there is certainly a responsibility for those who… implement a model to understand what those biases are and where they may be coming from.” He also noted that documentation and logging are essential.
The human element and politics
Bergquist stressed that AI “is not independent of humans. It is built by humans and reflects their biases.”
He believes we need to push back on the Silicon Valley hype that casts AI as “the kind of magic technology that is going to take over our lives.”
This false advertising is detrimental to progress because, according to Bergquist, “87% of all AI initiatives in large organizations fail because people either think it’s magic and will solve all their problems, or it’s just completely incompetent and can’t do anything and therefore shouldn’t be considered.”
“Most of the time, these kinds of things fail because individuals haven’t taken the time to put the right infrastructure in place or figure out who the right person should be to lead these kinds of things internally,” Jenkins said.
Walls advised organizations to start with the NIST AI Risk Management Framework as they begin to develop “a business strategy to mitigate the risks associated with the use of AI.” He described it as “a great tool” and acknowledged that policies differ across organizations.
He also referred to the C2PA (Coalition for Content Provenance and Authenticity), an organization “that works on standards to ensure that you can verify the provenance and authenticity of content.”
Jenkins suggested that the SMPTE Report on AI provides “a good basis” or perhaps “a road map” for organizations to create their own AI working groups to determine internal policies.
The second part of this article is available here: Good AI is ethical AI: Everyone in the media and information sector must experiment