The rise of artificial intelligence has forced a growing number of journalists to confront the ethical and editorial challenges posed by this rapidly expanding technology.
Whether AI will merely assist newsrooms or completely transform them was among the questions raised at the International Journalism Festival in the Italian city of Perugia, which ends on Sunday.
– What will happen to jobs? –
AI tools that mimic human intelligence are widely used in newsrooms around the world to transcribe sound files, summarize text, and translate.
At the start of 2023, the German group Axel Springer announced that it was cutting positions at the newspapers Bild and Die Welt, believing that AI could now “replace” some of its journalists.
Generative AI, capable of producing text and images following a simple query in everyday language, is opening new frontiers and has been raising concerns for a year and a half.
Among the concerns: voices and faces can now be cloned to produce a podcast or present the news on television. Last year, Filipino site Rappler created a brand aimed at young audiences by converting its long-form articles into comics, graphics and even videos.
Media professionals agree that their profession must now focus on tasks offering the greatest “added value”.
“You are the ones doing the real things” and “the tools we produce will be your assistants,” Shailesh Prakash, general manager of Google News, said at the Perugia festival.
– It’s all about money –
The cost of generative AI has fallen since ChatGPT, the tool designed by American start-up OpenAI, appeared at the end of 2022, putting the technology within reach of small newsrooms.
Colombian investigative media outlet Cuestion Publica has called on engineers to develop a tool that can dig through its archives and find background information relevant to breaking news.
But few media organizations develop their own language models, the technology at the heart of AI interfaces, said Natali Helberger, a professor at the University of Amsterdam. Such models are necessary for “safe and reliable technology,” she stressed.
– The threat of disinformation –
According to an estimate published last year by Everypixel Journal, AI generated as many images in a single year as photographers had taken over the previous 150 years.
This has raised serious questions about how information can be extracted from the tidal wave of content, including deepfakes.
Media and technology organizations are partnering to combat the threat, including through the Coalition for Content Provenance and Authenticity, which seeks to establish common standards.
“The heart of our business is gathering information, reporting on the ground,” said Sophie Huet, recently appointed global news director for editorial innovation and artificial intelligence at Agence France-Presse.
“We will rely on human journalists for a while,” she added, though they may be assisted by artificial intelligence.
– From the Wild West to regulation –
Media rights watchdog Reporters Without Borders, which has expanded its mission from media rights to the defense of trustworthy information, launched the Paris Charter on AI and Journalism at the end of last year.
“One of the things I really liked about the Paris Charter is the emphasis on transparency,” said Anya Schiffrin, a lecturer on global media, innovation and human rights at Columbia University in the United States.
“How much disclosure will publishers need to make when using generative AI?”
Olle Zachrison, head of AI and news strategy at Swedish Public Radio, said there was “a serious debate going on: should you tag AI content or should people trust your brand?”
Regulations are still in their infancy in the face of constantly evolving technology.
In March, the European Parliament adopted a framework law aimed at regulating AI models without stifling innovation, while guidelines and charters are becoming increasingly common in newsrooms.
At Quintillion Media in India, AI editorial guidelines are updated every three months, said its chief, Ritu Kapur.
None of the organization’s articles can be written by AI, and AI-generated images must not depict real life.
– Resist or collaborate? –
AI models thrive on data, but their hunger for this vital commodity has raised concerns among those who supply it.
In December, The New York Times sued OpenAI and its major investor Microsoft for copyright infringement.
Other media outlets, by contrast, have struck deals with OpenAI: Axel Springer, the American news agency AP, the French daily Le Monde and the Spanish group Prisa Media, whose titles include the newspapers El Pais and AS.
With limited resources in the media industry, it’s tempting to collaborate with new technology, said Emily Bell, a professor at Columbia University’s journalism school.
She said she feels increasing external pressure to “get on board, don’t miss the train.”