Despite recent economic difficulties, the video game industry continues to be one of the most profitable entertainment businesses.
This is no accident; game studios are spending hundreds of millions of dollars to make their games bigger and more immersive, attracting both new and longtime gamers.
Graphics are edging closer to uncanny valley territory and companies like Meta and Sony continue to blur the lines between the physical and digital worlds with their mixed reality headsets and other sensing technologies.
Artificial intelligence, meanwhile, has been hailed as the next big technological breakthrough, and video game creators use it for a myriad of processes, including programming non-playable characters, creating procedurally generated levels, moderating game chat logs, and personalizing gaming experiences.
Certainly, the industry is at the forefront of progress, but an often overlooked aspect is the ethical challenges that developers face when using these technologies to create their games.
There are important discussions to be had about data privacy, biased algorithms, and tantalizing gameplay loops made more addictive with the help of AI.
As AI continues to play a larger role in game development, how should game creators use it more responsibly?
Researchers at Northeastern University address this question head-on in a recently published ACM paper, suggesting that the tools and frameworks being developed for the emerging field of responsible AI should be adopted by the gaming industry.
“It’s clear that the gaming industry is moving to a different level of risk with AI, but the ethical aspect is lagging behind,” says Cansu Canca, director of responsible AI practices at the Institute for Experiential AI and one of the authors of the research.
“Game designers are concerned that without proper ethical guidance, they will also not know how to approach these complex and new ethical issues,” she adds.
Other authors of the paper include Northeastern researchers Annika Marie Schoene and Laura Haaber Ihle.
Although AI is used in different ways in game development, the authors target a few specific issues, including how game developers use AI to develop game mechanics, how generative AI can be used in image creation and other creative activities, and how companies collect and use user data.
Game studios can often be massive operations with multiple departments spanning dozens of teams and individual workers. According to the researchers, the larger these companies are, the greater the risk of silos and breakdowns in communication.
An effective AI ethical framework is therefore designed from its inception to be integrated into every part of the company’s hierarchical system. In practice, a comprehensive AI ethical framework can help all parties – from designers and programmers to story writers and marketers – make better decisions about specific AI use cases and should be based “on fundamental theories of moral and political philosophy,” the researchers say.
“In some sense, the broader framework of responsible AI doesn’t necessarily change when applied to games,” says Canca.
For example, a risk assessment tool can help game developers weigh the pros and cons of certain use cases before implementing them, the researchers write. A comprehensive risk assessment describes the strengths and weaknesses of the proposed use and technology as well as potential positive and negative outcomes.
“Using an AI ethics risk and impact assessment tool could allow game designers and developers to evaluate both the game and the embedded AI systems for their impact on individual autonomy and action, their potential risks of harm and benefits, and the distribution of burdens and benefits within the target audience as well as in society in general,” they write.
The researchers emphasize that this is just one tool that developers can take advantage of in a broader context to facilitate this process. Other tools that should be part of this AI framework include bias testing and error analysis of certain AI technologies.
Another important consideration in AI development is how companies use data to train their models and increase their bottom lines.
The researchers suggest that using rating labels similar to the current Entertainment Software Rating Board system used in the United States falls far short of communicating to gamers the risks associated with data and AI ethics.
Instead, approaches similar to the model cards and AI and data labels developed in the field of responsible AI would bring more transparency to the process, the researchers explain. These would include detailed information, such as how the developers used AI in creating the game and how a user’s specific data is collected and used in the game.
“Data is incredibly valuable, and this type of information cannot just be used to retrain a model,” says Schoene.
“I would like to know what happens to my data. I would like to know how my play style informs and influences all types of game agents,” she adds.
But Schoene points out that, used wisely, AI could actually help grow the industry and attract new players.
“Yes, there are all these compromises to be made,” she says. “There are dangers, but there are also positives. AI could be used to make games more accessible to people with different abilities if done well.”