With approximately 75% of operators now launching generative AI commercially, integrated AI strategies are increasingly critical to shaping how AI and networks interact. At the same time, societal concerns about AI adoption are pushing companies to take an ethical approach to implementing the technology.
At a recent panel discussion titled “AI 2025 – Shaping the Telecommunications Landscape of Tomorrow,” experts from GSMA Intelligence as well as operators, including Telefonica, shared their views on the responsible adoption of AI.
GSMA Chief Intelligence Officer Peter Jarich opened the conversation with an overview of how the AI space has evolved over the past year and how it will continue to evolve in 2025 as the technology is rapidly adopted. He highlighted that around two thirds of operators have adopted an integrated AI strategy, saying: “AI is incredible for making networks work better, for driving operational efficiency, for improving what operators do with their networks. But to take full advantage of AI, to deliver the use cases we want, we need networks that can support it.”
Jarich noted that increased use of AI on networks will drive upstream traffic: mobile networks have traditionally been designed to carry far more downstream traffic, so this shift in AI workloads will affect how networks are designed and built. Operators currently use AI primarily in network operations, often to automate processes and troubleshooting, as well as for security. Supporting generative AI at scale will require additional investment in capacity to handle the new traffic it generates.
There are of course many questions about how operators will make money with AI; focusing internally on network operations may be less risky than focusing externally on the customer side, but ultimately operators need to grow their businesses by generating revenue. Jarich noted a clear trend toward collaborating with major cloud players such as Google and Microsoft to leverage the innovation happening across the ecosystem.
Security and ethics
Jarich argued that in terms of security, AI has a dual nature: for years, operators have used AI to understand and protect against the threat landscape, but increasingly AI is being used to generate those threats, driving more fraud on networks. Around 90% of operators cite security as their top strategic priority.
Focusing on a key theme, Jarich said responsible AI is not just lip service. Much like sustainability before it, the ethical use of AI has become a critical pillar of business that companies understand the importance and value of pursuing. Just as commitments to net zero became widespread, Jarich said the same trend is now clear with responsible AI, with more than 70% of operators having an AI governance framework in place. He acknowledged that some markets are taking the lead on AI governance, but noted that the need for it is recognized globally.
Ethics in AI is of course crucial, but sustainability is just as essential. The mobile sector has led other sectors in net zero commitments, and while AI can help improve network efficiency, the increase in energy consumption needed to support AI workloads could more than offset those gains. Jarich said it was important to consider these aspects together: 75% of an operator’s energy consumption comes from its radio access network, so using AI solutions to optimize RAN efficiency makes perfect sense and helps counter AI’s increased power consumption.
Adopt AI responsibly
Alix Jagueneau, Head of External Affairs at the GSMA, took the stage to discuss the GSMA’s Responsible AI initiative, a partnership with McKinsey aimed at assessing the opportunity behind responsible AI. Over the next 15 to 20 years, the AI opportunity for telecommunications is estimated at US$680 billion, but to capture it, Jagueneau said it was crucial to link innovation to responsible deployment and ethics. She argued that because the telecommunications industry is heavily regulated and heavily reliant on trust, adopting AI responsibly makes clear business sense. With 65% of operators adopting an AI strategy for their business, internal governance is becoming a key priority, with organizations appointing champions to ensure governance is in place, anticipate potential issues and manage their reputation.
In September, the GSMA launched the Responsible AI Maturity Roadmap, a voluntary common framework giving operators a consistent way to demonstrate that they are applying AI responsibly. With reputation management so important to businesses, the roadmap helps operators show their commitment to responsible AI and be seen as leaders in this area – especially at a time when policymakers are focused on AI governance. In Europe, the EU AI Act is now in force, and governments around the world are likewise seeking assurance that the private sector is deploying AI responsibly.
The GSMA works with operators around the world, and not all are at the same stage of AI deployment and adoption: some are leaders, while others are slower to adopt AI and transform their business. Jagueneau noted that regardless of an operator’s AI maturity level, the roadmap helps plan business strategy and offers best-practice guidance for implementing AI responsibly, whatever the operator’s ambitions. As more operators engage with and use the tool, the GSMA can compile more resources and content to share learning across the industry.
Jagueneau described the RAI Maturity Roadmap as “a way to facilitate internal conversations within the company, to have those discussions before deploying an AI tool and to think about the different checkpoints. It’s also about being visionary – it’s not just for the sake of compliance. If you think about responsible AI beyond the compliance perspective, it’s more about being ahead of the curve and not playing catch-up.”
Around 22 mobile operators have adopted the RAI Maturity Roadmap, across a wide range of geographies – not just developed markets, but also operators in Africa and Asia Pacific. Although the roadmap was developed with the help of telecommunications companies, Jagueneau emphasized that it is industry-agnostic and has the potential to be replicated in other areas.
AI Ethics in Corporate Governance
Joaquina Salado, Head of AI Ethics at Telefonica, provided more details on Telefonica’s adoption of the RAI Maturity Roadmap. She said Telefonica established ethical principles in 2017 to define its ambitions for working with AI, and that the company has recently updated these principles to strengthen certain elements, for example on sustainability. Telefonica’s AI governance model includes roles, responsibilities and processes to ensure it reaches the entire organization – something that has become particularly important with the rapid adoption of generative AI across the company.
As part of adopting the RAI Maturity Roadmap, Telefonica created the role of responsible AI champion within its organization, initially in areas working directly with AI but now almost everywhere following the proliferation of generative AI. Salado noted that this gives the operator the opportunity to instill an ethical mindset by driving cultural change within the company – and that it helps retain in-demand talent, as more and more workers are attracted to companies with strong CSR values.
“Alongside this governance model, we work a lot on awareness, communication and training of our employees. We have fully integrated responsible AI into the AI strategy, both on the business side and in terms of training: whenever we train our employees in AI, responsible AI is a key part of it.”
Telefonica also uses an internal group of experts with different specialties to escalate use cases that may have stronger ethical components or pose higher risk now that the EU AI Act is in place. Salado noted that the group is in the process of mapping regulations: “in the case of Europe, we have the requirements of the AI Act mapped in our workflow, where we analyze and assess the risks of our use cases, and it is also ready to implement other regulations that may emerge in other geographies where we operate, such as Latin America.”
Salado cited Verify, a tool developed by Telefonica, as an example of responsible AI combining business opportunity and societal benefit. Verify can identify AI-created text, videos and images, helping businesses and individuals detect and combat misinformation. Adopting this responsible approach helps build trust with customers and society as a whole, while mitigating risks from the outset.