After several days on the show floor at Mobile World Congress 2026, one theme became increasingly clear: the voice channel is entering a new era.
Across mobile network operators, device manufacturers, technology vendors, and AI companies, voice is no longer being treated as a legacy communication channel. Instead, it’s being reimagined as the primary interface for AI-driven interactions, and that shift is bringing a new framework for trust in communications.
The new trust layer is made up of:
- Identity (who is calling)
- Authenticity (is it spoofed/manipulated)
- Intent / Risk (is it likely legitimate or harmful)
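To make the framework concrete, here is a minimal sketch of how those three signals might be modeled and combined into a per-call decision. All of the names, thresholds, and the decision logic below are illustrative assumptions, not any vendor's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"   # deliver the call normally
    FLAG = "flag"     # deliver, but warn the subscriber
    BLOCK = "block"   # stop the call at the network

@dataclass
class CallTrustSignals:
    """Hypothetical signals a network-layer trust service might attach to a call."""
    identity_verified: bool   # Identity: is the caller who they claim to be?
    authentic: bool           # Authenticity: no spoofing or manipulation detected?
    risk_score: float         # Intent / Risk: 0.0 (likely legitimate) .. 1.0 (likely harmful)

def assess_call(signals: CallTrustSignals, block_threshold: float = 0.8) -> Verdict:
    """Combine the three trust-layer signals into a single per-call verdict."""
    # A spoofed call or very high risk fails outright, regardless of claimed identity.
    if not signals.authentic or signals.risk_score >= block_threshold:
        return Verdict.BLOCK
    # Unverified identity or moderate risk gets surfaced to the subscriber.
    if not signals.identity_verified or signals.risk_score >= 0.5:
        return Verdict.FLAG
    return Verdict.ALLOW

# Example: manipulation detected, so the call is blocked despite a verified identity claim.
verdict = assess_call(CallTrustSignals(identity_verified=True, authentic=False, risk_score=0.2))
print(verdict.value)  # block
```

The ordering encodes the layering described above: authenticity is checked before identity, because a verified-looking but manipulated call is the more dangerous failure mode.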
Let’s dive into how we saw this shift showing up at MWC.

Voice over the mobile network is becoming the natural interface for AI
Companies like Deutsche Telekom and Samsung showcased AI assistants embedded directly into calling experiences — summarizing conversations, surfacing real-time insights, and analyzing tone mid-call. AI is no longer just supporting communication; it is actively participating in it.
That shift raises the stakes for identity. As AI agents act more autonomously, accessing data, managing permissions, and executing tasks, verifiable caller authenticity becomes the foundation everything else is built on. In a world of deepfakes and AI-scaled fraud, contextual trust isn't a feature. It's a prerequisite for engagement.
AI is making fraud scalable, targeted, and harder to detect
At the same time, the rise of generative AI is creating new challenges for trust in communications.
Across booth demonstrations and sessions, AI-generated voice scams and deepfakes emerged as a central focus, with live demos of real-time fraud detection, panels debating how identity verification must evolve in this new era, and broader discussions around how to build trust across the entire lifecycle of a call. The message was consistent: fraud tactics are advancing faster than current defenses, and mobile network operators are increasingly being looked to as the first line of response.
In Hiya’s 2026 State of the Call Report, 31% of consumers reported they received a deepfake voice call in the past 12 months, a number that continues to rise year over year. As AI fraud continues to evolve, the industry is increasingly focused on solutions that can detect suspicious activity in real time and alert users during live interactions.
Protection needs to move to the network
Another clear trend was the shift toward protection built directly into telecom infrastructure.
Rather than relying solely on apps or device-level tools, operators are exploring ways to deliver security and trust services at the network layer. This approach has the potential to provide broader coverage and reduce friction for consumers by making protection automatic. It also creates a new opportunity for operators: verified identity and branded calling turn what was once a cost center into a platform for growth and subscriber trust.
Embedding intelligence into the network enables faster detection and response to emerging threats.
The voice channel has evolved, and so must the strategy to protect it.
Together, these trends point to a broader transformation: voice interactions are becoming a rich source of intelligence and context. Calls are no longer just conversations — they generate summaries, insights, tasks, and signals that help systems understand intent, risk, and opportunity in real time.
For operators, that's not just a technology shift. It's a strategic one. Those who move now to embed identity, integrity, and intelligence into their networks will define the standard every business built on those networks is forced to meet.
The urgency is real. Fraud isn't slowing down — it's scaling. Businesses are already losing customers to calls they're too afraid to answer. And regulatory momentum is building from the outside, with governments and standards bodies increasingly focused on authentication, fraud liability, and consumer protection across the voice channel.
MWC 2026 made one thing unmistakable: voice is being reshaped faster than most strategies have accounted for. The trust layer isn't a future consideration. It's the infrastructure decision in front of operators right now.
The fraud won't wait. The AI won't slow down. The operators who act now won’t just be protecting their networks and subscribers; they’ll be helping define the future of trusted voice.