The Independent UK
Technology
David Phelan

Qualcomm boss says his phone chips could actually make you use your phone less

Cristiano Amon, the CEO of Qualcomm, is simultaneously relaxed and laser-focused when you interview him. Recently, the company announced brand new chips such as the Snapdragon 8 Elite Gen 5 processor, which is just coming on stream in flagship mobile phones like the recent OnePlus 15 and is expected in the Samsung Galaxy S26 Ultra next year.

But how did the company devise these processors, and what do they mean for how we actually use our phones? Amon has said that agentic AI will play a big part, meaning you’ll ask your phone to do things in a different way. To book yourself an Uber, say, rather than opening the phone, finding the Uber app and choosing a ride that way, you will tell your phone to book you an Uber and it will act as the agent that sorts it out. It sounds quicker, but is it too much of a behaviour change for people to adopt, I wonder.

“When you have big changes in user interfaces,” Amon says, “people think one thing is going to get replaced by another, so what's the next thing? But the phone didn't replace the laptop. The laptop didn't replace the desktop. And the phone's going to continue, but now we're going to have those new devices, like smart glasses. We're going to wear glasses, jewellery, maybe we'll choose a pin but then all those things get connected to the agent. The phone will no longer be the centre. The agent is the centre.”

This still sounds like a sudden change, but Amon says it won’t be like that. “You didn't have to do a hard switch shopping from your laptop to your phone. But I can tell you, you probably do more shopping on the phone than you do on a laptop.

“You're going to start using the agents more and use your phone for different things. This gives us confidence that we found the answer to what the next mobile platform is: we enable everything we wear to connect with agents. Everything we wear will create a different user experience which starts with the agent at its base.”

If these other gadgets are smart, like the highly successful Ray-Ban Meta and Oakley Meta glasses with camera and microphone built in, you may touch your smartphone less because you’re talking to your glasses or smartwatch more. For a chip maker like Qualcomm this is good news because suddenly, even if you use an iPhone, say, which has Apple’s own processors, you’ll also be using products where Qualcomm is involved.

“You may find a situation where people are buying the fashion devices they like, and eventually their connection with the phone is no longer the centre of everything, and whether they have an Android or an iPhone isn’t so important any more,” Amon says. “The horizontal platform is the one that allows you to choose a Ray-Ban, Oakley, or maybe I want Gucci, or Maui Jim. Do you believe everybody in the world is going to have Tim Cook eyeglasses?” Not that Apple makes smart glasses, at least not yet, but the point is clear.

Something Amon has been talking about for some time is how AI has moved from just being in the cloud to being on-device as well: AI on the edge, as it’s called.

“Finally, people realize it's happening because the UI is where the human is, the data is where the human is. That becomes the AI computing that changes the structure of applications in OS, and then changes the chip, and those are kind of the chips that we are building,” he says.

The move to what’s called agentic AI, where you ask your phone for your balance instead of opening the banking app, for instance, also sounds like a big change. I wonder if people will just stick with opening the app as they do now. “People have a tendency to think in binary terms: I stopped doing this, I'm going to go do that. It’s like everybody used to shop on Amazon on your browser.

“Now you shop on Amazon on your phone. Can you still do the browser? Yes, and you still do it sometimes. The question is what percentage do you do that? If you decide to pull your phone out, then yes, you will continue to use the app. The phone's going nowhere, but there are certain things that you used to do on your phone that you're going to feel more comfortable doing with the agent.”

In fact, as Amon points out, that’s already happening a lot. If you open a Chrome browser and start typing your search term, the first result in Google is the answer that the Google Gemini agent created, alongside the familiar sight of suggested links. Often, the AI-generated answer is enough and you won’t click through.

Amon then suggests an example that, for me, is the real purpose of AI: to help you remember people. “You're walking around, wearing smart glasses and then the glasses let you know that, this person on the right, you met them before. You look to your right and you say hi. You don't have to pull up your phone.

“Or say you're in your office going through your mail and there's a bill and you say, ‘Pay this bill.’ And the smart glasses hear that and it's done. Or you can pull up your phone, open the app, and go in and pay a bill. So, I think this is how it's going to work. It's going to be a transition and it's going to come natural. Because if it's not natural, it's not going to be successful.”

It's a mighty future but sometimes, even if the change is natural, consumers are put off when things don’t work: I’m not alone in asking a voice assistant to do something and having it completely misunderstand or fail to answer, am I?

Amon suggests the future will see less of this as the networks grow. “Put your phone in airplane mode and see if it does everything you want. Probably not. With AI, it's going to be a network of intelligence, cloud and edge working together as one. It's never about one or the other. You're going to see the evolution and integration of applications with models.

“It will connect a model with Uber, for instance, so you can say ‘Call an Uber’ and the model knows how to call an Uber using your account. Those things are in process right now. If you ask the model, give me the meaning of life, you may not get the exact answer, but who knows?”

Is there a risk that AI deployment just isn’t ready yet, and that repeated mistakes will put users off, I ask. Amon suggests that future is near.

“The first answer is, in our area of competence, can we build a processor that can run the things we want it to? And I think we're there. And the second technical answer is, are models evolving the way that they are now able to run and do those things? And I think the answer is also yes.

“Models evolve dramatically. The next step is the one that takes a little bit of time, which is how do you get great devices with those capabilities in the hands of people, and people start adopting it and using it at scale to the point that the rest of the ecosystem starts to make sure that all of their experiences are going to be agent first, and that's the story of technology adoption.

“I think we're at the beginning of the process. The inflection point already happened, and we're going to start to have scale. Take Meta Ray-Ban glasses, which have been exceeding expectations. These things are going to happen when those devices go from selling in the few millions to the tens of millions to hundreds of millions.”

Qualcomm works to a road map and its next processors will need to address problems that haven’t come along yet. I wonder how Qualcomm anticipates future needs.

“It’s what we do for a living. You have to develop these technologies up to four years in advance. If you have the vision and the belief system, and you are able to take risks, then I think that's the history of innovation.”

There are problems, I suggest, if you take the wrong risks and don’t predict where things are going. “Yes, especially if you're too conservative. Because you can miss an entire technology transition.”

Qualcomm has restated its mission as “bringing AI everywhere”. But is there a risk, if you focus just on AI, that other things get neglected? Maybe companies are right to focus just on design now and let AI come later, I suggest.

Amon is clear: “No. The agent, powered by AI, is the future, whether it’s Gemini or ChatGPT or whoever. The agents that understand human intention will one day mean there might be no more apps and no more app stores. Every day that goes by, people will do more agentic stuff. My prediction is the new devices, the new experiences, that's where the innovation is going to be.

“The AI model is what will understand you. You choose your model, such as ChatGPT or Gemini, that’s going to be on your glasses or sometimes in your phone or your car or whatever. And then the model starts to understand your intentions. The model has access to the Uber app or to the banking app because everything is connected to the model, so it becomes the app store in itself.”

That’s perhaps the biggest change that’s coming, then. That we’ll choose our preferred AI model for all our devices and it will do all the heavy lifting for us, letting us concentrate on the important tasks, like posting Instagram reels and playing games.
