Why the real disruption will happen when AI moves from cloud subscriptions into the devices we already carry.
AI as a Utility
Sam Altman recently described a future where artificial intelligence is sold more like electricity or water. Speaking at the BlackRock Infrastructure Summit in Washington, D.C., the OpenAI CEO said:
“We see a future where intelligence is a utility like electricity or water and people buy it from us on a meter and use it for whatever they want to use it for.”
That is an important idea. It is also a very understandable one coming from the CEO of a company building some of the most powerful AI systems in the world. OpenAI, Anthropic, Google, Microsoft, and others are investing heavily in the infrastructure needed to train and run large AI models. Data centers, GPUs, energy, networking, cooling, and global scale are not cheap. So it makes sense that the business model today looks like a utility: you pay for access to intelligence hosted somewhere else.
But I do not think the utility analogy tells the whole story.
In my view, AI adoption will look less like electricity and more like the history of computing itself.
At first, computing was centralized, expensive, and mostly available to governments, universities, and large corporations. Then it became personal. Then it became mobile. Then it became invisible enough that most people stopped thinking about “using a computer” and simply started living with computing all around them.
I think AI is headed in the same direction.
AI Today Looks a Lot Like Mainframes
Back in the 1960s and 1970s, computers were not personal devices. They were rooms. They were mainframes. They were expensive, centralized systems operated by specialists and used by large institutions.
The average person did not have a computer at home. Most people did not even imagine why they would need one.
That seems obvious now, but it was not obvious then.
Computing was something you accessed through an organization. It was not something you owned. It was not something you carried. It was not sitting on your desk, in your backpack, or in your pocket.
That is similar to where AI is today.
The most capable models currently live in the cloud. You access them through a subscription, an API, an enterprise license, or a platform integration. The intelligence is somewhere else. You send your prompt across the internet, it gets processed in a data center, and the answer comes back.
That model works. It is useful. It is powerful.
But it is also early.
Then Computing Became Personal
The personal computer changed the relationship people had with computing.
Steve Jobs, Steve Wozniak, Apple, Microsoft, IBM, and many others helped move computing out of institutions and into homes, offices, schools, and small businesses. The computer stopped being only a corporate resource and became a personal tool.
That shift mattered because it changed who could create.
When computing was centralized, access was limited. When computing became personal, experimentation exploded. People wrote software. They built businesses. They learned programming. They published newsletters. They managed finances. They made art. They played games. They automated work. They created things that would have been impossible, or at least impractical, when computing was locked inside institutions.
That is the pattern I see coming with AI.
Right now, many people experience AI as something they rent from a large provider. Over time, more of that intelligence will move onto the devices people already own.
Not all of it.
But enough of it to change everything.
The Smartphone Was the Next Leap
The personal computer put computing on the desk.
The smartphone put it in your pocket.
That was another major shift. Suddenly, people had a computer, a camera, GPS, messaging, email, maps, social media, apps, and the knowledge of the internet available almost anywhere. You did not have to go sit at a desk to use computing. Computing came with you.
That changed expectations.
People stopped asking whether they were “online” and started assuming they were. They stopped thinking of software as something installed once on a machine and started thinking of it as something always available. They stopped planning around access and started planning around immediacy.
AI will go through a similar shift.
Today, we often open a specific AI app or website. We think in terms of models, subscriptions, prompts, tokens, and usage limits. That is normal for this stage of the technology.
But the future probably will not feel like that.
The future will feel like AI is just there.
It will be in the operating system. It will be in the apps. It will be in the development tools. It will be in the camera, microphone, browser, terminal, IDE, calendar, email client, and document editor. Some of it will call out to the cloud. More of it will run locally.
Eventually, much of the everyday AI people use will not feel like “using AI” at all.
It will just feel like using a computer.
Local AI Is Already Starting
This is not just a theory. The industry is already moving this way.
Apple Intelligence was designed to use on-device processing when possible and scale to server-based models for more complex requests through Private Cloud Compute. Google’s Gemini Nano is built for on-device AI in Android through AICore, giving developers a way to run generative AI experiences without always sending data to the cloud. Microsoft’s Phi family of small language models is aimed at scenarios where AI can run with fewer resources, including directly on devices without requiring cloud connectivity.
You can already run local models on everyday hardware using tools like Ollama. Popular smaller models people run locally today include Llama 3.2 3B, Mistral 7B, Phi-4 Mini, and Gemma 2 in its 2B and 9B sizes, depending on the device and use case.
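As a concrete sketch of what that looks like in practice, here is roughly how an application can talk to a locally running model through Ollama's HTTP API. This assumes Ollama is installed and serving on its default local port, and that a model such as llama3.2 has already been pulled; the endpoint path and payload shape follow Ollama's documented /api/generate interface.

```python
import json
import urllib.request

# Ollama's default local endpoint; no cloud round trip, no API key.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its reply.

    Requires an Ollama server to be running and the model to be pulled
    first (e.g. `ollama pull llama3.2`).
    """
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example call (only works with an Ollama server up):
# print(ask_local_model("llama3.2", "Summarize this note in one sentence: ..."))
```

The point is not the specific tool. It is that the entire loop, prompt in, answer out, can happen on hardware you already own, with nothing leaving the machine.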
That matters.
The best model in the world may still need a data center. But not every task needs the best model in the world.
A lot of everyday AI work is smaller than that:
- Writing assistance
- Summarization
- Search across your own files
- Local code help
- Meeting notes
- Image cleanup
- Voice commands
- Translation
- Personal knowledge retrieval
- Simple automation
- Context-aware recommendations
A locally running model does not need to beat the frontier model on every benchmark to be useful. It only needs to be good enough for the task, fast enough to feel natural, private enough to trust, and cheap enough to disappear into the device experience.
That is where things get interesting.
The Future Is Not Either Cloud AI or Local AI
I do not think this becomes a simple cloud-versus-local debate.
That is the wrong framing.
The future will likely be hybrid.
Large frontier models will still matter. Governments, enterprises, research labs, and software companies will continue using powerful hosted models for complex reasoning, massive-scale analysis, advanced coding, scientific research, content generation, simulations, and high-value business workflows.
There will absolutely be a market for metered intelligence.
But the average person does not need a frontier model for every interaction. They do not need to pay a subscription every time they want a device to summarize a notification, clean up a paragraph, organize notes, search personal files, draft a message, or explain an error.
For those tasks, local AI will become increasingly attractive.
It reduces latency.
It improves privacy.
It works offline.
It lowers marginal cost.
It gives users more control.
And perhaps most importantly, it makes AI feel less like a service you rent and more like a capability you own.
That is a big difference.
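To make the hybrid idea concrete, here is a hypothetical sketch of how an application might route work between the two: small, private, latency-sensitive tasks stay on the device, while heavyweight reasoning falls back to a hosted frontier model. The task categories, fields, and token threshold here are illustrative assumptions for this sketch, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Illustrative task descriptor; the fields are assumptions for this sketch."""
    kind: str                      # e.g. "summarize", "translate", "deep_reasoning"
    contains_personal_data: bool   # does the input include the user's own data?
    approx_tokens: int             # rough size of the input

# Tasks a small on-device model handles well (an assumed list, for illustration).
LOCAL_FRIENDLY = {"summarize", "translate", "autocomplete", "notes_search"}

def route(task: Task) -> str:
    """Decide where a task should run under a hybrid local/cloud setup."""
    if task.contains_personal_data:
        return "local"   # privacy: personal data never leaves the device
    if task.kind in LOCAL_FRIENDLY and task.approx_tokens < 4000:
        return "local"   # small and routine: fast and free on-device
    return "cloud"       # heavyweight reasoning goes to a hosted frontier model

# Example: a summary of your own notes stays local;
# a giant research task goes to the cloud.
# route(Task("summarize", contains_personal_data=True, approx_tokens=800))
```

This is the hybrid future in miniature: the cloud is the escalation path, not the default.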
Subscriptions Will Not Be the Whole Story
Today, subscriptions make sense. The models are expensive to train and run. The infrastructure is massive. The demand is high. The user experience is still evolving. The business models are still being discovered.
But I do not believe the long-term future of AI for everyday users is only “pay a monthly fee forever to access intelligence somewhere else.”
That may be part of the future.
It will not be the whole future.
Just as mainframes did not disappear when personal computers arrived, cloud AI will not disappear when local AI gets better. Mainframes evolved. Enterprise computing evolved. Cloud computing emerged. Centralized systems still matter. In many cases, they matter more than ever.
But personal computing changed who had access.
That is the key point.
AI will become truly disruptive when it stops being something only the well-funded can access at scale. It will become disruptive when the average person has useful intelligence running locally on a laptop, phone, tablet, headset, car, or embedded device.
Not as a novelty.
As a default.
Democratized AI Changes the Game
The real disruption will not happen only because the largest models get smarter.
That matters, but it is not the whole story.
The bigger disruption comes when AI becomes widely available, inexpensive, embedded, and personal.
When students can use it without worrying about usage caps.
When small businesses can automate work without hiring a team of specialists.
When developers can build AI-enabled applications that work offline.
When people can use their own data locally without sending everything to a third-party service.
When AI becomes part of the device, not just a tab in the browser.
That is when the creative surface area expands.
That is when more people start building.
That is when new workflows emerge that do not depend entirely on centralized platforms.
And that is when the technology becomes less about who controls the biggest model and more about what people can do with intelligence that is always within reach.
The Mainframe Moment Will Not Last Forever
AI today still feels like the mainframe era in many ways.
The power is centralized.
The cost is significant.
The infrastructure is specialized.
The best capabilities are controlled by a relatively small number of companies.
That is not unusual for a major technology shift. It is often how these things begin.
But it is not usually where they end.
Computing moved from mainframes to personal computers to smartphones to cloud-connected devices everywhere. AI will likely follow a similar path. The cloud will remain important, but more intelligence will move closer to the user. More models will run locally. More devices will ship with AI built in. More software will assume AI is available as a baseline capability.
That is the future I find most interesting.
Not just AI as a utility.
AI as personal computing.
AI as something embedded into the tools we already use.
AI as something available even when the internet is not.
AI as something that helps people think, create, build, learn, troubleshoot, and communicate without requiring every interaction to pass through a metered cloud service.
The Real Question
So yes, intelligence may become a utility. But it will also become personal. That is the part we should not miss.
The companies building the biggest models today will still matter. They will serve governments, enterprises, developers, researchers, and anyone who needs the highest-end capabilities. There is enormous value there.
But the next major wave of adoption may come from the other direction.
From smaller models.
From local devices.
From laptops and phones.
From open models and optimized hardware.
From AI that is cheap enough, fast enough, private enough, and available enough to become ordinary.
That is usually when technology gets really powerful.
Not when it is impressive.
When it becomes normal.
The mainframe made computing possible at scale.
The personal computer made computing accessible.
The smartphone made computing constant.
AI is still early in that same journey.
And the future of AI may not be just a meter running somewhere in the cloud.
It may be intelligence sitting quietly in your pocket, waiting for the next thing you want to create.