A few months ago a headline went viral: “Satya Nadella says SaaS is dead.” I saved it and thought about it for a while, and I’m not sure what most people took away from it. Nadella didn’t say tech companies are going bankrupt tomorrow. What he described is simpler: the reason SaaS applications exist, which is to put business logic in front of a human user, is disappearing.
A CRM has a UI because a salesperson needs to click through it. An ERP has screens because a finance manager needs to approve things. But AI agents don’t click through UIs. They reason and call APIs. In that world, all those layers built for human interaction just disappear.
And what about the application logic itself? That gets generated on demand, customised to your specific context. Code is no longer the main competitive advantage. Now you describe a workflow and get working, tailored code in seconds.
So if UI and code become commodities, what is left? What’s the competitive advantage? What actually differentiates one company from another in an “AI-first” world? Not the software they use or the UI. Not even the processes, since those can be generated too. The data!
The transaction history, the customer relationships, the domain knowledge built up over years; that’s something you can’t reproduce. And if data is the real competitive advantage, then the system that stores, secures, and makes that data intelligently accessible becomes the most critical infrastructure decision you can make. Not the AI models, which are becoming commodities fast. Not the agent framework, as there are hundreds of them now. The database!
Why You Can’t Just Feed Everything to the LLM
If LLMs have million-token context windows, why not just dump your entire database in there and let it reason? I get why it’s tempting. But think about the physics for a second.
Processing a 1M-token context window requires roughly 160GB of VRAM for a single session. Now multiply that by thousands of concurrent enterprise users. The numbers stop working fast.
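The exact figure depends entirely on the model’s shape, but a back-of-envelope KV-cache calculation shows why numbers in that range are plausible. The sketch below assumes a hypothetical model: 40 transformer layers, grouped-query attention with 8 KV heads, head dimension 128, and an fp16 cache:

```python
# Back-of-envelope KV-cache size for one long-context session.
# Hypothetical model shape: 40 layers, GQA with 8 KV heads,
# head dim 128, fp16 cache values.
LAYERS = 40
KV_HEADS = 8
HEAD_DIM = 128
BYTES_PER_VALUE = 2      # fp16
TOKENS = 1_000_000       # 1M-token context window

# Each token stores one key and one value vector per layer per KV head.
bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE
total_gb = bytes_per_token * TOKENS / 1e9

print(f"{bytes_per_token / 1024:.0f} KiB of cache per token")  # 160 KiB
print(f"{total_gb:.0f} GB for a single 1M-token session")      # ~164 GB
```

And that is the cache alone; the model weights still need their own VRAM on top, and every concurrent session multiplies the cache again. That is the multiplication that stops working.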
And that is before we even get to the accuracy problem: language models are not query engines. Asking an LLM to process 100 million transactional rows is using the wrong tool for the job, the same way you would not ask a consultant to manually read every spreadsheet in your company before answering a question.
The right model (which I’ve written about before) is to bring intelligence to where the data lives, not move the data to where the intelligence is. The database handles precision retrieval. The LLM handles reasoning. They are a team, not a replacement for each other.
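The division of labor is easy to sketch. In the toy example below, SQL computes the exact answer over every row, and only that tiny, already-correct result reaches the model; `call_llm` is a hypothetical stand-in for whatever chat-completion API you use:

```python
import sqlite3

# Toy data layer: an in-memory database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Acme", 1200.0), ("Acme", 300.0), ("Globex", 80.0)])

# 1. Precision retrieval: SQL answers the exact question over all rows.
row = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY SUM(amount) DESC LIMIT 1"
).fetchone()

# 2. Reasoning: only the small, verified result goes into the prompt.
#    `call_llm` is hypothetical; swap in any real LLM client here.
def call_llm(prompt: str) -> str:
    return f"[model answer based on: {prompt}]"

prompt = f"Top customer is {row[0]} with total {row[1]:.2f}. Suggest next steps."
print(call_llm(prompt))
```

The model never sees the 100 million rows; it sees a few tokens of ground truth the database already guaranteed.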
This is the bet Oracle made years ago. While the rest of the ecosystem kept adding specialised databases, each one bringing a new sync problem, a new security gap, a new thing to break at 2am, Oracle was pushing in the opposite direction: one engine for everything. Vectors, JSON, graph, spatial, relational; same security policies, same ACID guarantees, same execution plan.
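What “one engine, one query” means in practice can be sketched in miniature. This is not Oracle syntax; it is a stand-in using sqlite3 with a registered distance function, just to show a relational filter and a vector-similarity ranking running in a single SQL statement under one execution plan:

```python
import math
import sqlite3

# Stand-in sketch (sqlite3, not Oracle): a relational filter and a
# vector-similarity ranking combined in one SQL statement.
def cosine_dist(a_csv: str, b_csv: str) -> float:
    a = [float(x) for x in a_csv.split(",")]
    b = [float(x) for x in b_csv.split(",")]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

conn = sqlite3.connect(":memory:")
conn.create_function("cosine_dist", 2, cosine_dist)
conn.execute("CREATE TABLE docs (title TEXT, region TEXT, embedding TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?, ?)", [
    ("Renewal playbook", "EMEA", "0.9,0.1,0.0"),
    ("Churn analysis",   "EMEA", "0.1,0.9,0.0"),
    ("Pricing deck",     "APAC", "0.9,0.2,0.1"),
])

# Business filter and similarity search in the same query, instead of
# one database for vectors and another for the business data.
query_vec = "1.0,0.0,0.0"
best = conn.execute(
    "SELECT title FROM docs WHERE region = ? "
    "ORDER BY cosine_dist(embedding, ?) LIMIT 1",
    ("EMEA", query_vec),
).fetchone()
print(best[0])
```

In a fragmented stack, that same question means querying a vector store, then re-checking the region filter and the row-level security against a second system, and hoping the two stayed in sync.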
The practical difference for an AI agent is huge. I explored the scalability angle on this in a previous post. The short version is that fragmented stacks collapse exactly when load peaks. Converged ones don’t.
My personal view
Oracle has earned some criticism: for moving slowly on cloud, for licensing complexity, for cumbersome administration. Some of those problems are in the past; others remain.
But as of today, Oracle is in a great position. Most AI projects that are failing right now are not failing because the LLMs are bad. In my personal experience they fail because the data layer wasn’t ready. “Not ready” meaning (among many other reasons): no real security, no governance, agents that work fine in a demo and fall apart when a thousand users hit them at once. Those things don’t come in a “patch”. You either built them in from the start, or you didn’t. Oracle has been building exactly that for many years.
The SaaS layer is fading. The middleware is getting replaced by generated code and agents. But underneath all of that, someone still needs to store, secure, and serve the data that makes those AI agents useful. That is not going away. If anything, it’s becoming the most important infrastructure decision of the coming years.

