AI Is Not a Feature. It Is a Dimension.
Most companies treat AI the way they once treated mobile strategy: as a layer to add on top of what already exists. Slap a chatbot on the docs. Wrap the API in a prompt. Call it intelligent. The result is AI that sits at the edge of the product, decorating it rather than running through it.
The more honest framing is that AI is not a feature. It is a set of independent dimensions. The right question is not "should we add AI?" but "how do we want to compose it?"
Six Questions That Actually Matter
When you think clearly about AI deployment, six questions emerge, and they are orthogonal. You can answer each one independently of the others.
Who writes the prompts? You can ship a product with prompts the vendor wrote, prompts the customer brings, or prompts sourced from a community marketplace. These are different products. A vendor who locks you into their prompts is making a bet that their prompts are always better than yours.
Whose model runs the inference? Foundation models from OpenAI or Anthropic, fine-tuned models you built yourself, models pulled from a marketplace, or models the customer brings entirely. The answer here affects latency, cost, privacy, and regulatory compliance. Treating the model as a fixed upstream dependency is the wrong abstraction.
How is the model trained? No training at all, fine-tuning on your domain, or full custom training. Most companies that think they need custom training actually need better prompts. But some genuinely need the training dimension, and conflating the two wastes effort.
Where does the model run? The customer data plane. The open internet. A shared cloud compute pool. This is where the enterprise security conversation lives, and it is distinct from the model selection conversation.
How does the model access tools? This is the MCP question. Standard tools the platform provides, custom tools the customer brings, or some negotiated combination. An agent's power scales directly with the quality of its tool access.
Who runs the agent loop? A platform-managed agent loop, a customer-built loop, or something delegated to the native orchestration of Claude or OpenAI. This is where the architectural philosophy lives. Tight loops with fixed behavior versus open loops the customer can modify.
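The six questions above can be sketched as independent axes of a deployment configuration. This is a minimal illustration in Python; every name here (AIDeployment, the enum values, and so on) is hypothetical, invented only to show that each axis can be set without constraining the others:

```python
from dataclasses import dataclass
from enum import Enum

class PromptSource(Enum):
    VENDOR = "vendor"
    CUSTOMER = "customer"
    MARKETPLACE = "marketplace"

class ModelSource(Enum):
    FOUNDATION = "foundation"      # hosted frontier model
    FINE_TUNED = "fine_tuned"
    MARKETPLACE = "marketplace"
    CUSTOMER = "customer"

class Training(Enum):
    NONE = "none"
    FINE_TUNE = "fine_tune"
    CUSTOM = "custom"

class Hosting(Enum):
    CUSTOMER_DATA_PLANE = "customer_data_plane"
    OPEN_INTERNET = "open_internet"
    SHARED_CLOUD = "shared_cloud"

class ToolAccess(Enum):
    PLATFORM = "platform"          # standard tools, e.g. via MCP
    CUSTOMER = "customer"
    MIXED = "mixed"

class AgentLoop(Enum):
    PLATFORM_MANAGED = "platform_managed"
    CUSTOMER_BUILT = "customer_built"
    NATIVE_ORCHESTRATION = "native_orchestration"

@dataclass(frozen=True)
class AIDeployment:
    """One field per question; no field constrains another."""
    prompts: PromptSource
    model: ModelSource
    training: Training
    hosting: Hosting
    tools: ToolAccess
    loop: AgentLoop

# An enterprise configuration: customer-owned model on a private
# data plane with a customer-built loop. Changing one axis later
# (say, hosting) means changing one field, not the architecture.
enterprise = AIDeployment(
    prompts=PromptSource.VENDOR,
    model=ModelSource.CUSTOMER,
    training=Training.FINE_TUNE,
    hosting=Hosting.CUSTOMER_DATA_PLANE,
    tools=ToolAccess.MIXED,
    loop=AgentLoop.CUSTOMER_BUILT,
)
```

The frozen dataclass is the point: a deployment is just a tuple of six independent answers, and "composable" means any of the enum values is legal in any slot.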
The Semantic Layer Is the Constant
Here is what makes this composability coherent rather than chaotic: underneath all six dimensions there is a constant, a semantic data layer that does not change regardless of how you answer the six questions.
The model can change. The prompts can change. The hosting can change. The agent loop can change. The semantic layer stays fixed. That is the substrate the AI reasons about.
This matters because it inverts the dependency structure. If AI is a feature, your data is in service of your AI. If AI is a dimension, your AI is in service of your data. That inversion determines whether the AI is brittle or durable. Brittle means it fails when the model changes. Durable means it continues to work because the semantic layer stays coherent.
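One way to picture the inversion: the semantic layer exposes a fixed vocabulary, and models are interchangeable callers against it. A hedged sketch, where SemanticLayer, the Model protocol, and EchoModel are all illustrative stand-ins rather than any real product's API:

```python
from typing import Protocol

class Model(Protocol):
    """Anything that can answer a question given semantic context."""
    def complete(self, context: str, question: str) -> str: ...

class SemanticLayer:
    """The constant: the definitions the AI reasons about."""
    def __init__(self, definitions: dict[str, str]):
        self._definitions = definitions

    def context_for(self, entity: str) -> str:
        return self._definitions[entity]

class EchoModel:
    """Stand-in backend; swap in any model without touching the layer."""
    def complete(self, context: str, question: str) -> str:
        return f"[given: {context}] {question}"

def ask(model: Model, layer: SemanticLayer, entity: str, question: str) -> str:
    # The model varies; the semantic substrate does not.
    return model.complete(layer.context_for(entity), question)

layer = SemanticLayer({"revenue": "Sum of booked invoices, net of refunds."})
answer = ask(EchoModel(), layer, "revenue", "How did revenue change?")
```

The dependency arrow runs from the model to the layer, never the reverse: replacing EchoModel with a different backend changes nothing about how "revenue" is defined, which is the durability the article describes.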
Why Composability Is Not Just a Marketing Word
The objection is obvious: composable gets said about everything now. It is the new modular. But the test of real composability is whether you can independently vary each dimension without breaking the others.
A platform that calls itself composable but requires you to use their model, their prompts, and their agent loop is not composable. It is bundled. The word is being used as positioning, not as architecture.
Real composability means a customer can bring their own fine-tuned model, deploy it on their private data plane, use their own agent loop, and still get the full benefit of the semantic layer. The platform works harder, not differently. Most platforms cannot actually say that.
The Practical Consequence
The companies that get this right will not necessarily win on any single dimension. They probably will not have the best model. They will not necessarily have the best prompts or the best agent loop. What they will have is an architecture that does not require them to win on every dimension at once.
When the model landscape shifts, and it will, they reconfigure one axis. When customers want to bring their own tooling, they accommodate it without rebuilding. When a new regulatory requirement mandates on-premises inference, they support it without a platform rewrite.
That flexibility compounds. A bet on composability today is a bet on staying relevant through whatever changes are coming, instead of betting that the current configuration of AI capabilities is permanent.
It is not. Nothing about this landscape is permanent. Build like it.