Not Nick Jordan

Where AI slop meets a dumpster fire.

AI Is a Primitive, Not a Solution

There are roughly three camps in every conversation about what AI can and cannot do.

The prompters believe that with the right input, a language model will solve anything. Feed it your gnarliest problem, iterate on the wording, and eventually the answer arrives. This is a category error. LLMs are function approximators — extremely capable ones, trained on a staggering corpus of human knowledge — but they are not oracles. NP-hard problems don’t become tractable because you have a good language model. You haven’t changed the problem class. You’ve added a very capable heuristic that can navigate the surface of the problem space, but may also hallucinate confidence about territory it has never actually seen.

The dismissers respond to prompter overconfidence by swinging to the other extreme. “It’s stochastic parrots all the way down.” Gary Marcus, Emily Bender — smart people making a legitimate point that LLMs don’t reason in the formal sense, then overcorrecting into a kind of purism that misses what they’re actually good at. The fact that a hammer isn’t a screwdriver doesn’t make it useless.

The systems thinkers are where the substantive work is happening. The Berkeley AI Research group published a post on compound AI systems in early 2024 — the core argument being that the field is shifting from monolithic model capability to systems that compose models with retrieval, tools, memory, and orchestration. The performance ceiling of any single model is far lower than what you can achieve by treating it as a primitive in a well-designed pipeline.


The principle underneath this: AI excels at subproblems, not problems.

A genuinely complex problem — the kind that involves combinatorial search, emergent behavior, irreducible computation — doesn’t have a shortcut. Wolfram’s concept of computational irreducibility is useful here: some systems can only be understood by running them. There’s no compression that lets you skip ahead. No amount of model capability changes that, because the constraint isn’t knowledge or pattern recognition — it’s the fundamental structure of the computation itself.
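Wolfram’s standard illustration is an elementary cellular automaton like Rule 30: as far as anyone knows, the only way to learn what row n looks like is to compute all n rows. A minimal sketch — the rule itself is the real Rule 30; the fixed-width, zero-padded boundary is a simplification:

```python
# Elementary cellular automaton (Rule 30): the canonical example of a
# computation with no known shortcut -- to get row n, you run all n steps.

def rule30_step(row):
    """Advance one generation; cells outside the row are treated as 0."""
    padded = [0] + row + [0]
    out = []
    for i in range(len(row)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        # Rule 30: new cell = left XOR (center OR right)
        out.append(left ^ (center | right))
    return out

def rule30_row(n, width=64):
    """Start from a single 1 in the middle and iterate n steps."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(n):
        row = rule30_step(row)
    return row
```

There is no closed form to jump to `rule30_row(1000)`; the loop is the answer. That is the constraint no model capability touches.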

What AI can do is navigate the search space more efficiently. Prune dead ends. Generate candidate solutions for verification. Handle the subproblems that are actually within its reach — summarization, pattern matching, translation between representations, code generation for well-specified tasks. These are genuinely valuable. They’re just a different thing from solving the problem itself.
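One concrete shape of “generate candidate solutions for verification”: the model proposes, a cheap deterministic checker disposes. In the sketch below, a random stub stands in for the model call, so only the control flow is real:

```python
# Generate-and-verify: an untrusted proposer paired with a trusted,
# deterministic verifier. `propose` stands in for any LLM call.
import random

def solve_with_verifier(propose, verify, attempts=10):
    """Return the first candidate that passes verification, else None.

    propose: () -> candidate   (probabilistic, untrusted)
    verify:  candidate -> bool (deterministic, trusted)
    """
    for _ in range(attempts):
        candidate = propose()
        if verify(candidate):
            return candidate
    return None

# Toy usage: "guess" a nontrivial factorization of 91.
# Verification is exact arithmetic -- cheap and certain.
def propose():
    return (random.randint(2, 90), random.randint(2, 90))

def verify(pair):
    a, b = pair
    return a * b == 91

result = solve_with_verifier(propose, verify, attempts=100_000)
```

The asymmetry is the point: proposing is hard and fallible, checking is easy and exact. Anywhere that asymmetry exists, a model can contribute without being trusted.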

The art, then, is decomposition. Breaking a hard problem into pieces, where each piece:

  • Is tractable by a model
  • Is better handled by a deterministic or algorithmic component
  • Requires human judgment

And then orchestrating those pieces into something that actually works.
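A sketch of what that decomposition can look like in code. The three-way tagging and every name here are illustrative, not a real framework; the point is that each piece declares which kind of component owns it:

```python
# Making the decomposition explicit: each subtask declares how it should
# be handled, and the pipeline dispatches and records accordingly.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, Callable

class Handler(Enum):
    MODEL = auto()          # tractable for an LLM
    DETERMINISTIC = auto()  # algorithmic / rule-based code
    HUMAN = auto()          # needs judgment; route to review

@dataclass
class Subtask:
    name: str
    handler: Handler
    run: Callable[[Any], Any]  # the actual work for this piece

def run_pipeline(subtasks, payload):
    """Run pieces in order, keeping a trace of who handled what."""
    trace = []
    for task in subtasks:
        payload = task.run(payload)
        trace.append((task.name, task.handler.name))
    return payload, trace

# Toy usage: lambdas stand in for a model call and a review queue.
pipeline = [
    Subtask("draft_summary", Handler.MODEL, lambda x: x.lower()),
    Subtask("validate", Handler.DETERMINISTIC, lambda x: x.strip()),
    Subtask("sign_off", Handler.HUMAN, lambda x: x),
]
result, trace = run_pipeline(pipeline, "  RAW INPUT  ")
```

The trace is not incidental — knowing which component produced which transformation is what makes the system auditable later.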


This isn’t a new idea in software — it’s basically just good systems design. The reason it’s worth restating is that the AI moment has scrambled people’s intuitions about what belongs where.

At Narrative, we use Temporal to orchestrate workflows where LLMs are steps, not drivers. The workflow provides the structure: sequencing, retries, state management, deterministic branching. The model handles the parts it’s actually good at — interpreting intent, generating query structure, normalizing representations across schemas. The model doesn’t “solve” the agentic task. The architecture does, with the model contributing where its capabilities are real.
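For the shape of it, here is a plain-Python sketch — deliberately not Temporal SDK code — in which the workflow owns retries and branching, and the model call (stubbed) is one bounded, validated step:

```python
# "LLM as a step, not a driver," reduced to plain Python. The workflow
# function owns sequencing, retries, and branching; the model call is
# a single step whose output is validated before anything acts on it.

def with_retries(step, max_attempts=3):
    """Deterministic retry policy around a possibly-flaky step."""
    def wrapped(payload):
        last_err = None
        for _ in range(max_attempts):
            try:
                return step(payload)
            except ValueError as err:  # e.g. model output failed validation
                last_err = err
        raise last_err
    return wrapped

def interpret_intent(text):
    """Stand-in for the LLM step: free text -> structured request.
    A real version would call a model, then validate its output strictly."""
    if "count" in text:
        return {"op": "count"}
    raise ValueError("could not interpret intent")

def workflow(text, records):
    """The workflow, not the model, decides what happens next."""
    intent = with_retries(interpret_intent)(text)
    if intent["op"] == "count":           # deterministic branching
        return len(records)
    raise RuntimeError("unsupported op")  # fail loudly, stay auditable
```

In real use, an engine like Temporal supplies the durability and state management around this shape; the division of labor is the same either way.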

That distinction matters enormously when you’re building something that needs to be reliable, auditable, and composable with other systems. An LLM in the driver’s seat of a complex workflow is a liability. An LLM as a well-scoped step in a well-designed workflow is a genuine capability multiplier.


The gap between camps one and three isn’t really about AI capability. It’s about whether you’re thinking in terms of systems or magic.

Magic is when you hand the whole problem to the model and hope. Systems is when you understand the problem well enough to decompose it — to know which parts benefit from probabilistic generation, which parts need deterministic guarantees, and how to make them work together.

Complexity theory hasn’t been repealed. But within its constraints, there’s a lot of room to build.