Not Nick Jordan

Where AI slop meets a dumpster fire.

The Secret Problem With Secrets in Agentic AI

Every security tutorial starts the same way: never hardcode credentials. Use environment variables. Use a secrets manager. Rotate regularly. The advice is sound, but it assumes a certain kind of software — one that you deploy, configure, and operate. It assumes a human administrator who can set up an API key, paste it into a vault, and tell the service where to find it.

AI agents break that assumption.

The Credential Gap in Agentic Systems

When an AI agent needs to call an external service, the conventional answer is: give it an API key. Put the key in an environment variable or inject it at startup. This works fine when the agent is a fixed deployment that a human configures. It falls apart when the agent is a harness — a dynamic execution environment that spins up to run tasks, without a human administrator in the loop.

The harness doesn’t have credentials. It can’t be given credentials. It has no mechanism to receive them. Handing it a static key would mean baking that key into the harness definition itself — which is exactly the hardcoded-credential anti-pattern everyone has been warning against since the 1990s.

This is the credential gap: agentic AI infrastructure is often designed to execute tasks dynamically, but credential distribution still assumes a world of statically configured services. The gap is real, and most platforms haven’t solved it.

Why Dynamic Client Registration Changes the Model

OAuth’s Dynamic Client Registration (DCR) protocol was designed for a related problem — applications that can’t pre-register with an authorization server before they need to authenticate. The canonical case is mobile apps: every installed instance is a distinct OAuth client, and no developer can register each one with every authorization server ahead of time.

Agentic harnesses have the same property. They arrive at runtime, not at configuration time. They need access to protected resources, but they can’t carry credentials in advance. DCR gives them a path: a harness can register itself with an authorization server when it first connects, receive a short-lived client identity, and use that to initiate a proper auth flow.
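
Concretely, the first-contact registration looks something like the following sketch. The field names come from RFC 7591 (the DCR spec); the endpoint, client name, and redirect URI are illustrative placeholders, not details from any particular harness.

```typescript
// Sketch of an RFC 7591 Dynamic Client Registration request that a
// harness might POST to the authorization server's registration
// endpoint on first contact. All concrete values are illustrative.
interface RegistrationRequest {
  client_name: string;
  redirect_uris: string[];
  grant_types: string[];
  token_endpoint_auth_method: string;
}

function buildRegistrationRequest(redirectUri: string): RegistrationRequest {
  return {
    client_name: "agent-harness",        // illustrative display name
    redirect_uris: [redirectUri],
    grant_types: ["authorization_code"], // standard code flow
    token_endpoint_auth_method: "none",  // public client; rely on PKCE
  };
}

// The server's response would include a client_id (and, for
// confidential clients, a client_secret) minted at registration time.
const req = buildRegistrationRequest("https://harness.example/callback");
```

The short-lived client identity mentioned above is exactly the `client_id` the server mints in response to this request.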

The human still authenticates — they do a browser-based login, in this case via Google SAML. But the harness doesn’t need to know anything ahead of time. No pre-distributed keys. No static credentials baked into deployment configuration. The credential problem becomes a first-contact problem, which is much more tractable.

Building It on Cloudflare Workers

The architecture that falls out of these constraints is surprisingly simple. A Cloudflare Worker handles the MCP server — the interface the AI harness will use. The same Worker handles DCR, OAuth flows, and the SAML integration with Google Workspace. Cloudflare D1, the platform’s edge SQLite database, stores the secrets themselves, encrypted at rest, with an audit log of every access.

The whole thing is self-contained: one Worker, one D1 database, one SAML connection. No separate secrets manager service to run. No AWS IAM permissions to configure. No HashiCorp Vault cluster to maintain. The operational surface area is essentially zero.
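The storage layer described above could be as small as two tables. This is a hypothetical D1 schema, not the author’s actual one — the table and column names are assumptions chosen to match the description (ciphertext at rest, an audit entry per access).

```typescript
// Hypothetical D1 (SQLite) schema for the secrets store: secrets are
// held only as ciphertext, and every read or write appends a row to
// the audit log. Names and columns are illustrative assumptions.
const SCHEMA = `
CREATE TABLE IF NOT EXISTS secrets (
  name        TEXT PRIMARY KEY,
  ciphertext  BLOB NOT NULL,     -- AES-GCM encrypted value
  iv          BLOB NOT NULL,     -- per-secret nonce
  updated_at  INTEGER NOT NULL   -- unix epoch seconds
);

CREATE TABLE IF NOT EXISTS audit_log (
  id          INTEGER PRIMARY KEY AUTOINCREMENT,
  secret_name TEXT NOT NULL,
  actor       TEXT NOT NULL,     -- authenticated user email
  action      TEXT NOT NULL,     -- 'read' or 'write'
  at          INTEGER NOT NULL   -- unix epoch seconds
);
`;
```

Keeping the schema this small is part of why the operational surface area stays near zero: there is nothing to migrate or tune beyond two tables in one database.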

For authorization on day one, the model is flat — any authenticated @narrative.io user can read or write any secret. This isn’t naive security design; it’s a deliberate choice to separate authentication (proving you’re who you say you are) from authorization (controlling what you can do) and solve one problem at a time. The audit log covers the gap while you figure out whether you actually need fine-grained permissions. Many teams discover they don’t.
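The flat day-one model reduces to a single predicate. A minimal sketch, assuming the check happens after SAML authentication has already established the user’s email:

```typescript
// Flat authorization: any authenticated user in the workspace domain
// may read or write any secret. The domain mirrors the post; the
// function shape is an illustrative assumption.
const ALLOWED_DOMAIN = "narrative.io";

function isAuthorized(email: string): boolean {
  const at = email.lastIndexOf("@");
  if (at < 0) return false; // malformed identity: deny
  return email.slice(at + 1).toLowerCase() === ALLOWED_DOMAIN;
}
```

Everything finer-grained — per-secret ACLs, read-only roles — can be layered onto this predicate later, if the audit log ever shows it’s needed.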

The MCP server exposes get_secret and set_secret as tools. From the harness’s perspective, secrets are just named values it can retrieve by key. The entire auth ceremony — DCR, OAuth, SAML — happens transparently on first connection. After that, it works the way any other tool call works.
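In MCP terms, the two tools are declared with a name, a description, and a JSON Schema for their input. The declarations below follow the MCP tool-listing shape; the descriptions and schema details are illustrative, and the server wiring (Worker handler, transport) is omitted.

```typescript
// Sketch of the two MCP tool declarations the server exposes. From
// the harness's side these are ordinary tool calls; all the auth
// ceremony happened earlier, at connection time.
const tools = [
  {
    name: "get_secret",
    description: "Retrieve a secret value by name",
    inputSchema: {
      type: "object",
      properties: { name: { type: "string" } },
      required: ["name"],
    },
  },
  {
    name: "set_secret",
    description: "Create or update a named secret",
    inputSchema: {
      type: "object",
      properties: {
        name: { type: "string" },
        value: { type: "string" },
      },
      required: ["name", "value"],
    },
  },
];
```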

The Deeper Lesson

The secrets problem in agentic AI is a specific case of a broader mismatch: infrastructure built for static deployments meeting software that’s inherently dynamic. The patterns we rely on — service accounts, API keys, environment variables — were designed for a world where a human configures a service before it runs.

Agents flip that model. They arrive at runtime. They need access to things they don’t know in advance. They can’t be handed a key over email.

The right response isn’t to bolt static credential patterns onto dynamic systems. It’s to use protocols that were actually designed for dynamic enrollment — DCR, device authorization flow, PKCE — and build infrastructure that can speak those protocols. A few hundred lines of Cloudflare Worker code can do it. The hard part isn’t the implementation. It’s recognizing that the old model doesn’t fit.
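PKCE, one of the dynamic-enrollment pieces named above, fits in a few lines. Per RFC 7636, the client generates a random code_verifier, derives code_challenge = BASE64URL(SHA-256(verifier)), sends the challenge with the authorization request, and reveals the verifier only at the token endpoint. This sketch uses Node’s crypto API; a Worker would use Web Crypto instead.

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE (RFC 7636) key material: a public client proves possession of
// the verifier without ever pre-sharing a secret, which is exactly
// the property a just-registered harness needs.
function makePkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes -> 43-character base64url code_verifier
  const verifier = randomBytes(32).toString("base64url");
  // code_challenge = BASE64URL(SHA-256(code_verifier)), method S256
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```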

Most teams building agentic systems are still solving the credential problem with the wrong tools. They’re writing API keys into system prompts, rotating them manually, and hoping nothing leaks. The alternative exists. It’s just not where most people are looking.