Internal Developer Portals vs Context Layers: What AI Agents Actually Need
IDPs were built for humans browsing catalogs. AI agents need something different: queryable relationships, real-time state, and cross-system reasoning. Here's why IDPs can't close the gap.

Internal developer portals had a good run. Backstage, Cortex, OpsLevel, Port. They all solved a real problem. As engineering organizations scaled past fifty or a hundred engineers, nobody could keep the whole system in their head anymore. A portal gave you a place to look things up.
I say this as someone who built a managed Backstage offering. I've seen the architecture from the inside. I've watched organizations deploy it, struggle with it, and eventually ask whether it's solving the right problem.
The answer, increasingly, is no. A portal is built for humans who browse. AI agents don't browse. They query. And that difference breaks everything.
The Portal Model
An IDP works like this: teams declare what they own in YAML files or through a UI. The portal aggregates those declarations into a searchable catalog. When someone needs to know who owns a service, or what documentation exists, or which team to contact, they open the portal and look.
This model has assumptions baked in:
- Humans will keep declarations current. They won't. Every organization I've talked to describes the same pattern: the catalog is accurate for about a quarter after initial setup, then drift sets in. New services get deployed without catalog entries. Ownership changes but nobody updates the YAML. Documentation links rot. The portal was supposed to improve developer experience and reduce toil. Now you have engineers whose job is to maintain the portal. Spotify reports 99% internal adoption of Backstage. External organizations average 10%. That gap tells you everything about who can afford the maintenance burden.
- The portal is the interface. Humans navigate to a web UI, search or click around, find what they need. This assumes a human in the loop, consuming information visually. But the LLM is the new interface. When the primary consumer of your infrastructure data is an agent, the portal becomes irrelevant. What matters is the API, the data model, and whether relationships are queryable.
- Relationships are declared, not discovered. If you want Service A to show its dependency on Service B, someone has to write that down. The portal doesn't know what actually connects to what. It only knows what someone claimed connects to what.
- Little to no runtime state. IDPs know what was declared. They don't know what's happening right now. Is the service healthy? What version is deployed? Who's on call this week? That information lives in other systems. The portal can link to them, but it can't answer questions that require correlating live state across systems.
- Connections are built in code, not rules. In Backstage, integrations are plugins written in TypeScript. Want to connect a new system? Write code, maintain it, debug plugin compatibility on every upgrade. This is engineering overhead masquerading as extensibility. A catalog should be trivially extensible. A rules-based approach lets you define new entity types and relationships declaratively, without shipping custom code or waiting for a plugin ecosystem to catch up.
These assumptions made sense in 2019. They don't make sense when the "user" is an AI agent.
What Agents Need Instead
When an LLM-powered agent needs to understand your infrastructure, it doesn't open a browser and navigate to your portal. It needs to make queries and get structured answers.
Consider a real scenario: a critical CVE drops at 2am. Your security team deploys an AI agent to assess blast radius. The agent needs to answer: which services use the affected library, which of those are internet-facing, who owns them, and who's on call right now?
What does an IDP offer here? A static catalog that might list services. Maybe ownership fields, if someone filled them out. No runtime state. No dependency graph derived from actual connections. No integration with your identity provider or on-call system.
The agent would need to:
- Query the portal API (assuming it has one) to find services
- Hope the catalog has accurate dependency data (it doesn't)
- Make separate calls to GitHub to find library usage
- Make separate calls to Kubernetes to find what's deployed
- Make separate calls to PagerDuty to find who's on call
- Somehow correlate all of this without any system understanding the relationships
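In code, that archaeology looks something like the sketch below. The fetch functions are stand-ins for real API clients (the endpoints and data shapes are assumptions, stubbed with canned data). The point is that every piece of correlation logic lives in the agent, because no system holds the relationships:

```python
# Stand-ins for separate API clients. In reality each would be a network
# call to a different system with its own auth, pagination, and data model.
def github_repos_using(library: str) -> list[str]:
    return ["org/payments-api"]          # stubbed GitHub code search

def k8s_deployments() -> list[dict]:
    return [{"name": "payments-api", "image": "payments-api:1.4.2",
             "internet_facing": True}]   # stubbed cluster listing

def catalog_owner(service: str) -> str:
    return {"payments-api": "team-payments"}.get(service, "unknown")

def pagerduty_oncall(team: str) -> str:
    return {"team-payments": "alice"}.get(team, "unknown")

def blast_radius(library: str) -> list[dict]:
    """The agent stitches four systems together by string-matching names."""
    affected_repos = github_repos_using(library)
    findings = []
    for dep in k8s_deployments():
        # Fragile join: hope the repo name matches the deployment name.
        if any(repo.endswith(dep["name"]) for repo in affected_repos):
            if dep["internet_facing"]:
                team = catalog_owner(dep["name"])
                findings.append({"service": dep["name"], "team": team,
                                 "on_call": pagerduty_oncall(team)})
    return findings

print(blast_radius("libfoo"))
```

Every join in that function is a guess, and every guess is a place where the agent can silently get the wrong answer.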
This isn't using a context layer. This is archaeology. The agent is doing the same Slack-and-spreadsheet correlation that humans do, just faster.
The Context Layer Difference
A context layer inverts the model. Instead of waiting for humans to declare what exists, it continuously discovers entities and relationships from your actual infrastructure.
When an integration connects to Kubernetes, it doesn't just list deployments. It maps each deployment to its namespace, its container images, the source repositories those images came from, the teams that commit to those repos. When it connects to your identity provider, it maps users to teams, teams to services, services to on-call schedules.
The result is a graph. Not a catalog of declarations. A queryable graph of relationships derived from system state. And because it's connected to live systems, it can show runtime state: what's deployed right now, what's healthy, what's degraded, who's currently on call. Not what someone wrote in a YAML file last quarter.
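As a sketch (system responses stubbed, shapes assumed), discovery means deriving edges from what the live APIs actually return, rather than reading them from a declaration file:

```python
# Stubbed live-system responses; real integrations would call the APIs.
k8s_response = [{"deployment": "payments-api", "namespace": "prod",
                 "image": "registry.local/payments-api:1.4.2"}]
registry_response = {"registry.local/payments-api:1.4.2": "org/payments-api"}
idp_response = {"org/payments-api": "team-payments"}  # identity provider

def discover_edges() -> set[tuple[str, str, str]]:
    """Derive (subject, relation, object) edges from observed state."""
    edges = set()
    for d in k8s_response:
        edges.add((d["deployment"], "runs_image", d["image"]))
        repo = registry_response.get(d["image"])
        if repo:
            edges.add((d["image"], "built_from", repo))
            team = idp_response.get(repo)
            if team:
                edges.add((repo, "owned_by", team))
    return edges

for edge in sorted(discover_edges()):
    print(edge)
```

Nobody declared any of those edges. They fall out of correlating what the systems report, and they regenerate every time the systems change.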
A context layer can also integrate with your existing IDP and on-call systems. It doesn't replace them. It connects them. The service catalog becomes one more data source in the graph, correlated with runtime state from Kubernetes, ownership from your identity provider, and schedules from PagerDuty. The portal's declarations get validated against reality instead of existing in isolation.
Now that CVE scenario works differently. The agent queries: "Find all deployments using containers built from repositories that include the affected library, filter to those with ingress rules allowing external traffic, return the owning teams and their current on-call."
One query. Six relationship hops. Answers in seconds. No archaeology.
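On a small in-memory graph, that query amounts to chained relationship hops plus a filter. Entity names and relation labels here are illustrative, not any particular product's schema:

```python
# Toy relationship graph: (subject, relation) -> objects.
edges = {
    ("libfoo", "used_by"): ["org/payments-api"],
    ("org/payments-api", "builds"): ["payments-api:1.4.2"],
    ("payments-api:1.4.2", "runs_in"): ["payments-api"],
    ("payments-api", "owned_by"): ["team-payments"],
    ("team-payments", "on_call"): ["alice"],
}
internet_facing = {"payments-api"}  # derived from observed ingress rules

def hop(nodes: list[str], relation: str) -> list[str]:
    return [o for n in nodes for o in edges.get((n, relation), [])]

# "Deployments using containers built from repos that include the affected
# library, filtered to internet-facing, with owning team and on-call."
deployments = hop(hop(hop(["libfoo"], "used_by"), "builds"), "runs_in")
exposed = [d for d in deployments if d in internet_facing]
result = [{"deployment": d,
           "team": hop([d], "owned_by")[0],
           "on_call": hop(hop([d], "owned_by"), "on_call")[0]}
          for d in exposed]
print(result)
```

The agent never string-matches names across systems; it just walks edges that the context layer already resolved.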
Why IDPs Can't Just Add AI Features
Some IDP vendors are adding "AI assistants" or "copilots" to their products. These features let you ask questions in natural language and get answers from the catalog. Port has added AI-powered "catalog discovery" that suggests missing entities. OpsLevel has shipped an MCP server to expose their catalog to agents.
These help at the margins, but they don't solve the structural problem. Port's AI discovery still produces suggestions that require manual approval, layered on top of a declaration-based model. OpsLevel's MCP server gives agents faster access to catalog data, but if that data is stale, the agent just gets stale answers faster. The AI is still querying a declaration-based catalog. If the underlying data is wrong, the AI gives wrong answers with more confidence. If relationships aren't declared, the AI can't infer them. Garbage in, garbage out. Just with a friendlier interface.
The deeper issue is architectural. IDPs are portal-first. They're built around a web UI for humans. APIs exist, but they're secondary. The data model reflects "what humans need to browse," not "what agents need to query."
A context layer is API-first (or more specifically, MCP-first). The entire system is designed around programmatic queries. The graph is the primary artifact. Human interfaces are views into the graph, not the other way around.
Portals Are Not Platforms
There's a branding problem in this space. IDP vendors call their products "platforms." Marketing pages promise "the platform for developer experience" or "your internal developer platform."
But a portal is not a platform.
A platform is something you build on. It has primitives. It's composable. Other systems integrate with it as infrastructure, not as a destination. Kubernetes is a platform. AWS is a platform. Your CI/CD system is a platform. You deploy things to them, build workflows on top of them, extend them with your own logic.
A portal is a view. You look at it. You don't build on it. You might embed some widgets or write a plugin, but the portal isn't the foundation of anything. It's the glass window you peer through to see what's behind it.
And you'd better hope the portal has the view you need. When an incident hits and you need to know which customers are affected by a failing service, does your portal show that? When you need to trace a deployment back to the PR that introduced the bug, through the CI pipeline, to the engineer who merged it, does your portal connect those dots? Probably not. Portals show what they were designed to show. If your question doesn't fit the pre-built views, you're back to Slack archaeology.
This distinction matters because it reveals where the value actually lives. When an IDP vendor says "platform," what they mean is "we aggregated data from your actual platforms and put a UI on it." The platforms are still GitHub, Kubernetes, PagerDuty, your cloud provider. The portal is just a lens.
An actual platform for developer experience would be the system of record, not a viewer of other systems of record. It would be the thing other tools integrate with, not the thing that integrates with other tools. It would have primitives that your workflows depend on.
A context layer is closer to this. The ontology is the system of record for relationships. Agents query it as infrastructure. Other tools can integrate with it through MCP. The graph is a primitive that workflows depend on. It's not a view you open in a browser tab. It's the connective tissue that other systems use.
The MCP Inflection Point
The Model Context Protocol changed what's possible. Before MCP, integrating an AI agent with infrastructure systems meant building custom tooling for each combination of agent framework and data source. The integration surface was enormous.
MCP provides a standard interface. Any MCP-compatible agent can query any MCP-compatible data source. This isn't a theoretical benefit. It's how agents are being built right now.
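At the wire level, MCP is JSON-RPC: a client sends `tools/list` to see what a server exposes, then `tools/call` to invoke a tool. A rough sketch of the request shapes (the tool name and arguments are hypothetical; each server defines its own tool schemas):

```python
import json

# JSON-RPC 2.0 request an MCP client sends to enumerate a server's tools.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# And to invoke one. "query_graph" and its arguments are made up for
# illustration; the protocol only standardizes the envelope.
call_tool = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {
        "name": "query_graph",
        "arguments": {"entity": "payments-api", "relation": "owned_by"},
    },
}

print(json.dumps(call_tool, indent=2))
```

The standardization is in the envelope, not the payload. What the tool actually returns still depends entirely on the data model behind it.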
IDP vendors see this. They're racing to bolt MCP onto their catalogs. "Now with MCP support!" the roadmaps say. But this misses the point entirely.
Exposing a declaration-based catalog through MCP just gives agents programmatic access to stale data. The agent can now query your catalog faster, but the catalog is still wrong. The relationships are still whatever someone remembered to declare. The ownership is still from last quarter. MCP is a transport layer, not a truth layer.
The problem isn't the interface. It's the data model. You can't MCP your way out of YAML rot.
A context layer built for the MCP era starts from different assumptions: relationships are discovered, state is real-time, and the primary consumer is an agent making structured queries, not a human browsing a portal. MCP isn't bolted on as an afterthought. It's the primary interface, and the data behind it is derived from reality, not declarations.
Progressive Disclosure for Agents
There's another dimension that IDPs miss entirely: progressive disclosure.
When an agent connects to a context layer, it shouldn't see every possible entity type and every possible action. That's as overwhelming to an LLM as it would be to a human. The context layer should expose only what's relevant to the current query.
If the agent is investigating a GitHub repository, it sees GitHub-relevant tools and can traverse to related entities (deployments, teams, CI pipelines). It doesn't see tools for systems that aren't connected. The context layer constrains the action space based on what the query discovers.
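A minimal sketch of the idea, with hypothetical tool names: the set of tools exposed to the agent is a function of the entity it is currently examining and the systems that are actually connected.

```python
# Hypothetical tool registry keyed by entity type.
TOOLS_BY_ENTITY = {
    "repository": ["list_commits", "find_ci_pipelines", "trace_deployments"],
    "deployment": ["get_health", "list_containers", "find_owning_team"],
    "team": ["get_on_call", "list_owned_services"],
}
CONNECTED_SYSTEMS = {"github", "kubernetes"}  # PagerDuty not connected
TOOL_REQUIRES = {"get_on_call": "pagerduty", "find_ci_pipelines": "github",
                 "trace_deployments": "kubernetes"}

def visible_tools(entity_type: str) -> list[str]:
    """Expose only tools relevant to the current entity, and only those
    whose backing system is actually connected."""
    return [t for t in TOOLS_BY_ENTITY.get(entity_type, [])
            if TOOL_REQUIRES.get(t, "") in CONNECTED_SYSTEMS | {""}]

print(visible_tools("repository"))
print(visible_tools("team"))  # on-call tool hidden: no PagerDuty connected
```

The agent examining a repository sees repository tools; the moment it traverses to a team, the tool set changes under it.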
IDPs have no concept of this. They're flat catalogs. Everything is equally accessible, which means nothing is prioritized. An agent querying an IDP gets the same undifferentiated firehose that a human gets, just through an API.
When IDPs Still Make Sense
IDPs aren't useless. They serve real functions:
Golden paths and templates. If you want to give engineers a standardized way to spin up new services with all the right scaffolding, an IDP is a reasonable delivery mechanism. This is a human workflow for humans.
Scorecards and compliance dashboards. If you need to track which services meet production-readiness criteria, IDPs can provide that view. Again, this is for human consumption.
Documentation aggregation. A central place to find docs, runbooks, and API specs has value. Humans search for these things.
The pattern: IDPs work when the audience is humans consuming information visually. They fail when the audience is agents executing queries programmatically.
The Real Question
If you're evaluating your infrastructure tooling with AI agents in mind, ask these questions:
Can an agent query relationships, or only retrieve catalog entries? If your tooling can answer "what services exist" but not "what depends on what," agents will hit a wall on any non-trivial task.
Is the data discovered or declared? Declared data goes stale. Agents making decisions on stale data make wrong decisions. Real-time discovery is the only way to keep pace with systems that change constantly.
Is the interface agent-first or portal-first? MCP compatibility is table stakes. But the data model matters more than the protocol. An MCP interface to a static catalog is still a static catalog.
Does the system support relationship traversal? One-hop queries are easy. Multi-hop queries ("find deployments using containers built from repos owned by teams without on-call coverage") require a graph. Most IDPs don't have one.
The IDP era solved human-scale discovery. The AI agent era requires machine-scale reasoning. These are different problems with different solutions.
SixDegree is built as a context layer for AI agents: continuous discovery, real-time relationships, MCP-native queries. If you're deploying agents and hitting the limits of your current tooling, let's talk.