MCP Needs Connective Tissue
Model Context Protocol gives LLMs powerful tools, but without relationship data, every request becomes trial and error. Why that leads to hallucinations and why context matters.

When Anthropic announced the Model Context Protocol (MCP) last year, the developer community rightfully got excited. Finally, a standardized way to give LLMs access to tools, data sources, and external systems. No more bespoke integrations for every AI application. Just plug in an MCP server, and your AI assistant can suddenly interact with databases, APIs, file systems, and more.
It's a step forward. MCP solves a real problem.
But after building with it for a while, I've noticed something: MCP without meaningful surrounding context turns every request into trial and error. The LLM sees a pile of tools with no map of how they connect, and has to guess its way through.
The Missing Piece: Relationship Data
What typically happens when you configure MCP tools for an LLM:
You launch a few MCP servers for your database, filesystem, and Slack. The LLM suddenly sees a list of available tools:
- query_database
- read_file
- write_file
- send_slack_message
- list_slack_channels
- ... dozens more
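Concretely, what comes back from tool discovery is just a flat array of names and descriptions, with nothing linking one tool to another. A minimal sketch (the tool shapes here are illustrative, not taken from any specific MCP server):

```python
# What the LLM actually receives after tool discovery: a flat list of
# tool definitions with no relationships between them. Names illustrative.
tools = [
    {"name": "query_database", "description": "Run a SQL query"},
    {"name": "read_file", "description": "Read a file from disk"},
    {"name": "write_file", "description": "Write a file to disk"},
    {"name": "send_slack_message", "description": "Post to a Slack channel"},
    {"name": "list_slack_channels", "description": "List Slack channels"},
]

# Nothing here says which tools belong together, what order they run in,
# or how data flows between them -- the model must infer all of that.
for tool in tools:
    print(tool["name"])
```

Every tool is a peer of every other tool; the structure the model needs simply isn't in the payload.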
And then what?
The LLM has to figure out from scratch:
- Which tools are related to each other
- What workflows make sense
- What data flows between systems
- Which operations should happen in sequence
- What the dependencies are
It's doing this inference every single time, from first principles (if you're lucky), burning tokens and time just to reconstruct the basic topology of your systems.
Why This Matters
Without explicit relationships, LLMs spend enormous amounts of cognitive effort (and tokens) on basic discovery:
User: "Deploy the latest version of our API to production"
Without relationship data:
- LLM calls list_tools to see what's available
- Tries to infer which tools are deployment-related
- Guesses at the sequence of operations
- Might miss critical steps like running tests first
- Has to ask clarifying questions
- Eventually gets it done, but inefficiently
With entity relationships:
- LLM has axioms: repo → service → team → Slack channel
- Understands what operations each entity supports
- Reasons from these truths instead of guessing from patterns
- Executes based on your live domain model
The difference in reliability and speed is massive.
Trial and Error at Scale
The LLM sees dozens of tools and has to:
- Try tools that seem relevant
- See what works
- Backtrack when something fails
- Try a different combination
- Repeat until it stumbles onto the right path
It's brute-force problem solving. It's wasteful (most attempts miss), slow (trial and error burns time and tokens), unreliable (success depends on luck as much as skill), and dangerous (the LLM might confidently act on the wrong system).
Worse: this is how you get hallucinations.
When an LLM doesn't have explicit relationship data, it guesses. It infers. It fills in gaps based on patterns from training data that might not match your actual systems. It confidently tells you that Service A depends on Database B because that seems like a reasonable guess, even though in your infrastructure, it's actually Database C. It suggests a deployment workflow that sounds plausible but skips a critical security check your team added last month.
These hallucinations aren't just annoying. In a business context, they're a severe impediment. In many cases, they're a show-stopper. You can't build reliable automation on top of guesses. You can't trust an AI assistant that makes up dependencies. You can't deploy systems where the AI might confidently give you wrong information about blast radius or ownership.
Context and relationship data change this completely. Instead of guessing, the LLM knows. Instead of inferring based on patterns, it has explicit information. One precise action instead of dozens of scattered attempts. No guessing. No hallucinations. Just accurate information leading to correct actions.
Without it, the LLM is groping through a flat list of tools, hoping the sheer volume of attempts eventually produces a result. And hoping it doesn't confidently hallucinate something that breaks production.
Your Systems Are a Graph of Entities
Most real-world systems aren't flat lists of tools. They're graphs of entities with rich relationships:
- This GitHub repo deploys to that Kubernetes service
- That service depends on this database
- This team owns both the repo and the service
- That PagerDuty alert monitors the service health
- This Slack channel is where the team coordinates
- That Jira epic tracks the feature being built
MCP gives you tools, but it doesn't understand these entities or their relationships. The LLM has to reconstruct the entire domain model every time through inference and trial-and-error.
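To make the contrast concrete, the relationships above can be written down as a small typed graph. This is a sketch with hypothetical entity names, not a real ontology format:

```python
# A minimal entity graph: nodes are systems, edges are typed relationships.
# All entity names (payments, orders-db, etc.) are hypothetical.
edges = [
    ("repo:payments", "deploys_to", "service:payments-api"),
    ("service:payments-api", "depends_on", "db:orders-db"),
    ("team:payments", "owns", "repo:payments"),
    ("team:payments", "owns", "service:payments-api"),
    ("alert:payments-latency", "monitors", "service:payments-api"),
    ("team:payments", "coordinates_in", "channel:#payments"),
]

def related(entity, relation):
    """Follow a typed edge from an entity, in either direction."""
    out = [dst for src, rel, dst in edges if src == entity and rel == relation]
    out += [src for src, rel, dst in edges if dst == entity and rel == relation]
    return out

# "Who owns the service this repo deploys to?" is a two-hop lookup,
# not an inference problem.
service = related("repo:payments", "deploys_to")[0]
owners = related(service, "owns")
```

With the edges made explicit, questions like ownership or blast radius become lookups instead of guesses.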
It's like giving someone a social network where you can see all the people but none of the friendships, follows, or group memberships. Or giving a developer a codebase where they can see all the classes but none of the inheritance, composition, or dependencies. Technically you have all the pieces, but you're missing the structure that makes it comprehensible.
Why Training Won't Save You
You might be thinking: "Can't we just train an LLM on our systems? Let it learn how everything connects?"
In theory, yes. With enough training data, an LLM could learn how everything in your systems connects. It could memorize which services depend on which databases, what the deployment workflow looks like, how your teams are structured.
The problem: your real-world business is constantly evolving.
That service architecture you had last month? Three teams just shipped changes. The database you used to query directly? Now there's a new API layer. The deployment process? Updated yesterday with new security checks. The team that owned authentication? Half of them moved to a different project.
Training is a snapshot. It captures a moment in time. But modern engineering systems are living organisms that change daily. By the time you've finished training on your current state, your current state is already outdated.
This is fundamentally different from training on stable knowledge domains. Python's syntax doesn't change every Tuesday. The laws of physics are pretty reliable. But your infrastructure? Your services? Your team structure? They're in constant flux.
You can't train your way out of this. You need something that adapts in real-time.
What Actually Needs to Happen
The solution isn't to change MCP itself. MCP is doing its job: providing a standard way to expose tools.
The missing piece is an intelligence layer that maintains a live ontology of your entities and their relationships. Think of it like object-oriented programming for your entire engineering ecosystem.
Instead of just mapping tools to tools, you need a graph of entities:
- Source code repos
- Deployment targets
- Services and APIs
- Database schemas
- Teams and ownership
- Incidents and alerts
- Features and tickets
Each entity has implicit operations (like methods in OOP) that can act on it. A GitHub repo has deploy, rollback, view_logs. A Kubernetes service has scale, restart, check_health. A team has notify, escalate, query_ownership.
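The OOP analogy can be sketched directly: each entity type carries the operations valid for it, so a planner can ask "what can I do with this?" instead of scanning a flat tool list. The classes and operation names below follow the examples in the text; everything else is a hypothetical sketch:

```python
from dataclasses import dataclass

# Entity types with implicit operations, like methods in OOP.
# Operation names mirror the examples above; the classes are illustrative.
@dataclass
class Repo:
    name: str
    operations = ("deploy", "rollback", "view_logs")

@dataclass
class Service:
    name: str
    operations = ("scale", "restart", "check_health")

@dataclass
class Team:
    name: str
    operations = ("notify", "escalate", "query_ownership")

def valid_operations(entity):
    """What an LLM (or planner) can ask: which actions does this entity support?"""
    return entity.operations
```

The point of the design is that operations attach to entity types, so the valid action space for any node in the graph is known up front rather than discovered by trial and error.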
The power comes from understanding the relationships between entities. When you know that:
- This source code repo deploys to that Kubernetes service
- That service depends on this database
- This team owns both
- That PagerDuty alert fires when it goes down
- This Slack channel is where the team coordinates
...the LLM can reason about what actions make sense in context. Not because you've explicitly mapped every workflow, but because it understands the domain model.
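As an illustration, with those relationships made explicit, the earlier "deploy to production" request reduces to a graph walk rather than trial and error. All entity names and relation labels here are hypothetical:

```python
# Resolve a deploy request by walking explicit relationships instead of
# guessing from a flat tool list. Names and relations are hypothetical.
graph = {
    ("repo:api", "deploys_to"): "service:api-prod",
    ("service:api-prod", "owned_by"): "team:platform",
    ("team:platform", "coordinates_in"): "channel:#platform",
}

def resolve(entity, relation):
    return graph.get((entity, relation))

def plan_deploy(repo):
    """Derive one precise action plan from known relationships."""
    service = resolve(repo, "deploys_to")
    team = resolve(service, "owned_by")
    channel = resolve(team, "coordinates_in")
    return [
        f"run_tests({repo})",
        f"deploy({repo}, target={service})",
        f"notify({channel})",
    ]

plan = plan_deploy("repo:api")
```

The plan falls out of the domain model: the deployment target, the owning team, and where to notify are all resolved from edges, not inferred from tool names.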
This means:
Entity-centric modeling where the graph captures your actual systems (repos, services, deployments, teams, incidents) and how they relate, not just abstract tool relationships.
Dynamic ontology that updates in real-time as your systems evolve. When a new service deploys, when team ownership changes, when dependencies shift, the graph reflects it immediately.
Implicit tool discovery where the LLM understands what operations are valid for each entity type and can compose them based on the relationships, rather than brute-forcing through a flat list of disconnected tools.
LLM-agnostic context so whether you're using OpenAI, Anthropic, Grok, Ollama, or your own models, they all get the same rich understanding of your entities and relationships without needing to be retrained.
The Bottom Line
MCP gives LLMs access to powerful tools. But access without context means every request is trial and error, and the LLM is one confident hallucination away from acting on the wrong system.
The real power comes when we give LLMs not just the tools, but the context: the live, evolving relationship data that shows how everything fits together right now.
That's when we stop asking LLMs to guess and infer from scattered attempts or stale training data, and start letting them work confidently from accurate, real-time understanding.
The tools are the foundation. The relationships are what make them useful. And keeping those relationships current is what makes them reliable.
This is exactly what we're building with SixDegree. We create live ontologies of your engineering systems, providing the relationship data and context that makes AI tools actually effective. If you're thinking about these problems too, I'd love to show you how it works.