sixdegree


Your operations run on tribal knowledge. AI will make that worse.

Every operations leader has tried to fix the opacity problem. AI agents make the cost of failure visible and immediate. Here is what changes now.


By Craig Tracey


I have hated dealing with RFPs for as long as I have been senior enough to be on the hook for them.

Not because the questions are hard. The questions are almost never hard. Which subprocessors handle customer PII. Who owns the runbook for production incidents. What's the access review cadence for financial systems. Where does customer data live, geographically. Who is the executive owner of business continuity.

None of those are hard questions in the abstract. They are hard questions in your company. They are hard questions in every company I have ever worked at.

The reason they are hard is that the answer to "who handles customer PII" lives across three vendor lists, two security reviews, and a Slack thread someone screenshotted last summer. The answer to "who owns the runbook" depends on which version of "owns" you mean and which reorg the asker would recognize. The answer to "where does our data live" is correctly documented in five places and wrong in three.

So you do what every operations team does. You divide up the questions. You send messages to people who might know. You collect answers in a shared doc. You reconcile, somewhat. You submit on day eleven, with caveats. The answers are mostly right. They are inconsistent in ways nobody catches. The procurement team on the other side finds three of the inconsistencies in the follow-up call. You explain them away. The deal closes anyway. Nobody talks about what just happened.

What just happened is that your company didn't know the answers to basic structural questions about itself. The cost of that ignorance was a week of senior operator time, a slightly damaged trust signal to a buyer, and a quiet acceptance that this is just how it works.

I have lived this from both sides. As the technical leader on the hook for answering security questionnaires and RFPs that my sales team needed back yesterday. And as the technical leader inside an M&A diligence process, where the same questions get asked under far higher stakes, and where the inconsistencies don't just damage trust; they reduce the price.

It never gets easier, because the cost is chronic, not acute. Nobody dies from a slow RFP. The deal closes anyway. The lesson never lands hard enough for anyone to fix the underlying problem.

The underlying problem is that operations runs on tribal knowledge, and AI agents are about to make that situation a lot worse before they make it better.

Operations runs on connections nobody ever wrote down

The work of an operations team is about relationships. Which vendor is on which contract for which business unit. Which employee is the backup for which process. Which system depends on which other system. Which compliance obligation maps to which workflow. Which spend category covers which initiative. Which approval chain governs which kind of decision.

None of this lives in any single tool. All of it lives in people's heads, in spreadsheets nobody else can find, in Slack threads from six months ago, in the institutional memory of three people who have been there long enough to know.

Every operations leader has watched this play out. Every operations leader has tried to fix it. Process maps. BPM tools. Internal wikis. Service catalogs. "Single source of truth" initiatives. New hires assigned to "document the operating model." Quarterly reorgs that promised to clean things up.

They all decay. They decay because the business changes faster than the maintenance cycle. By the time the documentation is written, two vendors have been swapped, three teams have been reorganized, and the process owner you interviewed has moved to a different role. The decay isn't a failure of execution. It's what happens when you try to capture a moving target with a static artifact.

This isn't new. Operations leaders have managed around it for decades. The cost has always been there. Cross-cutting questions take longer than they should. Audits surface things nobody knew. Vendor consolidation projects discover redundancies and gaps. Senior operations people earn their salary partly because they hold the picture in their heads when the systems can't.

That cost has been tolerable because the consumers of operational information have been patient humans. Patient humans wait two weeks. Patient humans call three colleagues. Patient humans read between the lines of a stale wiki page. Patient humans default to "let me check on that" when they aren't sure.

AI agents are not patient humans.

Why AI agents make this worse before they make it better

The natural assumption is that AI agents will solve operational opacity. Ask the agent which subprocessors touch customer data, ask it who owns the runbook, ask it which contracts are exposed to a new regulation, and it will figure it out. That intuition is right about the destination and wrong about the path.

When you point an AI agent at fragmented systems and ask a cross-cutting question, it does one of two things. It guesses, sounding confident while being partially wrong. Or it pulls data from one system and presents a partial answer as a complete one. Either failure mode is worse than the human RFP process it replaces, because the human process at least had the honesty to put caveats in. The agent doesn't say "let me check with the team." It produces an answer.

The audit log will be clean. Every workflow will sign off. The agent will have done what it was asked. The answer will be wrong, and you won't find out until the regulator does, or the customer does, or the acquirer does in due diligence.

This is what happens when you deploy an agent on top of operational data that is fragmented, partially stale, and never authoritatively connected. The model is fine. The agent is doing its job. The substrate underneath is what's broken, and the substrate is what your operations team has been wanting to fix for the last fifteen years.

What operations leaders have always wanted

Here is what you have actually wanted, every time you have tried to fix this. A live picture of how your business runs. Vendor relationships derived from contract systems and AP records, not maintained in a separate vendor list. Ownership derived from your org chart and the systems people actually use, not maintained in a wiki. Compliance obligations linked to the specific contracts, processes, and systems they apply to, derived from the work you have already done, not redone every quarter.
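The word "derived" is doing the real work in that paragraph. A minimal sketch of the idea, assuming nothing about any particular product: vendor relationships are computed from the contract and AP exports you already have, instead of being hand-maintained in a separate list. Every record shape and field name below is an illustrative assumption, not a real schema.

```python
# Sketch: derive vendor -> business-unit relationships from systems of
# record instead of maintaining a separate, decaying vendor list.
# All record shapes and field names are illustrative assumptions.

contracts = [  # hypothetical export from the contract system
    {"vendor": "Acme Cloud", "business_unit": "Payments", "handles_pii": True},
    {"vendor": "LedgerCo", "business_unit": "Finance", "handles_pii": False},
]
ap_records = [  # hypothetical export from accounts payable
    {"vendor": "Acme Cloud", "business_unit": "Payments"},
    {"vendor": "ShredIt", "business_unit": "Facilities"},
]

def derive_vendor_edges(contracts, ap_records):
    """Union the vendor->business-unit edges implied by each system,
    tagging every edge with the systems that attest to it."""
    edges = {}
    for rec in contracts:
        key = (rec["vendor"], rec["business_unit"])
        edge = edges.setdefault(key, {"sources": set(), "handles_pii": False})
        edge["sources"].add("contracts")
        edge["handles_pii"] |= rec.get("handles_pii", False)
    for rec in ap_records:
        key = (rec["vendor"], rec["business_unit"])
        edge = edges.setdefault(key, {"sources": set(), "handles_pii": False})
        edge["sources"].add("ap")
    return edges

edges = derive_vendor_edges(contracts, ap_records)

# "Which subprocessors handle customer PII?" becomes a lookup, not a hunt:
pii_vendors = sorted({v for (v, _), e in edges.items() if e["handles_pii"]})

# An edge attested by only one system is a visible reconciliation flag,
# not a silent inconsistency waiting for a procurement call to surface it:
unreconciled = sorted(k for k, e in edges.items() if len(e["sources"]) == 1)
```

The point of the sketch is the shape of the pipeline, not the code: the edges are recomputed from the source exports on every run, so a swapped vendor shows up the next time the derivation runs rather than whenever someone remembers to edit a wiki.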

You wanted this in 2010. The technology wasn't there. Knowledge graphs were enterprise-grade complicated. Integration projects took quarters. The people who could build it cost more than the value they would produce.

You wanted it in 2018. Closer. Some of the pieces existed. But the consumer wasn't there yet. The value of having a connected picture of operations was still mostly theoretical, because the patient humans were still patient.

You want it now, and three things have changed at the same time. Discovery integrations have gotten dramatically simpler. You can connect a system in minutes instead of months. Graph databases that hold relationship data are mature and operationally manageable. And AI agents have made the demand-side pressure undeniable. Once you have agents trying to answer cross-cutting questions, the cost of opacity becomes immediate and visible. Every wrong answer is a near-miss or worse.

The technology is here. The forcing function is here. The thing operations leaders have wanted for fifteen years is buildable, and the cost of not building it is rising fast.

What this looks like in practice

Three concrete shifts you can recognize from your own work.

The next RFP. Today, eleven days of cross-team archaeology. With a connected operational picture, the structural questions answer themselves from the systems of record. You spend your time on the questions that genuinely require judgment, not on reconciling whether two spreadsheets agree about who handles PII.

Vendor risk. "What happens to operations if vendor Y goes under?" Today, panicked Slack thread, three meetings, partial answer. With a connected picture, vendor relationships are derived continuously from your contract and procurement systems. Dependent processes and systems are linked. The answer is a query, not a project.

Compliance obligations. A new regulation lands. "Which parts of our business are affected?" Today, a working group, a steering committee, a six-week mapping exercise. With a connected picture, the regulation maps once to the entities and processes it touches. Going forward, every change in the business propagates against that mapping automatically.
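Both the vendor-risk question and the regulation question are the same operation underneath: a traversal over a connected picture. A hedged sketch using a plain adjacency map, with hypothetical entity names, rather than any particular graph database:

```python
from collections import deque

# Hypothetical slice of a connected operational picture: directed
# edges ("depends on" / "applies to") between named entities.
graph = {
    "process:payroll": ["system:erp", "vendor:LedgerCo"],
    "process:checkout": ["system:payments-api"],
    "system:payments-api": ["vendor:Acme Cloud"],
    "regulation:DORA": ["process:payroll", "process:checkout"],
}

def reachable_from(node, graph):
    """Everything transitively linked to `node` -- one BFS, not a project."""
    seen, queue = set(), deque([node])
    while queue:
        current = queue.popleft()
        for nbr in graph.get(current, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def dependents_of(node, graph):
    """The reverse question: which entities break if `node` goes away?"""
    return {src for src in graph if node in reachable_from(src, graph)}

# "What happens to operations if Acme Cloud goes under?"
at_risk = dependents_of("vendor:Acme Cloud", graph)

# "Which parts of our business does the new regulation touch?"
in_scope = reachable_from("regulation:DORA", graph)
```

The data model is the asset; the queries are trivial once it exists. When the regulation's edges are mapped once, a reorg or a vendor swap changes the graph, and the same traversal returns the updated answer with no remapping exercise.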

These are not AI features. They are operations questions that should always have been answerable. AI is a consumer of the picture, sometimes the most demanding consumer, which is what makes it the forcing function. The picture itself is the asset, and the asset is what your operations team has been wanting to build for fifteen years.

What to ask vendors pitching you AI agents

Three diagnostic questions that cut through the marketing.

When the agent says "this customer's data is processed only by approved subprocessors," which system did it read that from, and when was that system last reconciled with the others? If the answer is vague, the agent is guessing. You don't want a confident guesser making compliance assertions about your operations.

If we reorganize ownership next quarter, how long until the agent reflects the change? If the answer is "we will update the prompts" or "we will retrain," you are being sold a static system that pretends to be dynamic. Real operational change has to propagate to your agents within hours, not quarters.

Can the agent answer questions that span vendors, contracts, processes, and people? Or only questions that fit inside one system? If the answer is the latter, the agent is operating on a slice of your operations, not on your operations. Slice-based answers to cross-cutting questions are how confidently wrong outputs get produced.

If the answers are vague or system-bounded, you are being sold an agent. You need a substrate.

Why this matters now

Operations leaders have spent careers being told that opacity is just the cost of running a complex business. It isn't. It has been the cost of running a complex business with the tools we had.

The tools changed. The cost no longer has to be paid.

The leaders who recognize this first will run their businesses with a clarity their competitors can't match. They will answer RFPs in a day instead of a week. They will catch vendor risk before it becomes vendor crisis. They will know what their operations are doing in real time, because the systems will tell them. They will deploy AI agents that work, because the agents will be reading from something real.

The leaders who don't will spend the next decade losing ground to the ones who do. Their AI initiatives will stall. Their cross-cutting questions will keep taking two weeks. Their senior operations people will keep being the only path to clarity. Those people will keep retiring, and the picture will keep being incomplete.

I built SixDegree because I am tired of operations running on tribal knowledge, and because the next generation of operators deserves better than the system I came up in. A live picture of how your business runs, derived from the systems you already use, queryable by humans and the AI agents you trust. The first time you ask a question and get a real answer in two minutes that used to take two weeks, you will wonder how you ever ran the business without it.

Running operations on tribal knowledge?

We're working with a small number of design partners. If you're building for operational clarity and want a live picture of how your business runs, let's talk.