The Trust Stack: Reality Maintenance as a Leadership Discipline

Last week, a video clip went viral not because of what it showed, but because people couldn’t agree whether it was real. The arguments in the comments — pixel analysis, motion blur, finger counts — were more prominent than the content itself. The question “is this authentic?” had eclipsed the question “what does this mean?”
That’s a small signal of something structural. Authenticity disputes are becoming the default mode of engaging with information. Not the exception. The default.

It’s happening at every scale. Regulatory power is being used to influence media behavior in ways that look less like oversight and more like leverage over the information environment. Major AI companies are stepping back from categorical safety commitments toward something more like triage — not because they’ve abandoned their values, but because the original framework assumed global governance coordination that never materialized and a pace of development that turned out to be faster than anyone’s safety science could keep up with. And compounding infrastructure shocks — power outages layered on weather events layered on disrupted supply chains — are reminding organizations that the systems they depend on are more brittle than their continuity plans acknowledge.
None of this is an argument for alarm. It is an argument for a different kind of rigor.
The Operating System Problem
“Trust” has become one of those words that can mean anything or nothing. Like “authentic.” Or “transformation.” Or, lately, “responsible.”
At its best, trust names something precise: the social and organizational infrastructure that lets us coordinate, take risks, and build things that extend beyond a small circle of people who already agree. At its worst, it becomes a brand modifier. A vibe. A communications strategy.
But in the environment we’re actually in, trust is neither a slogan nor a sentiment.
It’s an operating system problem.
Simulating certainty has gotten cheap. We can generate a confident-sounding explanation, a plausible image, a persuasive narrative — at industrial speed, on demand, from a prompt. And when certainty is cheap, telling the genuine from the simulated gets expensive.

That’s why hot takes are now a leadership risk. They reward speed over accuracy, confidence over competence, and performance over accountability. They make it easy to mistake commentary for leadership. And they make it easy to skip the work and still feel productive.
But the work is the point.

There’s a line attributed to Eisenhower that I keep returning to: “In preparing for battle I have always found that plans are useless, but planning is indispensable.”
The compressed version: plans are useless; planning is indispensable.
It’s usually cited as military pragmatism — a reminder that the map is not the territory. But it holds a governance principle that leaders still underuse: the value was never the plan. It was the discipline of thinking that produced it.
Better planning beats a better plan.
And in a default-distrust era, that planning discipline needs a name.
I’ve been calling it “reality maintenance.”
Reality Maintenance (Definition)
Reality maintenance is the set of repeatable habits that help an organization decide, communicate, and act responsibly under uncertainty.

It’s what keeps teams from drifting into false certainty, weaponized ambiguity, or outsourced judgment — to algorithms, to whoever’s loudest in the meeting, or to the AI tool that generates a confident answer at the push of a button.
Reality maintenance doesn’t mean insisting everyone shares the same worldview. It means making it harder for an organization to accidentally manufacture its own.
The trouble is that “let’s be more thoughtful” doesn’t translate into practice. You can’t audit it. You can’t train it. You can’t hand it to a new manager and expect consistent results.
For a discipline to be operational, it needs structure. A stack.
The Trust Stack

Five layers. Each one supports the ones above it.
1. Provenance: Where did this come from?
Before we debate what something means, we need to know what it is.

Provenance is the difference between first-hand reporting and interpretation. Between a primary source and a polished summary. Between an internal metric that someone measured and a number that’s been passed along so many times it’s lost its origin story.
When provenance is weak, the rest of the stack becomes performance.
In an environment where authenticity disputes are becoming routine — where the first instinct is to ask “but is this real?” before asking “what does this tell us?” — provenance is not a nice-to-have. It’s the foundation that makes every subsequent question answerable.
In practice: require sources for high-impact claims. Ask what would need to be true for the opposite conclusion to hold. If you can’t answer that, you don’t yet understand the claim.
2. Verification: What does “true enough to act on” mean here?
Verification isn’t a hunt for perfect certainty. It’s an agreement about your standards of evidence.

If you don’t define your standard of evidence, your organization will default to whatever the culture rewards most: speed, seniority, or a good story told with confidence. Usually it’s some combination of all three.
This matters especially when AI is in the loop. A well-produced AI summary feels authoritative. It presents no hedging, no uncomfortable pauses, no “I’m actually not sure about this one.” The confidence of the output is decoupled from the confidence that should attach to the underlying claim. That decoupling is a systems problem, and it doesn’t solve itself.
In practice: use a simple confidence scale in internal updates — high, medium, low — and make it genuinely safe to say “we don’t know yet.” That phrase should be a signal of rigor, not a professional liability.
3. Accountability: Who owns the decision and the outcome?
Accountability is where trust becomes operational.

It answers the question that most organizations avoid until something goes wrong: if this fails, who is responsible for fixing it?
The risk of diffuse accountability is that decisions still get made and outputs still get deployed, but when something goes wrong, the answer to “who owns this?” is a list of people who were all involved but none of whom feels responsible. That’s not a people failure. It’s a systems failure.
When AI governance moves from categorical commitments to triage — as we’re seeing in parts of the industry — the underlying risk doesn’t disappear. It diffuses across a more complex set of contingencies. That’s precisely when explicit accountability structures matter most.
In practice: for high-impact decisions, name an owner explicitly. Separate “who recommended” from “who decided.” These are not the same thing, and conflating them is how accountability gets lost quietly.
4. Resilience: What breaks if we’re wrong?
Resilience is what keeps a mistake from becoming a cascade.

We tend to plan as if our assumptions will mostly hold. The more useful question is: what’s the failure mode? If the information was wrong, if the AI output was subtly off, if the situation changes faster than anticipated — what breaks, and does it break in a way we can learn from?
The pattern visible in compounding infrastructure failures — where a power event meets a weather system meets a disrupted logistics network — has a direct organizational analog. It’s not any single failure. It’s the brittleness that emerges when interdependent systems have no slack, no fallbacks, and no practiced recovery.
In practice: run short pre-mortems before major decisions. For AI-assisted work, ask: what’s the cost of this being wrong, and how would we know quickly? Build fallbacks for critical workflows before you need them.
5. Agency: How do humans stay meaningfully in control?
Agency is the human layer. And it’s the one most easily eroded by systems that are nominally designed to help.

If people can’t understand, challenge, or refuse a decision, then “trust” is a decorative word wrapped around coercion. That holds whether the authority doing the constraining is an algorithm, an executive, or a media environment shaped by regulatory pressure rather than editorial judgment.
Preserving human agency doesn’t require distrust of technology. It means treating AI as a thinking partner — a tool for stress-testing ideas, surfacing alternatives, and expanding what a team can consider — rather than an oracle whose outputs are easier to accept than to interrogate.
In practice: preserve the right to pause, appeal, and escalate. Avoid automation-by-default for high-stakes decisions. Be explicit about which decisions humans must own, and protect the time and psychological safety to exercise that ownership.
What Leaders Can Do This Week

The Trust Stack is a framework, not a prescription. You don’t implement all of it at once. Here’s where to start — pick two:
Add source + confidence level to one internal update this week.
Try something like: “Source: Q4 customer survey, n=312, conducted February 2026. Confidence: high on the directional trend, medium on the magnitude.” It takes fifteen seconds to write. It changes the conversation meaningfully — it invites scrutiny rather than passive acceptance, and it signals that “we don’t know yet” is a legitimate and valued contribution.
Write down the three assumptions your current plan depends on — and the early signals that each one is failing.
Not “the plan falls apart.” The leading indicator, three steps before that. What would you see in week two that would tell you assumption one is shaky? If you can’t name it, the assumption is doing more work than you’re accounting for.
In your next AI-assisted decision, separate the recommendation from the reasoning. Ask: what’s the AI claiming, and what would I need to believe about the underlying data for that claim to hold? The answer is often clarifying.
Name an explicit owner for one high-impact in-flight decision. Not the team. One person. If the answer is genuinely unclear, that is the work to do first.

Short checklist (copy it, use it):
▢ Source every high-stakes claim
▢ Confidence-level every significant update: high / medium / low
▢ Name decision owners explicitly — separate “recommended” from “decided”
▢ Pre-mortem major decisions before they’re final
▢ Preserve the human right to pause, appeal, and escalate
The Moral High Ground, Without a Halo
There’s a version of “trust” that is branding. There’s another version that is moralizing — where the point is to signal that you’re one of the good ones, and the actual mechanics stay vague because vagueness is comfortable.
I’m not interested in either.
The moral high ground here is unglamorous: tell the truth as best you can. Don’t manufacture certainty you don’t have. Don’t outsource human judgment to systems — automated or otherwise — that can’t be held accountable. Make responsible behavior structurally easier than irresponsible behavior.

This is ethically cleaner. It is also, not incidentally, strategically stronger.
Trust is not a communications strategy. It’s a systems design constraint. Organizations that treat it as such — not because it sounds good, but because they’ve embedded it into how they decide, communicate, and act — will maintain their footing when the ground keeps shifting.
And the ground is going to keep shifting.
A Closing Question
What’s one reality-maintenance habit you could add to your decision-making process this quarter? Not a policy. Not a principle. A habit — something you could actually do next Tuesday.
That’s where it starts.
Kate O’Neill is a Thinkers50-recognized strategist, author, and keynote speaker. She is the founder and CEO of KO Insights and the author of What Matters Next and five other books on technology, humanity, and the future of decision-making.

