Vibes All the Way Down: Dmitry Grozoubinski discusses tariffs and more on The Tech Humanist Show
Dmitry Grozoubinski has spent his career making trade legible to people who aren’t trade experts. When I asked him what people are experiencing as they try to follow the tariff news — 90-day pauses, court rulings, retaliations, reversals — he said it was more unsettling than just complicated policy.
He called it vibes-based.
“It’s not just that the rules are falling away,” he told me on The Tech Humanist Show. “If it were just the rules breaking down but everyone was operating in calculated, predictable ways, you could work with that. The problem with vibes-based trade policy is you can’t fit it into any predictive matrix.”
He gave an example: a U.S. president threatening to significantly disrupt North American trade over a television commercial made by a Canadian regional government that quoted Ronald Reagan. There’s no strategic framework that gets you to that outcome. And that’s the point — when policy is driven by vibes rather than calculation, even rational worst-case planning breaks down.
I told him I thought we were living by vibes as much as we were governing by vibes. He agreed: very much.
That’s the part of this conversation I keep coming back to.

The black box problem in trade and tech
One thing Dmitry is precise about is why certain lies about trade land so well. In everyday human experience, trade, like the AI infrastructure underlying a ChatGPT query, is a black-box phenomenon. Most people interact with both at the endpoint — the box arrives at the door, the answer arrives on the screen — without any exposure to the layers of infrastructure that made it possible.
“Where it becomes dangerous,” he said, “is when someone comes in and says there’s a very easy fix to this.” When the visible part of a system is the only part most people ever see, the lies that exploit that invisibility are especially effective.
This is also, notably, how we built the fragility in the first place. Nobody built extra shipping containers before COVID because there was no incentive to. Nobody funded redundancy in the systems that kept the internet running because the systems were working. Optimization was the logic, and optimization traded resilience away without ever naming what it was trading.
The part of this that should concern anyone running an organization
Here’s what keeps Dmitry up at night, and it should probably concern more people than it currently does.
The trade disruption of the last decade — the so-called China shock — hit manufacturing workers hardest. Foreign competition combined with automation. Factories needed fewer workers to make things, and suddenly those workers were competing with factories around the world. The political backlash was enormous.
That same pressure is now arriving for the services sector. Accountants, HR professionals, artists, radiologists — anyone whose work can be digitized and moved across borders or handed to an AI is now in the same position manufacturing workers were in. “We have seen the scale of the backlash to the China shock,” Dmitry said. “What happens when governments start reaching for a broader set of tools to try to fight back against job losses in the much, much larger services sector?”
And here’s where it gets concrete for organizations that think the tech layer insulates them from trade exposure: it doesn’t. Dmitry’s example was business process outsourcing — a company routing its payroll function to a qualified firm in India. Entirely legal, efficient, common. “All a government would have to do is make it illegal to move personnel financial data overseas,” he said. “They can do that with a stroke of a pen.” One data residency rule, and the entire BPO model — and the AI tools built on top of it — is suddenly a legal liability.
Boards weren’t tracking this a decade ago. Whether they are now is a key question.
If we want legibility, here’s what it requires
Dmitry’s work is fundamentally about closing the gap between what policy does and what people are told it does. That gap isn’t accidental — it’s maintained by institutions that found it easier to communicate with experts and insiders than with the public, and by politicians who found it easier to promise painless choices than to explain real trade-offs.
He’s genuinely hopeful that the current moment is closing some of that gap. “Improved public understanding,” he said, “whether through clear communication or — as in this case — having to see it. Having to feel it. Having to watch prices rise without seeing factories come back. All of those things take away the ability to do costless vibes-based campaigning.”
That’s not cynicism dressed up as optimism. It’s the specific mechanism by which an informed public actually changes outcomes: not by agreeing with experts, but by becoming harder to mislead.
There’s a parallel in technology that I’ve been tracing across my work for years. The AI decisions that tend to go most wrong aren’t necessarily made in bad faith — they’re made by people who thought they could conceal the trade-offs, who promised optimization without cost, who assumed the invisible layers would keep working. The discipline Dmitry is calling for in trade is the same discipline that makes AI governance actually function: clarity about what’s being chosen, and what’s being given up.
What leaders should actually do with this
Stop waiting for the environment to stabilize before making decisions. It’s not going to stabilize. The vibes-based dynamic Dmitry describes is structural, not temporary.
Map your services-sector dependencies against the data localization risk. If any part of your operations involves cross-border data flows — payroll, legal, HR analytics, creative production — understand what a data residency regulation would actually break. Not to move immediately, but to know.
Treat geopolitical risk as a board-level topic. Dmitry noted that boards at most companies weren’t asking about geopolitical risk a decade ago. “Maybe they will now.” This is the window where adding that capacity looks strategic rather than reactive.
Build some institutional memory around what actually happened during the tariff experiment. Public understanding of how tariffs worked — who paid them, what happened to prices, what didn’t come back — is actually high right now. That clarity has a shelf life. Capture it while the specifics are still visible.
Checklist:
- Map cross-border data flows against potential data residency regulation scenarios
- Add geopolitical risk to board or leadership team agenda as a standing item
- Audit which vendor relationships are exposed to services-sector trade disruption
- Document your organization’s experience of tariff-era cost increases while the specifics are still clear
- Separate your resilience strategy from your redundancy strategy in planning language
What Dmitry said that stayed with me longest wasn’t about trade at all. It was about how we evaluate leaders: “The most important thing to understand is how they make choices. What are their values. What’s their decision-making matrix. When you lie about the choices involved, you take away the ability of the public to understand who you are as a leader.”
That applies well beyond trade. And it’s a harder standard to meet than it sounds.
What’s the trade-off in a decision you’re facing right now that you haven’t named out loud yet?
Dmitry Grozoubinski is the author of Why Politicians Lie About Trade (2024) and the founder of Explain Trade. Find him at explaintrade.com and on Bluesky @explaintrade.com. This post is drawn from our conversation on The Tech Humanist Show — full episode at https://www.thetechhumanist.com/2026/04/16/the-truth-about-trade-with-dmitry-grozoubinski/.
Kate O’Neill is a Thinkers50-recognized strategist, author, and keynote speaker. She is the founder and CEO of KO Insights, the host of The Tech Humanist Show, and the author of What Matters Next and five other books on technology, humanity, and the future of decision-making.