Why We Can’t Leave Meaning Up to Machines: Watch My TEDxWaldenPond Talk

The most important question about AI isn’t whether it works—it’s whether it understands what matters. And whether we do.
I’m excited to share my TEDxWaldenPond talk, “We Cannot Leave Meaning Up to Machines,” now live on YouTube. In this 13-minute talk, I explore the critical gap between what AI can do and what it can truly comprehend—and why that distinction matters more than ever.
The Problem With AI’s “Understanding”
At seven years old, I sat on a library floor holding a book, the tension between the English word “book” and the Spanish word “libro” tugging at my mind. In that moment, I realized something profound: the thing itself exists apart from the word. Language is a tool we use to point at reality—but it’s not reality itself.

AI doesn’t know this. It can’t. Because AI has been trained on our words, not our lived experiences.
What’s actually happening? AI tools are regurgitating patterns of what humans have said to one another. When we unquestioningly accept their output, we confuse linguistic likelihood with lived experience—and that confusion has consequences.
Why This Matters Right Now
We’re at an inflection point. Organizations everywhere are implementing AI systems to make decisions about hiring, healthcare, education, and justice. But if we’re not careful about the difference between pattern recognition and genuine understanding, we risk building systems that optimize for the wrong things.
The big question at the heart of the AI-human dynamic: How can we prevent AI from faking understanding so convincingly that we forget to ask what’s real?
What to Do When Reading AI-Generated Content
In the talk, I share a practical framework for engaging with AI-generated text—questions to ask that help you distinguish between statistical probability and meaningful insight.
Why You Should Watch (and Share) This Talk
This isn’t just about understanding AI better. It’s about protecting what makes us human in an age of automation. It’s about making better decisions when technology is moving faster than our ability to fully grasp its implications.
✨ Watch the full talk on YouTube: ✨
➡️ https://youtu.be/7fHjbqWRL4E?si=MDC6mjIZWHMVcDPE
If the message resonates with you:
- Like the video to help it reach more people
- Comment with your thoughts—engagement helps with YouTube’s algorithm, and I’d genuinely love to hear how you’re thinking about AI and meaning
- Share it widely with leaders, decision-makers, and anyone grappling with AI implementation
The conversation about AI and humanity isn’t optional anymore. It’s urgent. And it requires all of us.
About the Talk
“We Cannot Leave Meaning Up to Machines” was delivered at TEDxWaldenPond in Lincoln, Massachusetts, on October 30, 2025. The event’s theme was “Connecting Worlds”—and this talk bridges the gap between technological capability and human experience, between what AI can compute and what it means to be meaningfully human.
Kate O’Neill, known globally as “the Tech Humanist,” is a strategic advisor who helps organizations make better technology decisions that prioritize human experience. Her latest book is What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast.