
Culture & Humanity

When Does Presence Become Permission? Making Responsible Technology Decisions in Real Time

I deleted my X account this weekend.

This post is not about that decision. But it’s not not about that decision either.

It’s about a pattern every leader faces right now: How do you make responsible technology decisions when the evidence is mounting, the harm is accelerating, and the cost of waiting becomes its own choice?

My X account became a test case for a question I’ve been helping organizations answer for years: At what point does your continued participation in a technology system—even passive, even silent—shift from strategic positioning to tacit enablement?

This is the question underneath the question. Not “Should I leave this platform?” but rather: “What framework helps me recognize when a technology decision I thought was neutral has become complicit?”

The Grok crisis—where xAI’s image generation tool was used to create 6,000 sexualized deepfakes per hour at peak, including AI-generated child sexual abuse material—is the immediate catalyst. But it’s also the latest data point in a much larger pattern that every technology leader needs to understand.

Because the same dynamics playing out on X are playing out in boardrooms, product teams, and strategic planning sessions everywhere:

  • Technology capabilities advancing faster than governance structures
  • Predictable harms following predictable choices
  • The gap between “we should address this” and “we must address this now” compressing to zero
  • Leaders waiting for perfect information while the cost of inaction compounds

This post is about how to recognize that compression, read those signals, and make decisions that align your actions with your values—before the market, regulators, or your own conscience forces your hand.


The Case Study: Grok and the Anatomy of Predictable Failure

Let’s start with what happened, because the timeline reveals something essential about how responsible technology decisions get deferred until they become crises.

November 2024: xAI laid off its trust and safety team.

Early January 2025: Grok’s image generation tools were being used to create non-consensual sexualized imagery at scale—as many as 6,000 requests per hour at peak.

Nine days: That’s how long it took for the platform to respond substantively, during which time countless women and children were victimized.

Even after restrictions: The separate Grok app continued allowing non-paying users to generate harmful imagery.

This wasn’t a bug. This was the predictable outcome of deliberate choices:

  • Eliminate the team responsible for preventing harm
  • Deploy powerful capability without adequate safeguards
  • Wait for harm to manifest before responding
  • Respond incrementally, incompletely, reactively

Here’s the pattern leaders need to recognize: Every step in this sequence was a decision. Every gap was a choice. And collectively, they created a system where harm wasn’t just possible—it was inevitable.

The question for leaders isn’t “How could they let this happen?” It’s “What version of this pattern exists in my organization, and how do I recognize it before harm scales?”

[Image: a fun moment on Twitter, back in the day]

The Data Behind the Decision: What Leaders Need to Know

If you’re making technology decisions based on anecdote, intuition, or projected timelines from 2023, you’re operating with insufficient information. The evidence falls into three areas:

  • Scale and velocity
  • Detection and capability gap
  • Economic and regulatory impact

Consider the asymmetry: Technology capability is advancing exponentially. Governance structures are advancing linearly. The gap between them isn’t closing—it’s widening.

That gap is where harm scales—predictably, repeatedly, increasingly.


A Framework for Responsible Decisions: Recognizing When “Neutral” Becomes “Complicit”

This is where my X decision becomes useful as a case study, because it forced me to apply the same framework I use with clients to my own choices.

The Questions That Reveal the Pattern:

1. What are the vectors of change, and where do they point?

Don’t track where you hope the technology will go, or where marketing materials claim it’s heading. Instead, track where the actual evidence points—the decisions being made, the safeguards being removed, the harms being tolerated.

For X, the vectors were clear:

  • Ownership ideology rejecting content moderation as “censorship”
  • Business model profiting from engagement regardless of harm
  • Trust & safety infrastructure systematically dismantled
  • Technological capability advancing without corresponding governance
  • Victim populations (women, children) with limited recourse

Those vectors lead to more harm, faster harm, harm at greater scale with less accountability.

For your technology decisions: What are the vectors in your AI deployment, your data practices, your platform partnerships, your product roadmap? Follow them forward honestly.

2. What am I enabling by my continued participation?

This is where “neutral” becomes impossible.

Every dormant account contributes to user metrics. Every partnership agreement signals viability. Every technology adoption without governance signals that speed trumps safety. Every “we’ll address that later” compounds into systemic risk.

The question isn’t whether you intend to enable harm. It’s whether the system you’re participating in creates predictable harm regardless of your intent—and whether your participation makes that system more viable.

For your organization: What systems are you enabling through adoption, partnership, or passive acceptance? What would happen if you applied the same scrutiny to your internal technology choices that you’re now applying to external platforms?

3. What does this system need to demonstrate before I’ll return/adopt/deploy?

This is the governance question disguised as a loyalty question.

For platforms: What evidence of rebuilt trust & safety infrastructure? What commitment to proactive rather than reactive moderation? What accountability mechanisms for when harm occurs?

For internal technology: What auditable safeguards? What cross-functional oversight? What clear escalation protocols when systems produce harmful outputs?

What this requires: Write down your criteria before the crisis. Because in the moment, pressure to move fast, capture value, or avoid seeming obstructionist will blur your boundaries.

4. What responsibility do I have to the most vulnerable users when systems fail?

Women and girls bear 96-98% of deepfake harm. AI-generated CSAM increased 1,325%. The Internet Watch Foundation says it is overwhelmed by the volume of reports.

When platforms fail to protect the most vulnerable, what is the responsibility of:

  • Individuals who remain on the platform?
  • Organizations that partner with the platform?
  • Leaders who deploy similar technologies internally?
  • Boards who oversee technology strategy?

If you’re waiting for perfect information, definitive proof, or regulatory mandate before acting, you’ve already chosen. You’ve chosen to prioritize convenience, momentum, or strategic positioning over protection of those who will be harmed first and most severely.


From Platform Decisions to Product Decisions: The Transferable Pattern

The Grok crisis isn’t just about X. It’s a preview of what happens when:

Capability deployment outpaces governance readiness

  • AI tools released without adequate safeguards
  • Features shipped before red team testing is complete
  • “Move fast” trumping “protect users”

Economic incentives are misaligned with safety outcomes

  • Engagement metrics rewarding harmful content
  • Cost-cutting eliminating safety infrastructure
  • Revenue goals overriding risk assessment

Accountability mechanisms are fragmented or absent

  • No clear ownership of AI risk
  • Cross-functional oversight groups never established
  • Incident response protocols undefined until after incidents occur

Vulnerable populations bear asymmetric harm

  • Product decisions made without representation from affected communities
  • Safety features designed for majority users, failing edge cases
  • Harm concentrated among those with least recourse

This pattern is playing out in organizations everywhere—in AI adoption decisions, data practices, algorithmic systems, and product launches.

The X/Grok case study matters because it’s illustrative, not exceptional. It shows what happens when these dynamics go unchecked.


What Leaders Can Do: A Governance Framework for Responsible Technology Decisions

1. Map Your Risk Profile with Radical Honesty

Inventory your technology systems and ask:

  • Where are we deploying AI without adequate governance?
  • Which partnerships or platform dependencies create liability exposure?
  • What would our own “Grok crisis” look like, and how would we know if we’re heading toward it?
  • Who is most vulnerable if our systems fail, and do they have voice in our decision-making?

2. Assign Clear Accountability

Establish cross-functional oversight teams with authority to:

  • Review and approve AI deployments before launch
  • Conduct ongoing risk assessment of existing systems
  • Escalate concerns without career penalty
  • Halt deployments when safeguards are insufficient

How this works: Someone must be able to say “no” or “not yet” and have that decision stick.

3. Prioritize Governance Over Speed

Speed can feel all-important, but effective, future-ready organizations implement auditable, explainable, and accountable AI governance that allows them to scale safely.

This means:

  • Establishing clear criteria for technology adoption before evaluating specific tools
  • Building in time for red team testing, bias assessment, and impact analysis
  • Creating feedback loops that surface harm signals early
  • Having the discipline to pause or reverse course when evidence warrants

4. Exercise Strategic Discretion About Partnerships

The burden of proof is shifting. Platforms and vendors must demonstrate they can protect users and maintain integrity before earning trust, not after losing it.

When a technology partner draws coordinated international regulatory action, it has become a liability to the organizations associated with it. Strategic withdrawal from systems that create predictable harm is increasingly recognized as prudent risk management, not political positioning.

The key questions:

  • What governance standards do our partners maintain?
  • What happens to our data and our users’ data if their systems fail?
  • What reputational and regulatory risk do we inherit through association?
  • What alternatives exist that better align with our values and risk tolerance?

5. Build “Reverse the Ratchet” Capability

Technology decisions often feel one-way: once a system is adopted, momentum carries it forward even when the evidence suggests it should be reconsidered.

Build organizational capability to:

  • Revisit technology decisions periodically with fresh eyes
  • Sunset or swap out tools that no longer meet governance standards
  • Communicate changes to stakeholders without defensiveness
  • Learn from near-misses and course-correct before harm scales

The core idea: Your past technology decisions don’t have to determine your future ones. The discipline is recognizing when new evidence warrants new choices.


The Bigger Question: Where Do We Go From Here?

We’re entering a period of intensifying global scrutiny on AI safety, platform accountability, and the protection of vulnerable populations online. The Grok crisis is one chapter in a much longer story.

The question for leaders is: Will you wait for regulation to catch up, or will you exercise the agency you have now to make decisions that align with your values before the market or regulators force your hand?

Here’s what I know from two decades of helping organizations navigate technology decisions:

The cost of waiting compounds. By the time harm is undeniable, your options narrow. Regulatory penalties, reputational damage, operational disruption, and trust erosion don’t wait for perfect information.

The vectors are already visible. You may not know exactly what your “Grok moment” will look like, but you can see the conditions that make it likely. Act on what you can see now.

Governance isn’t the enemy of innovation. The organizations that will thrive aren’t the ones that move fastest; they’re the ones that build governance structures allowing them to scale safely, maintain trust, and avoid the catastrophic failures that force expensive reversals.

You don’t need everyone to agree. This isn’t about consensus or purity politics. It’s about reading signals honestly, understanding technology trajectories, and making strategic decisions that serve long-term organizational interests and human well-being.


What This Means

Make conscious choices. Make them based on evidence. Make them before crisis forces them. And make them knowing that in systems designed to maximize engagement or profit at any cost, there is no neutral position.

Your continued participation is a choice. Your technology adoption is a choice. Your platform partnerships are choices. Your governance priorities—or lack thereof—are choices.

The technology will keep advancing. The harm will keep scaling. And each of us—as individuals, as leaders, as organizations—will keep deciding where we draw the line.

Deleting my X account was a small decision in the grand scheme of things. But it forced me to apply the framework I use professionally to my own life. And when I did, the answer was clear.

The question I’m asking you to consider: What decisions are you deferring, and what might become clear if you applied the same framework to them?

The evidence is mounting. The trajectory is visible. And the window for proactive decision-making is narrowing.

What will you choose?


Kate O’Neill is the “Tech Humanist,” founder and CEO of KO Insights, and author of “What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast” (Thinkers50 2025 Best New Management Booklist). She advises Fortune 500 companies, the United Nations, and major tech firms on responsible AI implementation and strategic technology decision-making.