Culture & Humanity

When Training Data Becomes Targeting Data

The ChatGPT Caricature Challenge has that familiar viral momentum. Upload a photo, get an AI-generated cartoon version back, share it with friends. Harmless fun, right?

We’ve been here before. And I’ve been warning about this for years.

In January 2019, the 10 Year Challenge swept through social media—millions posting side-by-side photos from 2009 and 2019. I raised questions about it, first on Twitter and then in WIRED, asking whether we were inadvertently training facial recognition algorithms to understand how faces age over time.

The response was immediate and global. The New York Times, NPR, The Atlantic, Slate, BBC, and dozens of other outlets picked up the story. Some people called it paranoid. Others shrugged it off as inevitable tech progress. The conversation wasn’t about assigning blame—it was about asking better questions before the technology outpaced our ability to govern it.

But here’s what matters: I saw the pattern. And now, seven years later, that pattern has intensified in ways that prove the concern wasn’t paranoia—it was pattern recognition.

From the 10 Year Challenge to the Caricature Challenge: What’s Changed

In 2019, the question was whether our data might be used to train facial recognition systems. In 2026, we know the answer is yes—and we’ve learned exactly what those systems are being used for.

The technology hasn’t just advanced. The context has shifted beneath our feet. Governments now routinely deploy facial recognition for tracking, targeting, and surveillance. The playful photos we share don’t just train neutral algorithms—they feed systems used to identify protesters, monitor marginalized communities, and enforce policies we may never see or consent to.

This is cross-scale thinking in action. What feels like an individual choice—sharing a fun AI-generated image—cascades into organizational data collection practices, which then enable societal-level surveillance infrastructure. The caricature isn’t the product. Your participation in the system is.

The harm isn’t in the technology itself. It’s in the asymmetry of knowledge and power.

Most people engaging with these challenges don’t know:

  • Where their image data ultimately lives
  • What secondary uses it might serve
  • Which governments or entities have access
  • How long it persists or how to revoke consent

Meanwhile, the organizations collecting this data know exactly what they’re building—and they’re building it at scale.

The reality is both/and, as it so often is: AI image generation is legitimately impressive and useful. These tools can enhance creativity, accessibility, and expression. And the same underlying technology enables unprecedented surveillance capabilities with minimal oversight.

The question isn’t whether we should halt all AI development. It’s whether we’re willing to design systems that respect the full humanity of the people whose data makes them possible.

What’s Different Now, and What We Must Demand

In 2019, we could still ask “Are we training their algorithms?” with some uncertainty. That uncertainty is gone. The new questions are:

Can we see the harms of action versus inaction clearly? The harm of action is participating in viral challenges with no transparency about how our data will be used. The harm of inaction is missing the chance to establish guardrails before these systems become further entrenched.

Are we designing for meaningful consent at scale? Real consent requires understanding. When participation is gamified and consequences are opaque, consent becomes theater.

Who bears the cost of our collective data donations? Not everyone faces equal risk. Surveillance technology disproportionately impacts already-vulnerable populations—activists, immigrants, religious minorities, people of color.

I remain optimistic about technology’s potential to enhance human experience. But optimism without accountability is just willful blindness.

In Tech Humanist, I wrote:

“After Amazon introduced real-time facial recognition services in late 2016, they began selling those services to law enforcement and government agencies, such as the Orlando Police Department and Washington County, Oregon. But the technology raises major privacy concerns; the police could use the technology not only to track people who are suspected of having committed crimes, but also people who are not committing crimes, such as protestors and others whom the police deem a nuisance. It’s probably not surprising to note that the American Civil Liberties Union (ACLU) asked Amazon to stop selling this service, but it may surprise you to hear that a portion of both Amazon’s shareholders and employees asked Amazon to stop, as well, citing concerns for the company’s valuation and reputation given the risk of misapplication and public outcry. The fullness of how technology stands to impact humanity is tough to overstate. The opportunity exists for us to make it better, but to do that, we also have to recognize some of the ways in which it can get worse. Once we understand the issues, it’s up to all of us to weigh in on how we want to see the future play out.”

The pattern is clear: viral engagement challenges are never just about the immediate output. They’re about building datasets, refining models, and normalizing data extraction as the cost of participation in digital life.

Before you upload that next photo, ask yourself: If this data were used to track someone you love, would you still consider it harmless?

That’s not paranoia. That’s pattern recognition.

And it’s time we started designing—and demanding—systems that see the full picture.