
UN Calls for AI Red Lines in Global Governance: Why Maria Angelita Ressa’s Warning Demands Immediate Action


During an already newsworthy second day of the United Nations General Assembly week, journalist, author, and Nobel Peace Prize laureate Maria Angelita Ressa took the stage to call for clear AI “red lines.” Within hours, questions from media outlets started to come my way: Are these boundaries realistic? Who would enforce them, and how soon? This piece takes those questions seriously. Drawing on my work with the UN, with Big Tech companies, and across a wide variety of industries that rely on AI, I’m laying out what these red lines mean in practice: where human judgment must stay in the loop, what accountability looks like beyond rhetoric, and how we translate intent into implementable policy.

Recognize Existential Threats

The “red lines” of AI extend far beyond the conventional governance concerns, such as bias and disclosure, that dominate many public discussions. These are important, yes. But those are the speed bumps. The cliff edge is up ahead. Europe has already codified this distinction: the EU AI Act applies proportionate obligations by risk tier and brings general-purpose AI into scope, signaling that emergent, higher-impact capabilities warrant tighter controls. The truly pressing issues lie in existential threats that could fundamentally alter our relationship with technology and potentially threaten human autonomy. Research has documented concerning instances of AI systems resisting shutdown commands, and even though these are generative models with no real understanding of their output, the trend suggests an emergent tendency toward learned self-preservation that warrants serious examination. The point isn’t panic; it’s design clarity about who stays in the loop, and why.

Additionally, the rapid deployment of AI technologies in warfare represents another critical red line, as billions are being invested in autonomous weapons systems that operate with minimal human oversight and outside public scrutiny. Yet even here, concrete precedents for protection already exist: the U.S.-led Political Declaration on Responsible Military Use of AI and Autonomy outlines ten measures for human control and accountability, while recent analyses document deployments of autonomous systems in Libya and ongoing UN and EU debates on governing military AI.

Other alarming developments include AI’s potential for self-replication, AI-on-AI infiltration tactics, and the possibility of systems that could independently manipulate or control critical infrastructure. These existential concerns demand immediate and thoughtful attention from global governance bodies before technological development outpaces our ability to establish meaningful safeguards.

Embrace Strategic Optimism

One of the questions I’m asked most often is, “Isn’t it too late for all of this?” Has the ship sailed? Will tech companies, some with trillion-dollar valuations greater than the GDP of many nations, bow to regulation that might compromise their business models?

It’s a fair question. But we must act as if preventive guardrails can still work, because acting “as if” is how they start working, and because if we don’t, no reactive regulation will ever catch up; the only guardrails we’ll get will be on the apology tour.

Precaution isn’t a brake. It’s traction.

In this sort of policy discussion, we don’t have to limit what we call for to what is immediately practical or achievable. While the UN provides a valuable platform for idealistic visions of the future, it serves as an even more effective forum for Strategic Optimism. Think of it as optimism with receipts. This approach differs fundamentally from mere idealism: it acknowledges and names the real harms and risks associated with AI, then lays out concrete plans to prevent them. At the same time, it articulates a hopeful vision, identifies the outcomes we want, and maps actionable steps toward those integrated aims.

After all, accountability is how optimism grows up.

It’s not enough to deal only in risks or only in hope. Worthwhile policy, like worthwhile strategy, needs to be both achievable and aspirational. The world will inevitably water down ambitious proposals to make them more palatable, which is why we must begin with what we genuinely hope for, and maybe a little on top of that. In this way, even the compromises stand a chance of retaining the essence of our original vision.

Collaborate on AI Red Lines Policy Development

At the UN, terms like “multilateral” and “stakeholders” are thrown around a lot. While language like this may seem technical or arcane, its intended purpose is inclusivity. It implies that everyone affected should have a seat at the table when making decisions that will shape not only our lives but also the world that future generations will inherit.

Ambition belongs on global stages like the UN, but teeth come from member states and regional regulators—one reason the EU’s risk‑based regime matters as a template for market monitoring, conformity assessments, and penalties.

Effective AI governance will emerge from collaborative efforts among member states and international bodies. Leaders must advocate for policies that not only address immediate harms but also chart a course toward the future we actually want from these technologies. Engaging with global forums can amplify those calls into comprehensive, enforceable strategies that safeguard against AI’s most serious risks.

The future we can trust is the one we’re willing to write down and commit to together.

Let’s draw the lines so that our progress knows where to aim.


For ongoing foresight on AI and governance, start here → https://www.koinsights.com/strategic-foresights-2025-dont-call-them-trends-if-theyre-not-already-happening/

For leaders ready to translate this into action, explore KO Insights advisory offerings → https://www.koinsights.com/advisory/