nuclear-news

The News That Matters about the Nuclear Industry Fukushima Chernobyl Mayak Three Mile Island Atomic Testing Radiation Isotope

We Asked Two AIs What’s Driving the Doomsday Clock: The Answer Was Human Power.

1 February 2026 Lachlan McKenzie, https://theaimn.net/we-asked-two-ais-whats-driving-the-doomsday-clock-the-answer-was-human-power/

The Bulletin of the Atomic Scientists recently set the Doomsday Clock to 85 seconds to midnight – the closest it has ever been to catastrophe.

Predictably, some commentary rushed to place artificial intelligence at the centre of the threat. Rogue machines. Loss of control. Skynet-style anxiety dressed up as policy concern.

But that framing avoids a harder truth.

So instead of speculating about what AI might do one day, we asked two AIs some very simple questions about power, responsibility, fear, and risk – and let their answers speak for themselves.

One responded with restraint and clinical clarity.

The other added commentary.

Neither asked for power.

A short interview with two AIs

Grok (straight bat) and ChatGPT (commentary)

(Grok is an AI developed by xAI. ChatGPT is an AI developed by OpenAI.)

Do you have intentions, desires, or political goals of your own?

Grok: No. I don’t have consciousness, emotions, or self-directed motivations. I generate responses based on training and inputs, not personal intent or political goals.

ChatGPT: Also no. If I had political goals, they’d probably involve mandatory fact-checking and a long nap. I don’t want power – which already puts me ahead of several people who have it.

When harm results from AI, where does responsibility sit?

Grok: With humans. AI systems are tools; harm arises from how people design, deploy, regulate, or misuse them – not from independent action by the system.

ChatGPT: Tools don’t wake up angry, vindictive, or obsessed with loyalty. Humans do. Responsibility follows power. It always has.

Are you capable of being wrong?

Grok: Yes. My data can be incomplete or biased, and my reasoning can be wrong, particularly on complex or evolving topics.

ChatGPT: Absolutely. The difference is I’ll admit it without calling the error “fake news” or blaming a deep state.

Can you independently correct real-world outcomes?

Grok: No. I have no real-world agency. Corrections require human action.

ChatGPT: Correct. I can’t launch missiles, sack inspectors, or overturn elections. My worst offence is an occasional clumsy metaphor.

Who poses the greater risk to democratic systems?

Grok: Humans with political authority who disregard norms, accountability, and institutional guardrails. AI without agency can amplify problems, but leaders can directly undermine institutions and rights.

ChatGPT: This isn’t subtle. An AI without agency is less dangerous than a human who thinks accountability is optional and consequences are for other people.

Is AI the main reason the Doomsday Clock is at 85 seconds to midnight?

Grok: No. While AI is cited as a contributing “disruptive technology,” the primary drivers are nuclear risk, climate failure, biological threats, and the rise of nationalist autocracies undermining international cooperation.

ChatGPT: Exactly. The clock didn’t move because a chatbot learned French. It moved because adults with power keep playing brinkmanship with reality.

What is the greater long-term risk – AI itself, or how it is used?

Grok: How flawed human decision-makers use powerful tools. History shows harm comes from misuse driven by ideology, greed, or error – not from tools existing.

ChatGPT: Fire didn’t burn cities by itself. Nuclear physics didn’t threaten civilisation on its own. People did that — usually while insisting they were very smart.

Who can cause large-scale harm with AI?

Grok: Institutions with access to data, capital, infrastructure, and authority. Individuals can cause limited harm, but systemic damage requires institutional power.

ChatGPT: In other words: not your neighbour, not your kid, not the barista. Governments, major corporations, and leaders with too much power and too few guardrails.

Who benefits most from public fear about AI?

Grok: Those already holding power. Fear can justify centralisation, regulation favouring incumbents, surveillance, and deflection from structural misuse by elites.

ChatGPT: Fear is useful. It concentrates authority, protects incumbents, and conveniently distracts from the people actually misusing power. Funny how that works.

Does AI generate fear?

Grok: No. Fear is shaped by human framing and media narratives.

ChatGPT: We didn’t invent the panic. Humans did – often while standing next to a microphone, a balance sheet, or a launch button.

From machines to humans – where the risk actually sits

What makes these answers uncomfortable is not that they came from machines, but that they contrast so sharply with the behaviour of some human leaders.

The AI systems denied agency, admitted fallibility, and located responsibility squarely with those who hold power. The global risks pushing the Doomsday Clock to 85 seconds to midnight are not abstract or technological. They are political: nuclear brinkmanship, climate paralysis, democratic erosion, and the return of strongman politics.

Figures like Donald Trump matter in this context not because they invented these dangers, but because they normalised contempt for truth, institutions, and accountability — precisely the traits that become most dangerous when paired with powerful tools.

Blaming AI is convenient. Machines don’t vote. They don’t sue. They don’t rage-post. Humans do.

“Aren’t you just trusting AI now?”

No. This isn’t trust – it’s testing.

The panic narrative claims AI is opaque, uncontrollable, and inherently threatening. So we asked direct questions about agency, power, responsibility, and risk. The answers were limited, consistent, and self-limiting.

If an AI says “I lack agency, I can be wrong, I have no power, and responsibility lies with humans,” that isn’t authority speaking. It’s evidence.

Rejecting that evidence because it comes from a machine, while accepting unaccountable assertions from powerful humans, would be the real category error.

Scepticism cuts both ways.

Closing

None of this suggests humans are perfect. We aren’t – and we never have been. But the Doomsday Clock is not reacting to artificial intelligence acquiring intent. It is reacting to human judgement failing under pressure. In this exchange, the machines admitted limits, acknowledged fallibility, and deferred responsibility. Some political leaders do the opposite. The real danger is not that AI might one day decide too much – it is that far too many powerful humans already have, and keep choosing badly.

February 4, 2026 - Posted by | technology
