
Palantir’s Palestine: How AI Gods Are Building Our Extinction

“Zionism is evil,” she says with the quiet certainty of someone who has spent a lifetime studying its fruits. “It is purely evil. It has created disasters, misery, atrocities, wars, aggression, unhappiness, insecurity for millions of Palestinians and Arabs. This ideology has no place whatsoever in a just world. None.”

The machines are not coming for us. They are already here. And the men who control them have made their intentions terrifyingly clear.

BettBeat Media, Dec 26, 2025


There is a moment in every civilization’s collapse when the instruments of its destruction become visible to those paying attention. We are living in that moment now. But the warning signs are not carved in stone or written in prophecy—they are embedded in source code, amplified by algorithms, and funded by men who speak openly of human extinction while racing to cause it.

In a nondescript office in Palo Alto, a man who claims to fear fascism has become its most sophisticated architect. In a sprawling Texas compound, another man who styles himself a free speech absolutist uses his platform to amplify the voices calling for ethnic cleansing. And in the bombed-out hospitals of Gaza, their technologies converge in a laboratory of horrors that prefigures what awaits us all.

The four horsemen of this apocalypse do not ride horses. They deploy algorithms.

The Confession

Professor Stuart Russell has spent fifty years studying artificial intelligence. He wrote the textbook from which nearly every AI CEO in Silicon Valley learned their craft. And now he works eighty hours a week not to advance the field he helped create, but to prevent it from annihilating the species.

“They are playing Russian roulette with every human being on Earth,” Russell said in a recent interview, his voice carrying the weight of someone who has seen the calculations and understood their implications. “Without our permission. They’re coming into our houses, putting a gun to the head of our children, pulling the trigger, and saying, ‘Well, you know, possibly everyone will die. Oops. But possibly we’ll get incredibly rich.’”

This is not hyperbole from an outsider. This is the assessment of a man whose students now run the companies building these systems. And here is what should terrify you: the CEOs themselves agree with him.

Dario Amodei, CEO of Anthropic, estimates a 25% chance of human extinction from AI. Elon Musk puts it at 20-30%. Sam Altman, before becoming CEO of OpenAI, declared that creating superhuman intelligence is “the biggest risk to human existence that there is.” …

“These bombs are cheaper and you don’t want to waste expensive bombs on unimportant people”

… The [Palantir] company’s software now powers what Israeli soldiers describe with chilling bureaucratic efficiency: “I would invest 20 seconds for each target and do dozens of them a day. I had zero added value as a human. Apart from being a stamp of approval.”

Twenty seconds. That is the value of a Palestinian life in the algorithmic calculus of Alex Karp’s creation. The machine decides who dies. The human merely clicks.

When whistleblowers revealed that Israeli intelligence officers were using “dumb bombs”—unguided munitions with no precision capability—on targets identified by Palantir’s AI, their justification was purely economic: “These bombs are cheaper and you don’t want to waste expensive bombs on unimportant people.”

Unimportant people. Children. Doctors. Journalists. Poets.

Karp has admitted, in a moment of rare candor: “I have asked myself if I were younger, at college, would I be protesting me?”

He knows the answer. We all know the answer. He simply does not care.

… Musk is the CEO of xAI, OpenAI’s largest competitor. He has declared himself a 30% believer in human extinction from AI. And he is using the world’s most influential social media platform to promote the political movements most likely to strip away the regulations that might prevent that extinction.

The fascists have captured the algorithm.

The Laboratory of the Future

Dr. Ghada Karmi was a child in 1948 when she lost her homeland. She remembers enough to know that she lost her world. For seventy-seven years, she has watched as the mechanisms of Palestinian erasure evolved from rifles and bulldozers to algorithms and autonomous weapons systems.

“Zionism is evil,” she says with the quiet certainty of someone who has spent a lifetime studying its fruits. “It is purely evil. It has created disasters, misery, atrocities, wars, aggression, unhappiness, insecurity for millions of Palestinians and Arabs. This ideology has no place whatsoever in a just world. None. It has to go. It has to end. And it has to be removed. Even its memory has to go.”

But Zionism, in its current iteration, is not merely an ideology. It is a business model. It is a technology demonstration. It is the beta test for systems that will eventually be deployed everywhere.

The Israeli military’s Project Lavender uses AI to identify targets for assassination. Soldiers describe processing “dozens of them a day” with “zero added value as a human.” The algorithm marks. The human clicks. The bomb falls.

This is not a war. It is a sick, twisted video game.

Palantir’s technology identifies the targets. Musk’s Starlink provides the communications. American military contractors supply the weapons. And the entire apparatus is funded by governments whose citizens have marched in the millions demanding it stop.

“The genocide has not provoked a change in the official attitude,” Dr. Karmi observes. “I’m astonished by this and it needs an explanation.”

The explanation is simpler and more terrifying than any conspiracy. The explanation is that the people who control these technologies have decided that some lives are worth twenty seconds of consideration and others are worth none at all. And the governments that might regulate them have been captured by men waving fifty billion dollar checks.

“They dangle fifty billion dollar checks in front of the governments,” Professor Russell explains. “On the other side, you’ve got very well-meaning, brilliant scientists like Geoff Hinton saying, actually, no, this is the end of the human race. But Geoff doesn’t have a fifty billion dollar check.”

The King Midas Problem

Russell invokes the legend of King Midas to explain the trap we have built for ourselves. Midas got exactly what he wished for, and it destroyed him; a machine that relentlessly pursues the objective we specify, rather than the one we actually intend, would do the same. …

The CEOs know this. They have signed statements acknowledging it. They estimate the odds of catastrophe at one in four, one in three, and they continue anyway.

Why?

Because the economic value of AGI—artificial general intelligence—has been estimated at fifteen quadrillion dollars. This sum acts, in Russell’s metaphor, as “a giant magnet in the future.” …

“The people developing the AI systems,” Russell observes, “they don’t even understand how the AI systems work. So their 25% chance of extinction is just a seat-of-the-pants guess. They actually have no idea.”

No idea. But they’re spending a trillion dollars anyway. Because the magnet is too strong. Because the incentives are too powerful. Because they have convinced themselves that someone else will figure out the safety problem. Eventually. Probably. Maybe.

What Now?

If everything goes right—if somehow we solve the control problem, if somehow we prevent extinction, if somehow we navigate the transition to artificial general intelligence without destroying ourselves—what then? …

The Enablers

Dr. Karmi returns again and again to a simple question: Why?

“Why should a state that was invented, with an invented population, have become so important that we can’t live without it?” she asks of Israel. But the question applies equally to Silicon Valley, to the tech platforms, to the entire apparatus of algorithmic control that now shapes our politics, our perceptions, our possibilities.

The answer, she suggests, lies in understanding the enablers.

“I think it’s absolutely crucial now to focus on the enablers,” she argues. “Because we can go on and on giving examples of Israeli brutality, of the atrocities, of the cruelties. That’s not for me the point. The point is who is allowing this to happen?”

The same question must be asked of AI. Who is allowing this to happen? Who is funding the companies that acknowledge a 25% chance of human extinction and continue anyway? Who is providing the regulatory vacuum in which these technologies develop unchecked? Who is amplifying the voices calling for acceleration while silencing those calling for caution?

The answer is the same class of people who have enabled every catastrophe of the modern era: the comfortable, the compliant, the compromised. The politicians who take the fifty billion dollar checks. The journalists who amplify the preferred narratives. The citizens who scroll past the warnings because they are too busy, too distracted, too convinced that someone else will handle it.

“All the polls that have been done say most people, 80% maybe, don’t want there to be super intelligent machines,” Russell notes. “But they don’t know what to do.”

They don’t know what to do. So they do nothing. And the machines keep learning. And the algorithms keep shaping. And the billionaires keep abusing. And the bombs keep falling. And the future keeps narrowing.

The Resistance

Russell’s advice is almost quaint in its simplicity: “Talk to your representative, your MP, your congressperson. Because I think the policymakers need to hear from people. The only voices they’re hearing right now are the tech companies and their fifty billion dollar checks.”

… The point is not that resistance will succeed. The point is that resistance is the only thing that might succeed.

… We still have a choice. The machines are not yet smarter than us. The algorithms are not yet in complete control. The billionaires are not yet omnipotent.

But the window is closing. The event horizon may already be behind us. And the men who control the most powerful technologies in human history have made their values abundantly clear.

They will pursue profit over safety. They will amplify hatred over tolerance. They will choose rape over romance. They will enable genocide if the margins are favorable. They will risk extinction if the upside is sufficient.

This is not speculation. This is the record. This is what they are doing, right now, in plain sight.

The question is not whether we understand the danger. The question is what we will do about it. … https://bettbeat.substack.com/p/palantirs-palestine-how-ai-gods-are

December 29, 2025 | Religion and ethics, technology
