Why the Godfather of A.I. Fears What He’s Built
Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours.
The New Yorker, by Joshua Rothman, 13 November 2023
In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade. This process goes on all the time—it’s happening now, as you read these words—and its scale is beyond imagining. You have some eighty billion neurons sharing a hundred trillion connections or more. Your skull contains a galaxy’s worth of constellations, always shifting.
Geoffrey Hinton, the computer scientist who is often called “the godfather of A.I.” […]
New knowledge incorporates itself into your existing networks in the form of subtle adjustments. […] small changes create the possibility for profound transformations.
[…] For decades, Hinton tinkered, building bigger neural nets structured in ingenious ways. […] He didn’t anticipate the speed with which, about a decade ago, neural-net technology would suddenly improve. Computers got faster, and neural nets, drawing on data available on the Internet, started transcribing speech, playing games, translating languages, even driving cars. Around the time Hinton’s company was acquired, an A.I. boom began, leading to the creation of systems like OpenAI’s ChatGPT and Google’s Bard, which many believe are starting to change the world in unpredictable ways.
[…] Earlier this year, Hinton left Google, where he’d worked since the acquisition. He was worried about the potential of A.I. to do harm, and began giving interviews in which he talked about the “existential threat” that the technology might pose to the human species. The more he used ChatGPT, an A.I. system trained on a vast corpus of human writing, the more uneasy he got.
One day, someone from Fox News wrote to him asking for an interview about artificial intelligence. Hinton enjoys sending snarky single-sentence replies to e-mails—after receiving a lengthy note from a Canadian intelligence agency, he responded, “Snowden is my hero”—and he began experimenting with a few one-liners. Eventually, he wrote, “Fox News is an oxy moron.” Then, on a lark, he asked ChatGPT if it could explain his joke. The system told him his sentence implied that Fox News was fake news, and, when he called attention to the space before “moron,” it explained that Fox News was addictive, like the drug OxyContin. Hinton was astonished. This level of understanding seemed to represent a new era in A.I.
There are many reasons to be concerned about the advent of artificial intelligence. It’s common sense to worry about human workers being replaced by computers, for example. But Hinton has joined many prominent technologists, including Sam Altman, the C.E.O. of OpenAI, in warning that A.I. systems may start to think for themselves, and even seek to take over or eliminate human civilization. It was striking to hear one of A.I.’s most prominent researchers give voice to such an alarming view.
[…] Hinton thinks that “large language models,” such as GPT, which powers OpenAI’s chatbots, can comprehend the meanings of words and ideas.
[…] Hinton argues that the intelligence displayed by A.I. systems transcends its artificial origins.
[…] How useful—or dangerous—will A.I. turn out to be? No one knows for sure, in part because neural nets are so strange. In the twentieth century, many researchers wanted to build computers that mimicked brains. But, although neural nets like OpenAI’s GPT models are brainlike in that they involve billions of artificial neurons, they’re actually profoundly different from biological brains. Today’s A.I.s are based in the cloud and housed in data centers that use power on an industrial scale. Clueless in some ways and savantlike in others, they reason for millions of users, but only when prompted. They are not alive.
They have probably passed the Turing test—the long-heralded standard, established by the computing pioneer Alan Turing, which held that any computer that could persuasively imitate a human in conversation could be said, reasonably, to think. And yet our intuitions may tell us that nothing resident in a browser tab could really be thinking in the way we do. The systems force us to ask if our kind of thinking is the only kind that counts.
[…] As a scientific enterprise, mortal A.I. might bring us closer to replicating our own brains. But Hinton has come to think, regretfully, that digital intelligence might be more powerful. […] he says, suggests that “we should be concerned about digital intelligence taking over from biological intelligence.”
How should we describe the mental life of a digital intelligence without a mortal body or an individual identity? In recent months, some A.I. researchers have taken to calling GPT a “reasoning engine”—a way, perhaps, of sliding out from under the weight of the word “thinking,” which we struggle to define. […]
Precisely because he thinks that A.I. is truly intelligent, he expects that it will contribute to many fields. Yet he fears what will happen when, for instance, powerful people abuse it. […] He believes that autonomous weapons should be outlawed—the U.S. military is actively developing them—but warns that even a benign autonomous system could wreak havoc. “If you want a system to be effective, you need to give it the ability to create its own subgoals,” he said. “Now, the problem is, there’s a very general subgoal that helps with almost all goals: get more control. The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer.” (Control, he noted, doesn’t have to be physical: “It could be just like how Trump could invade the Capitol, with words.”)
[…] “If the U.N. really worked, possibly something like that could stop it. Although, even then, A.I. is just so useful. It has so much potential to do good, in fields like medicine—and, of course, to give an advantage to a nation via autonomous weapons.” […]