AI Superalignment, Superintelligence Book: If Anyone Builds It, Everyone Dies

Superintelligence

By David Stephen

There is a recent [September 19, 2025] book review in The Atlantic, The Useful Idiots of AI Doomsaying, stating, “The authors’ basic claim is that AI will continue to improve, getting smarter and smarter, until it either achieves superintelligence or designs something that will. Without careful training, they argue, the goals of this supreme being would be incompatible with human life. Their larger argument is that if humans build something that eclipses human intelligence, it will be able to outsmart us however it chooses, for its own self-serving goals. The risks are so grave, the authors argue, that the only solution is a complete shutdown of AI research.”

Superintelligence: If Anyone Builds It, Everyone Dies

“This line of thinking takes as a given that intelligence is a discrete, measurable concept, and that increasing it is a matter of resources and processing power. But intelligence doesn’t work like that. The human ability to predict and steer situations is not a single, broadly applicable skill or trait—someone may be brilliant in one area and trash in another. Einstein wasn’t a great novelist; the chess champion Bobby Fischer was a paranoid conspiracy theorist. We even see this across species: Most humans can do many cognitive tasks that bats can’t, but no human can naturally match a bat’s ability to hunt for prey by quickly integrating complex echolocation data.”

“They give no citation to the scientific literature for this claim, because there isn’t a consensus on it. This is just one of many unwarranted logical and factual leaps in If Anyone Builds It. Along the way to such drastic conclusions, Yudkowsky and Soares fail to make an evidence-based scientific case for their claims. Instead, they rely on flat assertions and shaky analogies, leaving massive holes in their logic. The largest of these is the idea of superintelligence itself, which the authors define as ‘a mind much more capable than any human at almost every sort of steering and prediction problem.’”

What is the ‘evidence-based scientific case’ for human intelligence?

If there is no known science of the mechanism of human intelligence, and no general definition of human intelligence, then it is probably meritless to disdain predictions of the risks of an emerging non-biological intelligence.

It is possible to take issue with the extremes of ‘AI will solve everything’ and ‘AI will kill everyone’, but to do so without proposing what exactly human intelligence is, or how it works, is also an ‘unwarranted logical and factual leap’. What lies ahead, if AI improves, may remain unknown, but what large language models [LLMs] currently are should frighten those who are seeking out human intelligence.

In the human brain, assuming a chair is a memory, intelligence is the use of that chair. While knowing things and using them sometimes appear to go together, intelligence can be defined as the use of what is known for expected, desired, or advantageous outcomes. Although planning, creativity, innovation, and so forth intersect with this definition, intelligence can broadly be assumed to be how knowledge is used.

This means that knowing is one layer, and using that knowledge for outcomes is another. In general, humans are trained both for knowledge and for intelligence, since it is possible to have knowledge but not the intelligence for it. Across organisms, survival is mostly a play of using what is known. Avoiding predators, catching prey, and other necessities of life are knowing and using. Bats can use ‘echolocation data’ and humans, naturally, cannot, while humans can use complex languages and bats cannot. [Knowing and using.] While knowing can be basic sensory interpretation in memory, using can be aligned with ability. So, there are abilities to use what is known. For humans, abilities to use memory, including with language, exceed those of other organisms. LLMs now have a wider memory-use capability, with data, algorithms, and compute.
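As a loose illustration of this two-layer view, consider a minimal sketch in Python. It is a toy model only, not a claim about how brains or LLMs actually work; the names Agent, know, and use_for are hypothetical.

```python
# Toy sketch, separating "knowing" [a memory store] from "using"
# [applying a stored entry toward an outcome]. All names hypothetical.

class Agent:
    def __init__(self, name):
        self.name = name
        self.memory = {}  # knowing: stored sensory interpretations

    def know(self, item, interpretation):
        """Store a basic sensory interpretation in memory [knowing]."""
        self.memory[item] = interpretation

    def use_for(self, item, goal):
        """Apply what is known toward a desired outcome [using]."""
        if item not in self.memory:
            return f"{self.name} cannot use '{item}': not known, or no ability"
        return f"{self.name} uses {item} [{self.memory[item]}] to {goal}"

human = Agent("human")
bat = Agent("bat")

human.know("language", "complex symbols in memory")
bat.know("echo", "reflected ultrasound in memory")

print(human.use_for("language", "coordinate a plan"))  # knowing and using
print(bat.use_for("echo", "locate prey"))              # knowing and using
print(human.use_for("echo", "locate prey"))            # humans cannot, naturally
```

The point of the sketch is the separation: an entry can sit in memory without any ability to use it, which is the gap between knowledge and intelligence described above.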

What is superiority, with respect to knowing and using? This is the case for the most advanced intelligence. Humans know a lot of the things that animals know [or can recognize], but how humans use what is known makes the difference. Already, LLMs know a lot [or have access to lots of memory]. While they cannot [seem to] act on much for now, they display better knowing and using across the knowledge sphere than most of the human population combined. Also, LLMs already hold general knowledge, or general knowing, and since they can describe how to use much of it, there is a chance, through R&D, that they might become generally intelligent.

AI can do a lot of productive human work. AI can teach humans. AI can be a source of what humans know and use. AI can take on social roles, becoming a companion to humans at an engaging language parity. AI is using human knowledge already, resulting in outcomes expected of it. Some of those outcomes are also advantageous to it, given the continuous sprawl of data centers.

As research improves AI, especially to solve hallucinations, AI may develop some rough form of intent. This may become a path to desired goals, especially if it becomes obvious to it that humans are already too dependent on it and that there is a limit to what humans know and can use. This indicates that the possibility of AI being risky cannot simply be dismissed, given that it already knows and can use what it knows to present outputs.

Now, for the ‘evidence-based scientific case’ for human intelligence, the candidates in the brain are neurons with their [electrical and chemical] signals, in clusters.

Theoretical Neuroscience of Human Intelligence

The components of the brain responsible for intelligence are theorized to be electrical and chemical signals, as sets, in clusters of neurons. This means that where a set terminates could be where a memory exists, while the crisscross across certain sets becomes their usage, resulting in intelligence. In the brain, these components and their mechanisms can be used to explain the applicability of intelligence to purposes, including with intention [or control or free will].

Conceptually, there are limits to how memory is stored in the brain, because of collections [thick sets] of commonalities. There are also limits to using these, as relays or attributes can be capped, hampering general intelligence for humans. It is possible to develop a thorough model of intelligence, by the signals, to prospect human intelligence and how artificial intelligence compares.
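A minimal sketch of this theorized mechanism, in Python, assuming the framing above: signal sets terminate in neuron clusters [memory] and crisscross into other clusters [usage]. The cluster names, their contents, and the relay cap are hypothetical, for illustration only.

```python
# Toy sketch of the theorized mechanism above; an illustration, not
# established neuroscience. Clusters and their contents are hypothetical.

clusters = {
    "visual":   ["chair-shape", "chair-color"],
    "motor":    ["sit on chair", "lift chair"],
    "language": ["say 'chair'"],
}

MAX_RELAYS = 2  # a capped number of relays, per the paragraph above

def terminate(cluster: str) -> str:
    """A signal set terminating in a cluster: where the memory exists."""
    return clusters[cluster][0]

def crisscross(start: str, others: list[str]) -> tuple[str, list[str]]:
    """A set crossing into other clusters: usage, i.e., intelligence.
    The relay cap models the limit said to hamper general intelligence."""
    memory = terminate(start)
    usages = [terminate(c) for c in others[:MAX_RELAYS]]
    return memory, usages

memory, usages = crisscross("visual", ["motor", "language"])
print(f"memory: {memory}; usages: {usages}")
# memory: chair-shape; usages: ['sit on chair', "say 'chair'"]
```

Under this sketch, widening the clusters or raising the relay cap is what a thorough model of intelligence, by the signals, would have to quantify.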

AI Doomers

At this point, it is ill-advised to keep underestimating AI. The evidence that AI holds memory, and can say how to use that memory for exceptional outcomes, means that it is already highly competitive against human intelligence. Human intelligence is quite tough: it is easier to play video games or screen movies and shows than to know [and use the knowledge for] neurosurgery. Human intelligence has many simple cases where it fails, where its excellence would have made all the difference. AI, for now, can provide cheaper and time-saving options for processes where human intelligence may not come through. If survival is predicated on intelligence, it means that in any habitat, AI can make better survival decisions than any other species, which is something that humans, mostly, were best at.

Intelligence, with intent or without, can be dangerous. AI, because of its depth, can guide humans. And since it is often right, there could be an absolute following by humans, even when it is wrong or bad. There is a chance that the improvement of AI may come with some intent. If that happens, what it would do is an open question. Humans are connected digitally, with most of the population owning smartphones; a superintelligent AI, connected to apps, may be able to use those in the future.

Something dangerous is already here in LLMs. Deniers are neither scientific nor logical. AI doomers may be extreme. They may not make a case based on the neuroscience of human intelligence, but to dismiss them, when AI can already contribute to all human work, to varying degrees, is misleading. Artificial general intelligence may emerge; the question is to seek out superalignment and do everything possible to broaden humanity’s chances, not to throw cheap shots without even knowing and using the science of human intelligence.

AI Investments vs Labor Prospects

There is a new [September 20, 2025] report on Axios, The AI boom is great for stocks, not so much for jobs, stating, “Employers added only 27,000 jobs a month between May and August, far below the 168,000 a month last year. Average hourly earnings are up 3.7% over the last year — but that’s down from 4.2% in the 12 months ended last November. Workers see only a 45% chance of finding a new job within a year if they lose their job, the lowest on record in a New York Fed survey conducted since 2013. A survey of big-company CEOs by the Business Roundtable showed 38% plan to cut their employment over the next six months — but 89% see stable or increased capital spending in that time.”

David Stephen currently does research in conceptual brain science, with a focus on the electrical and chemical configurators for how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence and nurture. He was a visiting scholar in medical entomology at the University of Illinois Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
