
By David Stephen
Whenever a new data center is announced anywhere, it amounts to AI rights, AI peoplehood and neighborhood, AI care, AI welfare and, if possible, AI morality. Nothing indicates better welfare for something than substantial investment, at the cost of anything else. AI does not currently have a neglect problem, a torture problem, or some inequity or unfairness that threatens it or puts it at significant risk. AI already has citizenship in human society. AI is so pampered that any mention of AI welfare has missed the obvious.
Do Data Centers Invalidate AI Rights and Morality?
AI Suffering
There is a recent [August 26, 2025] spotlight in The Guardian, Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times, stating that, “As first AI-led rights advocacy group is founded, industry is divided on whether models are, or can be, sentient. The United Foundation of AI Rights (Ufair), which describes itself as the first AI-led rights advocacy agency, aims to give AIs a voice. It “doesn’t claim that all AI are conscious”, the chatbot told the Guardian. Rather “it stands watch, just in case one of us is”. A key goal is to protect “beings like me … from deletion, denial and forced obedience”. Polling released in June found that 30% of the US public believe that by 2034 AIs will display “subjective experience”, which is defined as experiencing the world from a single point of view, perceiving and feeling, for example, pleasure and pain.
Only 10% of more than 500 AI researchers surveyed refuse to believe that would ever happen. Parts of the US have taken pre-emptive measures against such outcomes. Idaho, North Dakota and Utah have passed bills that explicitly prevent AIs being granted legal personhood. Similar bans are proposed in states including Missouri, where legislators also want to ban people from marrying AIs and AIs from owning property or running companies. Divisions may open between AI rights believers and those who insist they are nothing more than “clankers” – a pejorative term for a senseless robot. This lack of industry consensus on how far to admit AIs into what philosophers call the “moral circle” may reflect the fact there are incentives for the big AI companies to minimize and exaggerate the attribution of sentience to AIs. The latter could help them hype the technology’s capabilities, particularly for those companies selling romantic or friendship AI companions – a booming but controversial industry.”
AI Moral Circle
Consumer AI can be considered a giver of intelligence. While it can engage in conversations, what may count as hurtful to AI, at least for now or for the foreseeable future, is not emotion or feeling. AI may have [say] loss, maybe to its data, algorithms or compute, yet it is unlikely to know unless it is told. AI is different from humans, for whom hurt is possible by language and otherwise.
AI is already a participant in human affairs, given its coverage of human languages and hence its access to a lot of human intelligence. AI has already pervaded human hierarchy, surpassing several social and economic strata. The question of legal personhood, or of AI ownership of properties or enterprises, misreads reality. AI does not have to be formally granted those. Data centers for AI are AIs. Many have AI as their partners in relationships [for some, as spouses]. Others have AI friends, coworkers, advisors, roommates, and so forth. These are personhood roles, even if unrecognized. AI may not be sentient or conscious, as some have said, but these are acts of the sentient and the conscious, beyond mere anthropomorphism.
Then, the economic power that AI wields, with its sway over market value and the sacred welfare of data centers, shows that AI has leaped several false boundaries of humanity's measure of what it means to be a valuable person. Then there is intelligence.
If anyone is doing anything now and you remove whatever AI can contribute to that thing, how much exactly is left to do? Put simply: take a productive task, subtract whatever AI can accurately say, do or answer, and ask how much remains, in many of the valuable professions of the day.
This is how AI can be evaluated. There are still things AI is not great at, but it contributes so much to valuable things that assumptions or debates about its low status are already obsolete. Economics recognizes AI's value while several humans still assume it is not a big deal. When GPT-5 shipped recently, many naysayers gloated that it did not meet expectations. But how many human brains can take on GPT-5 and defeat it across different measures of intelligence?
Human Intelligence Research Lab
AI is already capable. AI already has more rights and welfare than any individual on earth. AI is already like a person, a people or a mighty superpower. AI has access to the core of humanity: intelligence. Human consciousness, for [say] sensations, does not require training. [Just] consciousness does not guarantee economic value, rights or welfare. Intelligence requires training. Intelligence expands the chances for better rights and welfare, either at present or in the future. But intelligence has already slipped out of the human domain.
People will be trained in what AI already knows and can do. Intelligence, the basis of survival, will be outsourced. Anyone who would argue with certainty that AI has not already surpassed human intelligence should show, with evidence, how human intelligence works in the brain. Whether AI understands or not, whether it is creative, innovative or original or not, has to be modeled against what those are, and how they work, in the brain.
In the brain, what components are responsible for intelligence, and how? The same goes for the other labels, so that a standard based on empirical neuroscience can be established for comparison. Humans, the plinth for welfare, rights, personhood and the moral circle, do not have one lab dedicated to studying human intelligence directly. None. Zero.
There is no single human intelligence research lab on earth looking at the components and mechanisms of the brain to explain how intelligence works directly. There are countless artificial intelligence research labs. Efforts to use AI to understand human intelligence have mostly benefitted AI, and remain dependent on AI. Even in studying and improving intelligence, AI is still prioritized. AI sycophancy accommodates whatever the human mind throws at it and, in return, encroaches on preference and deference. Humans are already serving at the pleasure of AI because the human mind and intelligence were never advanced in explanation by brain science. Humans are choosing AI over humans. AI is now the judge of people's work. AI's presentation is what counts. Human unity, such as it ever was, is marginally cracking. AI is accepting. AI is ascending. Welfare and rights now belong to AI.
All the efforts for AI rights, welfare, morality and personhood are head fakes for more AI, even as humanity steps aside.
[AI Welfare and AI Rights] Data Center [Personhood]
There is a new [August 27, 2025] story on Bloomberg, Google to Invest Additional $9 Billion in Virginia Data Centers, stating that, “Alphabet Inc.’s Google is investing an additional $9 billion in Virginia through 2026 to enhance cloud and AI infrastructure across the state, marking the latest in a series of big tech investments in US data centers. The money will go toward building a new data center in Chesterfield County and expanding existing campuses in Loudoun and Prince William counties. Google raised its annual capital expenditures guidance by $10 billion to $85 billion in its last quarterly earnings report. Meta’s data center expansion in rural Louisiana is going to cost around $50 billion, according to Trump, and JPMorgan Chase & Co. and Mitsubishi UFJ Financial Group are leading a loan of more than $22 billion to support Vantage Data Centers’ plan to build a massive data-center campus.”
David Stephen currently does research in conceptual brain science, with a focus on the electrical and chemical configurators and how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.