Nuclear-Powered Search
One-upmanship is driving demand for artificial intelligence and the unsustainable amounts of energy needed to power it all.
In a growing number of markets around the world, Google searches generate an “AI overview” response. Already, there are browser extensions to block them, with at least one available in multiple languages. But blocking Google’s AI overview with a browser extension does not turn it off, and it does not save energy. Instead, these extensions simply filter AI overviews out, keeping them from appearing on the screen.

To power artificial intelligence in all its various forms, big tech companies are increasingly turning to nuclear energy despite the downsides, which include rare accident risks, possible nuclear proliferation, and definite radioactive nuclear waste. New technologies promise either to reduce waste (so-called “next-generation fission,” which relies on technology invented or demonstrated many decades ago) or to transform nuclear power entirely from fission to fusion (an area Google has been investing in since 2015), but both require far more time and investment to become a reality. In the meantime, what is driving demand for AI appears to be one-upmanship among the tech giants themselves. Although creating demand for new products and services is part of big tech’s financial success, the energy costs of adding AI to everyday activities, such as searching the Internet, are proving unsustainable.

Google’s “AI overview” responses build upon Google’s search results, an offshoot of a federally funded project at Stanford University. Search results themselves “require an extraordinary amount of computing resources,” as John Battelle put it in his 2005 Wired article. Indeed, when then-graduate students Sergey Brin and Larry Page first jacked into Stanford’s broadband campus network, their experiment soon “consumed nearly half of Stanford’s entire network bandwidth, an extraordinary fact considering that Stanford was one of the best-networked institutions on the planet. And in the fall of 1996 the project would regularly bring down Stanford's Internet connection.”
Today, each ChatGPT query uses approximately ten times as much energy as a Google search. That’s enough to power a lightbulb for several minutes. Although that often-cited comparison is disputed, even if new efficiencies bring the cost of a ChatGPT query down to par with a Google search, Google searches themselves are energy intensive, and AI energy demands are “out of control,” reports Wired. For these reasons, some cities and countries that lack the power infrastructure to support new data centers, where processing for AI gets done, are placing moratoriums on building them. Amsterdam, Dublin, and Singapore have already done so, according to a report published last fall by McKinsey & Company.
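A rough back-of-the-envelope sketch shows how those often-cited claims fit together. The per-query figures below are commonly quoted estimates, not measurements (roughly 0.3 watt-hours per Google search and 2.9 watt-hours per ChatGPT query), and the bulb wattages are illustrative assumptions:

```python
# Assumed, commonly quoted estimates -- not measured values.
GOOGLE_SEARCH_WH = 0.3   # watt-hours per Google search
CHATGPT_QUERY_WH = 2.9   # watt-hours per ChatGPT query

# Ratio behind the "ten times" claim.
ratio = CHATGPT_QUERY_WH / GOOGLE_SEARCH_WH

# Minutes a bulb could run on one ChatGPT query's energy:
# minutes = (watt-hours / watts) * 60
minutes_incandescent = CHATGPT_QUERY_WH / 60 * 60  # old 60 W incandescent
minutes_led = CHATGPT_QUERY_WH / 10 * 60           # modern 10 W LED

print(f"ChatGPT/Google energy ratio: about {ratio:.1f}x")
print(f"One query runs a 60 W bulb for about {minutes_incandescent:.1f} min")
print(f"One query runs a 10 W LED for about {minutes_led:.1f} min")
```

Under these assumptions, one query works out to roughly three minutes of an incandescent bulb, consistent with the “several minutes” figure; a more efficient LED stretches the same energy much further.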
Although AI requires lots of energy, using AI may be the only way to solve the energy challenges of using AI. This is the so-called “AI energy paradox,” as described in a white paper published 21 January 2025 by the World Economic Forum. In other words, people may be able to use AI to inform efforts to transform our energy infrastructure, improve efficiencies, accelerate renewable energy integration, and make power grids more resilient. But from a usage standpoint, how many of us are using AI in those ways, or to, say, “analyse significant volumes of data to provide insights into climate trends and the effectiveness of emissions reduction strategies that can lead to faster action”?
Instead, people are using AI to search or “to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages,” which was Google’s definition of AI, according to a 2018 article from the Bulletin of the Atomic Scientists. These days, Google’s definition of AI is far more nuanced. But it still lumps together very different uses of AI, placing the more widespread day-to-day tasks (speech transcription, image recognition, and language translation) alongside more intensive tasks that most people do not do, such as predictive modeling, data analytics, and cybersecurity.
And this is where one-upmanship comes in, driving demand for AI in widespread, day-to-day tasks such as search. But imagine for a moment that people start using AI for search in large numbers. Then it won’t be long before people start making content that is, let’s call it, “AIO” (AI-optimized) to replace content that is SEO (search-engine optimized), so that their websites reach a broader audience. This could lead to a kind of nightmare scenario in which people use AI to make AIO material, and so “risk killing the creativity that AI LLMs [large language models] rely on for their training data and end up with useless results from AI searches based on AI-generated gloop,” writes Jowi Morales in Tom’s Hardware.

Today, Wired’s Senior Reviews Editor Julian Chokkattu published a reported op-ed titled “It's Time to Kill Siri,” referring to Apple’s virtual assistant. Though enabled with AI, Siri is generally underused. Instead, Chokkattu reports, people mostly use Siri for “playing music, checking weather, and setting timers, and aren't even pushing the boundaries of its current, relatively limited, capabilities. It's hard to see that changing anytime soon, even if Siri's feature-packed next generation arrives as promised.”
Speaking of anytime soon, also today Apple announced new AI initiatives beyond a next-generation Siri, building more AI features into more of its operating systems, though with less hype this time. So the trend is toward more AI, whether people know it or not. Indeed, for Wired’s Chokkattu, killing Siri is not at all about reducing AI usage. Rather, it’s about giving people a newly named tool (a rebranding, like Google renaming “Google Assistant” as “Gemini”) to help them transition to using AI more.
If Big Tech has its way, that transition will lead to more nuclear power and more rebranding. Expected to come online in 2028, “Three Mile Island Nuclear Station, Unit 1” is to power data centers for Microsoft. It has been renamed the “Christopher M. Crane Clean Energy Center.”
Laws and policies always lag behind technological innovations. But with AI, the amount of energy consumption is well known and growing. Although the refrain from corporations is often to ‘trust the market,’ laws and policies allow the market to discount effects on the environment. So, a thought experiment: what would be the various effects of putting policies and laws in place to eliminate or curb the use of AI for everyday tasks such as internet searches and queries? Beyond the environment and increased energy usage, are there other reasons to do so, such as consumer choice, allowing people to turn off such AI capabilities, parents to turn them off on their children’s devices, or schools to ban their use by students in class?