OpenAI’s recent rollout of its new video generator Sora 2 marks a watershed moment in AI. Its ability to generate minutes of hyper-realistic footage from a few lines of text is astonishing, and has raised immediate concerns about truth in politics and journalism.
But Sora 2 is rolling out slowly because of its enormous computational demands, which point to equally pressing questions about generative AI itself: What are its true environmental costs? And will video generation make them much worse?
The recent launch of the Stargate Project — a US$500 billion joint venture between OpenAI, Oracle, SoftBank and MGX — to build massive AI data centres in the United States underscores what’s at stake. As companies race to expand computing capacity on this scale, AI’s energy use is set to soar.
The debate over AI’s environmental impact remains one of the most fraught in tech policy. Depending on what we read, AI is either an ecological crisis in the making or a rounding error in global energy use. As AI moves rapidly into video, clarity on its footprint is more urgent than ever.
Two competing narratives
From one perspective, AI is rapidly becoming a major strain on the world’s energy and water systems.
Alex de Vries-Gao, a researcher who has long tracked the electricity use of bitcoin mining, noted in mid-2025 that AI was on track to surpass it. He estimated that AI already accounted for about 20 per cent of global data-centre power consumption, a share likely to double by year’s end.
According to the International Energy Agency, data centres accounted for up to 1.5 per cent of global electricity consumption last year, with their consumption growing four times faster than total global demand. The IEA predicts that data centres will more than double their electricity use by 2030, with AI processing the leading driver of that growth.
Research cited by MIT Technology Review concurs, estimating that by 2028, AI’s power draw could exceed “all electricity currently used by US data centers” — enough to power 22 per cent of US households each year.
‘Huge’ quantities
AI’s water use is also striking. Data centres rely on ultra-pure water to keep servers cool and free of impurities. Researchers estimated that training GPT-3 consumed about 700,000 litres of freshwater at Microsoft’s American facilities. They predict that global AI water demand could reach four to six billion cubic metres annually by 2027.
Hardware turnover adds further strain. A 2023 study found that chip fabrication requires “huge quantities” of ultra-pure water, energy-intensive chemical processes and rare minerals such as cobalt and tantalum. Manufacturing the high-end graphics processing units — the engines driving the AI boom — has a much larger carbon footprint than most consumer electronics.
Generating an image uses about as much electricity as running a microwave for five seconds, while generating a five-second video clip can use as much as running one for more than an hour.
The next leap from text and images to high-definition video could dramatically increase AI’s impact. Early testing bears this out: researchers have found that energy use for text-to-video models quadruples when video length doubles.
The case for perspective
Others see the alarm as overstated. Analysts at the Center for Data Innovation, a technology and policy think tank, argue that many estimates about AI energy use rely on faulty extrapolations. GPU hardware is becoming more efficient each year, and much of the electricity in new data centres will come from renewables.
Recent benchmarking puts AI’s footprint in context. A typical chatbot response consumes about 2.9 watt-hours (Wh), roughly 10 times the energy of a Google search. Google recently claimed that a typical Gemini prompt uses only 0.24 Wh and 0.25 mL of water, though independent experts note those numbers omit indirect energy and water used in power generation.
Context is key. An hour of high-definition video streaming on Netflix uses roughly 100 times more energy than generating a text response. An AI query’s footprint is tiny, yet data centres now process billions daily, and more demanding video queries are on the horizon.
Jevons paradox
It helps to distinguish between training and using AI. Training frontier models such as GPT-4 or Claude 3 Opus required thousands of graphics chips running for months, consuming gigawatt-hours of power.
Using a model consumes only a tiny amount of energy per query, but queries happen billions of times a day. Eventually, the energy used to run AI will likely surpass the energy used to train it.
The least visible cost may come from hardware production. Each new generation of chips demands new fabrication lines, heavy mineral inputs and advanced cooling. Italian economist Marcello Ruberti observes that “each upgrade cycle effectively resets the carbon clock” as fabs rebuild highly purified equipment from scratch.
And even if AI models become more efficient, total energy use keeps climbing. In economics, this is known as the Jevons paradox: in 19th-century Britain, coal consumption rose rather than fell as steam engines became more efficient at using it. As AI researchers have noted, as per-query costs fall, developers are incentivized to find new ways to embed AI into every product. The result is more data centres, more chips and more total resource use.
A problem of scale
Is AI an ecological menace or a manageable risk? The truth lies somewhere in between.
A single prompt uses negligible energy, but the systems enabling it — vast data centres, constant chip manufacturing, round-the-clock cooling — are reshaping global energy and water patterns.
The International Energy Agency’s latest outlook projects that data-centre power demand could reach 1,400 terawatt-hours by 2030, the equivalent of adding the electricity demand of several mid-sized countries to the world’s grid. AI will account for a quarter of that growth.
Transparency is vital
Many of the figures circulating about AI energy use are unreliable because AI firms disclose so little. The limited data they release often employ inconsistent metrics or offset accounting that obscures real impacts.
One obvious fix would be to mandate disclosure rules: standardized, location-based reporting of the energy and water used to train and operate models. The European Union’s Artificial Intelligence Act already requires developers of “high-impact” systems to document their computation and energy use.
Similar measures elsewhere could guide where new data centres are built, favouring regions with abundant renewables and water, and could encourage longer hardware lifecycles instead of annual chip refreshes.
Balancing creativity and cost
Generative AI can help unlock extraordinary creativity and provide real utility. But each “free” image, paragraph or video has hidden material and energy costs.
Acknowledging those costs doesn’t mean we need to halt innovation. It means demanding transparency about how great they are, and who pays them, so that AI’s environmental impacts can actually be addressed.
As Sora 2 begins to fill social feeds with highly realistic visuals, the question won’t be whether AI uses more energy than Netflix, but whether we can expand our digital infrastructure responsibly enough to make room for both.

Robert Diab does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article was originally published on The Conversation. Read the original article.