Thirsty Bots Are Drinking Our Scarce Water

Mapping the water footprint of AI.

Jonathan Zawada for Noema Magazine

Nathan Gardels is the editor-in-chief of Noema Magazine.

In various Noema essays, we have sought to underscore the material basis of the information/clean energy economy. Far from the myth of de-materialization, the extraction and use of resources — from the cobalt, copper and rare earth elements in energy-storage batteries to the fossil fuels burned to power data server farms — is no less intensive than in the maligned industrial smokestack era.

Now, we are learning that generative AI is not just sucking up our abundant content, but also drinking our scarce water.

According to a new study by University of California, Riverside researchers, roughly two weeks of training for GPT-3 consumed about 700,000 liters of freshwater. Global AI demand is projected to account for 4.2 to 6.6 billion cubic meters of water withdrawal by 2027, which is more than the total annual water withdrawal of Denmark, or half that of the United Kingdom.
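
To put the two figures on a common scale, here is a rough back-of-the-envelope conversion using only the numbers above (and keeping in mind that the GPT-3 figure measures water consumed, while the 2027 projection measures water withdrawn):

$$
700{,}000 \ \text{liters} = 700 \ \text{m}^3,
\qquad
\frac{4.2 \times 10^{9} \ \text{m}^3}{700 \ \text{m}^3} \approx 6 \times 10^{6},
\qquad
\frac{6.6 \times 10^{9} \ \text{m}^3}{700 \ \text{m}^3} \approx 9 \times 10^{6}.
$$

In other words, the projected 2027 withdrawal is on the order of six to nine million times the water consumed in that single training run.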

Much of the water used to cool the cloud is lost as vapor, “where the water will evaporate and remove the heat from the data center to the environment,” according to Shaolei Ren, one of the study’s authors.

Other research has pointed out the growing carbon footprint of AI. For example, in 2019, researchers found that training one of the larger AI models can emit as much greenhouse gas as five average American cars over their entire lifetimes.

The chief concern of the Riverside researchers is that the rapid advance of generative AI, which is being commercialized with billions in new investment, will become ever thirstier even as climate-change-induced drought makes freshwater resources ever scarcer.

One silver lining of AI’s water consumption, if we can call it that, is that training runs can be scheduled flexibly. AI models could train during cooler hours, when less water is lost through evaporation. “AI training is like a very big lawn and needs lots of water for cooling,” Ren says. “We don’t want to water our lawns during the noon hour, so let’s not water our AI [at] noon either.”

There is no reason responsible governments could not mandate as much, just as lawn watering during the heat of the day was prohibited in California during recent drought years.

At the same time, Ren acknowledges, this may conflict with carbon-efficient scheduling. If solar power comes to supplant fossil fuels as the primary energy source for training AI models at large server centers, its optimal moment is the most intense heat of the day, around noon. That, too, could be managed if we are resource-mindful. “We can’t shift cooler weather to noon, but we can store solar energy, use it later, and still be ‘green,’” he proposes.
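
The scheduling idea in the last two paragraphs lends itself to a simple illustration. Below is a minimal, hypothetical sketch in Python of how a deferrable training job might be timed when both water and carbon are weighed; the hourly profiles, weights and function names (pick_training_window, hourly_cost) are invented for this example and do not come from the Riverside study.

```python
# Hypothetical sketch: choosing when to run a deferrable AI training job when
# both evaporative water loss and grid carbon are weighed. All hourly numbers
# are illustrative placeholders, not figures from the Riverside study.

HOURS = range(24)

# Water use per unit of computing energy (liters per kWh) tends to peak with
# the afternoon heat; grid carbon intensity (gCO2 per kWh) tends to dip when
# solar output peaks around noon.
water_per_kwh = [1.0 + 0.8 * max(0.0, 1 - abs(h - 14) / 7) for h in HOURS]
carbon_per_kwh = [400 - 250 * max(0.0, 1 - abs(h - 12) / 6) for h in HOURS]


def hourly_cost(hour, water_weight=1.0, carbon_weight=0.01):
    """Blend the water and carbon costs of one hour into a single score."""
    return water_weight * water_per_kwh[hour] + carbon_weight * carbon_per_kwh[hour]


def pick_training_window(hours_needed, water_weight=1.0, carbon_weight=0.01):
    """Return the start hour and cost of the cheapest contiguous window."""
    best_start, best_cost = 0, float("inf")
    for start in HOURS:
        window = [(start + i) % 24 for i in range(hours_needed)]
        cost = sum(hourly_cost(h, water_weight, carbon_weight) for h in window)
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost


if __name__ == "__main__":
    # With cheap solar pulling carbon down at midday, the blended score favors
    # a daytime window despite the extra evaporation.
    start, _ = pick_training_window(hours_needed=6)
    print(f"Grid-powered: cheapest 6-hour window starts at hour {start}")

    # If stored solar energy covers the job, the carbon term drops out, the
    # water term dominates, and the window shifts to the cool overnight hours.
    start_stored, _ = pick_training_window(hours_needed=6, carbon_weight=0.0)
    print(f"With stored solar: cheapest 6-hour window starts at hour {start_stored}")
```

The toy numbers matter only for the qualitative behavior Ren describes: when stored solar neutralizes the carbon penalty, the water term dominates and training naturally drifts to the cooler overnight hours.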

Perhaps the greatest hurdle to becoming cognizant of the resource intensity of the information age is experiential. We can see and breathe the exhaust from diesel trucks or coal plants. The dull clack on the keyboard of our laptops for a chatbot query doesn’t get our digits dirty, spew air pollution or drain our drinking glass. It is not immediately apparent to us what is happening out there, far away and unseen behind the computer screen.

Only if we become fully aware of the chain of inputs and consequences in our effortless digital searches can we ever come to grips with how to address them, whether that entails pricing in the real costs of these externalities as we use our devices or, if push comes to shove in climate matters, mandating how and when they can be used.