The advent of Large Language Models (LLMs) such as ChatGPT has led to an explosion of interest in artificial intelligence (AI). Prognostications regarding AI’s imminent future, both utopian and dystopian, abound. Futurists, including Silicon Valley tech bros and their venture capitalist backers, suggest that an Artificial General Intelligence (AGI), as intelligent as or more intelligent than a human, will exist before the decade is out, and look to AI as a potential solver of the myriad woes that beset modern society. Hopes abound that AI might manage traffic, surveil criminals, tutor students, streamline production, devise new medical cures, and even come up with novel solutions to the problems of war and climate change.
Like promises, perils abound. Dystopians—otherwise known as Doomers—worry that AI will disrupt labour markets, further empower the wealthy over the poor, fall into the hands of hackers and criminals, and even, should AGI be developed with no ethical safeguards, doom humanity itself. Either way, as the Future of Life Institute notes, ‘Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.’
I believe the biggest problem with AI and related new technologies such as cryptocurrency has, so far, generally flown under the radar. These technologies exacerbate climate change when used at scale. They work well for a small handful of enthusiasts, but should these technologies integrate into the wider society and become the standard for most of us, they will rapidly deplete our energy resources and exponentially accelerate the warming of our already overheated planet. We need to move from thinking of intelligence as merely mental to a recognition that all intelligence has an energy cost, and the cost of machine intelligence vastly outweighs that of human intelligence. These technologies might well doom humanity, not directly through their decisions, but indirectly through our decision to use them at the expense of warming the planet. Artificial intelligence is a hazard to Earth’s ecology.
Embodied intelligence
Valerie Hudson writes of generative AI: ‘This is an intelligence based on language alone, completely disembodied. Every other intelligence on Earth is embodied, and that embodiment shapes its form of intelligence. Attaching a robot to an AI system is arguably attaching a body to a preexisting brain, rather opposite to how humans evolved a reasoning brain as part of a body.’ [i]
All animal intelligences that we have heretofore encountered are likewise embodied and embedded, products of an evolutionary process that has formed them to fit their environment. AI is different. It is not evolved but designed. But Hudson is wrong to call it disembodied. AI is as much a part of the material environment as we are; however, because it is not a product of evolution, it is not necessarily well suited to that environment.
It is easy to think of AI as disembodied—merely algorithms that calculate and create in a place called ‘the cloud’. It sounds so clean. So nice. So cerebral. But there is, of course, no ‘cloud’. Cyberspace is an illusion. Computing is a physical process requiring machines, cables, and energy. The production and storage of data take energy. And we produce a lot of data. According to the World Economic Forum, in one day we produce forty times more bytes of data than there are stars in the observable universe—44 zettabytes of data. That’s 44 x 1,000,000,000,000,000,000,000 bytes. Much of this data is not particularly productive. It includes 500 million tweets, 294 billion emails, 4 million gigabytes of data on Facebook, 4,000 gigabytes from each connected car, 65 billion messages on WhatsApp, and 5 billion Google searches.[ii]
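A quick back-of-envelope check of that comparison, assuming the common astronomical estimate of roughly one sextillion stars (a one followed by twenty-one zeros) in the observable universe: 44 zettabytes is forty-four sextillion bytes, so the ratio of bytes to stars works out to about forty-four to one, hence ‘forty times more’.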
One might argue that none of this is AI. But all this internet activity is precisely what is needed to train LLMs and generative AI. How would Midjourney be able to generate a picture in the style of van Gogh other than by having been fed digitised versions of his paintings? LLMs form stochastic models of human language, able to respond to our queries precisely because they have seen millions of web pages of human writing. The public web is the primary training source for today’s large language models. It forms both these programs’ memory and their experience.
According to Common Crawl, a service that crawls the web every month to see what is out there, in June 2023 the web contained more than 3 billion pages and approximately 400 terabytes (a terabyte is one million megabytes) of uncompressed data.[iii]

This data is stored in massive server farms, often built in rural areas. Companies such as Google, Amazon, Microsoft, and Meta have placed millions of square feet of server space in rural Virginia, California, Texas, Washington, and Oregon. These centres count on cheap land, cheap electricity, and tax incentives from dying small towns looking to attract capital. They are part of a long tradition of appropriation of rural resources for urban development: ‘In the same ways that urban areas depend on agricultural lands and distant resources for food, energy, materials, and water, the growth of digital capitalism also depends on rural resources to power and secure our Facebook status updates, Google photos, Kindle obsessions, Netflix streaming services, and iTunes music libraries.’ One of Microsoft’s data centres sits in the middle of potato fields in Quincy, Washington. The facility is over 450,000 square feet and houses tens of thousands of computers. It consumes thirty percent more energy than all the people in the entire county. A single server farm can consume as much energy as 40,000 homes. The Washington site employs about seventy-five people. While these sites bring few jobs, they do bring noise: the air-conditioning units needed to keep the massive banks of computers cool produce a loud hum that can be heard for miles.
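For a rough sense of scale, assuming an average home uses on the order of 10,000 kilowatt-hours of electricity a year (a typical American figure): 40,000 homes add up to roughly 400 gigawatt-hours annually, the equivalent of a single facility drawing about 45 megawatts continuously, around the clock, every day of the year.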
In terms of CO2, a study from the University of Massachusetts Amherst found that training a typical large AI language model emits 284 tons of carbon dioxide, five times the lifetime emissions of a mid-sized car, or the equivalent of more than a thousand round-trip flights from London to Rome. And this is only increasing. As deep learning models grow more and more sophisticated, they consume more data. Their carbon footprint increased by a factor of 300,000 between 2012 and 2018.[iv] If data centres were a nation, they would rank between Japan and India in annual energy use. By 2030, it is estimated that data centres will account for as much as thirty percent of annual energy consumption in some countries.
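The flight comparison is easy to check roughly: if a London to Rome round trip produces on the order of 0.23 tons of CO2 per passenger, a commonly used estimate, then 284 divided by 0.23 yields roughly 1,235 flights, comfortably ‘more than a thousand’.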
Evil and thoughtlessness
Most technologies—and computer technologies are no exception—are developed with bright prospects in mind. These prospects are often exaggerated for the benefit of granting agencies or venture capitalists. Nevertheless, most technologies are conceived with a vision of producing some good in the world. Harm arises when our technologies distance us from, and thus obscure, the effects of our actions. Philosopher and theologian Emmanuel Lévinas underlines the importance of face-to-face encounter in our postmodern world.[v] A face makes a person real and immediate. The challenge, Lévinas says, is to extend our natural response to the faces we know to the faces of people we shall never meet, the faces found among other species, and the face of our planet as a whole.

The advantages of our computer systems (super-fast Internet, cloud storage, instant search results, money transfers in seconds) have transitioned in our minds from a luxury to a necessity, or even a fundamental right. We mean no harm. We are simply living our lives as best we can in our technologically saturated world. Most of us don’t know that an email sent to the person sitting next to us, or in the next office, may travel across the entire continent, or even to another continent, to the company’s server and back. All the undeleted email, text, Instagram, and other messages of countless users across the planet remain stored in server farms, located in obscure corners of the world, gobbling up land, water, and energy.
Sociologist Andrew Kimbrell has dubbed the evil perpetrated on ‘no one’ by ‘no one’ cold evil—an evil not of anger or hatred but of distance and disinterest. Kimbrell notes:
[F]ew of us relish the thought that our automobile is causing pollution and global warming or laugh fiendishly because refrigerants in our air conditioners are depleting the ozone layer. I have been in many corporate law firms and boardrooms and have yet to see any ‘high fives’ or hear shouts of satisfaction at the deaths, injuries, or crimes against nature these organizations often perpetrate… We are confronted with an ethical enigma; far from the simple idea of evil we harbored in the past, we now have an evil that apparently does not require evil people to purvey it.[vi]
Cold evil requires a rethinking of sin. While the medieval seven deadly sins were individual sins of commission, today much of the evil in the world comes from corporate acts. Many are sins of omission. Sin in a globalized world is communal and often damages society as a whole. In his encyclical Laudato Si’, Pope Francis has inveighed against technologically enhanced sins against nature and the poor, calling Christians to a new level of responsibility for the world, whose stewardship has been entrusted to them. Putting the label of sin on our technological isolation from our neighbours—an isolation promoted by our cars, smartphones, Zoom, and AI—is a hard pill to swallow. The story of the Good Samaritan, however, demands that we do, pointing out that we need not be the one who beat the man and left him on the road to be complicit in his plight.
In her landmark study, Eichmann in Jerusalem, Hannah Arendt notes that many Germans in the 1930s and 40s were not actively antisemitic. They simply went on with their lives, turning a blind eye to what was happening around them. Others aided the Nazi machine, not by force of arms, but by simply shuffling papers or ‘doing their jobs’. Of those who refused to be complicit in the Nazi machine, she writes, ‘they asked themselves to what an extent they would still be able to live in peace with themselves after having committed certain deeds; and they decided that it would be better to do nothing, not because the world would then be changed for the better, but because only on this condition could they go on living with themselves’.[vii]
Notice here that she speaks not of doing, but of not doing, not going along with ‘business as usual’. Each of us must ask ourselves where we are ‘going along’ as a cog in a wheel of cold evil, what faces we are not seeing, and what we might choose to do without. We may not change the world, but as Arendt notes, ‘in the world of appearances, where I am never alone and always too busy to be able to think, [t]he manifestation of the wind of thought is not knowledge; it is the ability to tell right from wrong, beautiful from ugly. And this, at the rare moments when the stakes are on the table, may indeed prevent catastrophes.’[viii]
We may not need an AGI that is smarter than we are for AI to precipitate a catastrophe or doom humanity. All we may need is to continue to thoughtlessly follow our current path, using AI to solve trivial problems, not because we need it, but because it is there.

Noreen Herzfeld is Director of the Benedictine Spirituality and the Environment Program at St. John's School of Theology and Seminary, Collegeville MN, USA.
† This article is an excerpt of Noreen Herzfeld, ‘Call Me Bigfoot: The Ecological Footprint of AI and Related Technologies’ in Ted Peters (Ed.) The Promise and Peril of AI and IA: New Technology Meets Religion, Theology, and Ethics (ATF Press, 2025), pp131-140. The excerpt reprinted here is from pp131-134,138-140 of the original. Used by permission.
[i] Valerie Hudson, ‘Perspective: Why putting the brakes on AI is the right thing to do’. Deseret News, April 16, 2023.
[ii] Benedetta Brevini, Is AI Good for the Planet? (Polity Press, 2022), pp42–43.
[iii] Michael Humor, ‘How much data from the public Internet is used for training LLMs?’ Medium, 25 September 2023.
[iv] Brevini, Op.cit., pp66–67.
[v] Emmanuel Lévinas, Totality and Infinity (Duquesne University Press, 1969), p199.
[vi] Andrew Kimbrell, ‘Cold Evil: Technology and Modern Ethics’, revised transcript of the E. F. Schumacher Lecture, Oct 20, 2000.
[vii] Hannah Arendt, ‘Personal Responsibility under Dictatorship’, The Listener, BBC, August 6, 1964, p205.
[viii] Hannah Arendt, The Life of the Mind. Mary McCarthy (Ed.), (Harcourt Brace Jovanovich, 1978), p193.