Wednesday, September 26, 2007
In an Archdruid Report post a couple of weeks ago, I used the well-worn metaphor of bacteria in a petri dish to talk about the way that ecological limits constrain the range of possibilities open to any life form, Homo sapiens included. Several people objected to the comparison, insisting basically that human beings are enough smarter than microbes that the same rules don’t apply. Flattering to human vanity as this insistence may be, I have to admit to a certain skepticism about the claim in the light of current events.
According to the best current figures, world production of petroleum peaked almost two years ago and has been declining ever since; world production of all liquid fuels peaked more than a year ago, and is likewise declining; much of the Third World is already in desperate straits as its access to fossil fuels dries up — and government and business leaders across the industrial world, which has far more to lose from the twilight of cheap abundant energy than the Third World does, are still treating peak oil as a public relations problem. If we’re enough smarter than microbes in a petri dish to escape the same fate, we have yet to demonstrate it.
On a deeper level, of course, such comments miss the point just as thoroughly as the claim they’re meant to satirize. The value of the petri dish metaphor is that it shows how ecological processes work in a context simple enough to make them clear. The same pattern can be traced in more complex biological systems, human societies among them. The logic of the petri dish, after all, is the same logic that drove dieoff on Easter Island and among the lowland Maya: if you use the resources necessary to your survival at an unsustainable rate, you get the classic overshoot curve: population boom, followed by population bust.
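For readers who would rather see that curve than take it on faith, here is a minimal sketch in Python. None of the numbers come from any real population study; they are chosen purely to make the shape visible, but the boom-and-bust profile is the same one the petri dish produces.

```python
# A minimal sketch (illustrative numbers only): a population drawing down a
# one-time stock of resources traces the classic overshoot curve.

def overshoot(resource=1000.0, population=1.0, growth=0.5,
              need_per_capita=1.0, steps=40):
    """Population feeding on a nonrenewable stock: boom, then bust."""
    history = []
    for _ in range(steps):
        demand = population * need_per_capita
        consumed = min(demand, resource)
        resource -= consumed
        fed = consumed / demand if demand else 1.0
        # grows while fully fed, dies back in proportion to the shortfall
        population = population * (1 + growth * fed) * fed
        history.append(round(population, 1))
    return history

print(overshoot())   # rises steeply, peaks, then collapses once the stock is gone
```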
Thus humanity is no more exempt from ecological processes than from the law of gravity. The invention of airplanes doesn’t mean that gravity no longer affects us; it means that if we use a lot of energy, we can overcome the force of gravity and lift ourselves off the ground for a while. The same principle holds with the laws of ecology. Using an immense amount of energy, we lifted a minority of the world’s population high above the subsistence level for a while, but that doesn’t mean that ecological laws no longer affect us. It means that for three hundred years, we’ve been able to push past the limits normally imposed by those laws, by burning up huge amounts of fossil fuels. When the fossil fuels are gone, the laws will still be there.
One of the central principles of ecology, in fact, is that similar patterns can be traced among organisms at many different levels of complexity. The difference in intelligence between yeast and deer is many times greater than the difference between deer and human beings, and yet deer and yeast alike go through exactly parallel cycles of boom and bust when resource availability rather than predators functions as the primary means of population control. Thus it’s reasonable to look to ecological patterns among other living things for clues to the driving forces behind equivalent processes in human societies.
One ecological pattern that deserves especially close attention as we begin the long slide down the back end of Hubbert’s peak is the process called succession. Any of my readers who were unwise enough to buy a home in one of the huge and mostly unsold housing developments cranked out at the top of the late real estate bubble will be learning quite a bit about succession over the next few years, so it may be useful for more than one reason to summarize it here.
Imagine an area of bare bulldozed soil someplace where the annual rainfall is high enough to support woodland. Long before the forlorn sign saying “Coming Soon Luxury Homes Only $450K” falls to the ground, seeds blown in by the wind send up a first crop of invasive weeds. Those pave the way for other weeds and grasses, which eventually choke out the firstcomers. After a few years, shrubs and pioneer trees begin rising, and become anchor species for a young woodland, which shades out the last of the weeds and the grass. In the shade of the pioneer trees, saplings of other species sprout. If nothing interferes with the process, the abandoned lot can pass through anything up to a dozen different stages before it finally settles down as an old growth forest community a couple of centuries later.
This is what ecologists call succession. Each step along the way from bare dirt to mature forest is a sere or a seral stage. The same process shapes the animal population of the vacant lot, as one species after another moves into the area for a time, until it’s supplanted by another better adapted to the changing environment and food supply. It also proceeds underground, as the dizzyingly complex fabric of life that makes up healthy soil reestablishes itself and then cycles through its own changes. Watch a vacant lot in a different ecosystem, and you’ll see it go through its own sequence of seres, ending in its own climax community — that’s the term for the final, relatively stable sere in a mature ecosystem, like the old growth forest in our example. The details change, but the basic pattern remains the same.
Essential to the pattern is a difference in the way that earlier and later seres deal with energy and other resources. Species common in early seres – r-selected species, in ecologists’ jargon – usually maximize their control over resources and their production of biomass, even at the cost of inefficient use of resources and energy. Weeds are a classic example of r-selected species: they grow fast, spread rapidly, and get choked out when slower-growing plants get established, or the abundant resources that make their fast growth possible run short. Species common in later seres – K-selected species – maximize efficiency in using resources and energy, even when this means accepting limits on biomass production and expansion into available niches. Temperate zone hardwood trees are a classic example of K-selected species: they grow slowly, take years to reach maturity, and endure for centuries when left undisturbed.
Apply the model of succession to human ecology and a remarkably useful way of looking at the predicament of industrial society emerges. In successional terms, we are in the early stages of the transition between an r-selected sere and the K-selected sere that will replace it. The industrial economy of the present, like any other r-selected sere, maximizes production at the expense of sustainability; the successful economies of the future, emerging in a world without today’s cheap abundant energy, will need to maximize sustainability at the expense of production, like any other K-selected sere.
To put this into the broader picture it’s necessary to factor in the processes of evolutionary change, because climax communities are stable only from the perspective of a human lifetime. Environmental shifts change them; so, often on a much faster timescale, does the arrival of new species on the scene. Sometimes this latter process makes succession move in reverse for a while, as when an invasive sere of r-selected species outcompetes the dominant species of a K-selected climax community; eventually the succession process starts moving forward again, but the new climax community may not look much like the old one.
Apply this to the human ecology of North America, say, and it’s easy to trace the pattern. A climax community of K-selected Native American horticulturalists and hunter-gatherers was disrupted and largely replaced by an invasive sere of European farmers with a much more r-selected ecology. Not long after the new community established itself, and before succession could push it in the direction of a more K-selected ecology, a second invasive sere – the industrial economy – emerged, using resources the first two seres could not access. This second invasive sere, the first of its kind on the planet, was on the far end of the r-selected spectrum; its ability to access and use extravagant amounts of energy enabled it to dominate the farming sere that preceded it, and push the remnants of the old climax community to the brink of extinction.
Like all r-selected seres, though, the industrial economy was vulnerable on two fronts. Like all early seres in succession, it faced the risk that a more efficient K-selected sere would eventually outcompete it, and its habit of using resources at unsustainable rates made it vulnerable to disruptive cycles of boom and bust that would sooner or later hand the advantage to that more efficient successor. Both those processes are well under way. The industrial economy is deep into overshoot, and a crash of some kind is by now pretty much inevitable. At the same time, the more efficient K-selected human ecologies of the future have been sending up visible shoots since the 1970s, in the form of a rapidly spreading network of small organic farms, local farmers markets, appropriate technology, and alternative ways of thinking about the world, among many other things.
Three points deserve to be made in this context. First, one of the differences between human beings and other organisms is that human ecologies are culturally rather than biologically determined; the same individuals are at least potentially able to shift from an r-selected to a K-selected human ecology by changing their means of subsistence. Since it’s unlikely that a K-selected human ecology can or will be expanded fast enough to take up the slack of the disintegrating r-selected industrial system, there’s still likely to be a great deal of human suffering and disruption over the next century or so. Still, those individuals willing to make the transition to a K-selected lifestyle sooner rather than later may find that the disintegration of the industrial system opens up opportunities to survive and even flourish.
The second point circles back to the subject of last week’s Archdruid Report post, Fermi’s paradox. The assumption at the core of the paradox, as mentioned in that post, is that today’s extravagantly energy-wasting system is the wave of the future, and more advanced civilizations than ours will have even more energy and use it even more lavishly. The concept of succession suggests a radically different view of what an advanced civilization might look like. Modern industrial society here on Earth is the exact equivalent of the first sere of pioneer weeds on the vacant lot described above – fast-growing, resource-hungry, inefficient, and destined to be supplanted by more efficient K-selected seres as the process of succession unfolds.
A truly advanced civilization, here or elsewhere, might well have more in common with a climax community: it might use very modest amounts of energy and resources with high efficiency, maximize sustainability, and build for the long term. Such a civilization would be very hard to detect across interstellar distances, and the limits to the energy resources available to it make it vanishingly unlikely that it would attempt to cross those distances; this would hardly make it a failure as a civilization, except in the eyes of those for whom the industrial-age fantasies of science fiction trump all other concerns.
The third point leads into issues that will be central to a great many future posts on this blog. The climax community that emerges after a period of prolonged ecological disruption and the arrival of new biotic assemblages rarely has much in common with the climax community that prevailed before the disruptions began. In the same way, and for most of the same reasons, claims that the deindustrial world will necessarily end up as an exact equivalent of some past society – be that medieval feudalism, tribal hunter-gatherer cultures, or anything else – need to be taken with more than the usual grain of salt. Much of the heritage of today’s industrial societies will likely prove unsustainable in the future ahead of us, but not all; some technologies of the present and recent past could easily continue to play important roles in the human ecologies of the deindustrial future, and many more can help cushion the descent. Tracing out some of the options can help guide today’s choices at a time when constructive action is desperately needed.
Wednesday, September 19, 2007
Solving Fermi's Paradox
One of the besetting sins of today’s intellectual climate is the habit of overspecialization. Too often, people involved in one field get wrapped up in that field’s debates and miss the fact that the universe is not neatly divided into watertight compartments. With this excuse, if any is needed, I want to shift the ground of The Archdruid Report’s discussion a bit and talk about Fermi’s paradox.
First proposed by the nuclear physicist Enrico Fermi in 1950, the paradox points out that there’s a serious mismatch between our faith in technological progress and the universe our telescopes and satellites reveal to us. Our galaxy is around 13 billion years old, and contains something close to 400 billion stars. There’s a lot of debate around how many of those stars have planets, how many of those planets are capable of supporting life, and what might or might not trigger the evolutionary process that leads to intelligent, tool-using life forms, but most estimates grant that there are probably thousands or millions of inhabited planets out there.
Fermi pointed out that an intelligent species that developed the sort of technology we have today, and kept on progressing, could be expected eventually to work out a way to travel from one star system to another; they would also leave traces that would be detectable from earth. Even if interstellar travel proved to be slow and difficult, a species that developed starflight technology could colonize the entire galaxy in a few tens of millions of years – in other words, in a tiny fraction of the time the galaxy has been around. Given 400 billion chances to evolve a species capable of inventing interstellar travel, and 13 billion years to roll the dice, the chances are dizzyingly high that if it’s possible at all, at least one species would have managed the trick long before we came around, and it’s not much less probable that dozens or hundreds of species could have done it. If that’s the case, Fermi pointed out, where are they? And why haven’t we seen the least trace of their presence anywhere in the night sky?
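The “few tens of millions of years” figure is easy to check with a single line of arithmetic. The expansion speed below is my own illustrative assumption, not a number from Fermi; even a very leisurely settlement wave crosses the galaxy in what amounts to a geological eyeblink.

```python
# Rough check of the colonization timescale. The effective wave speed
# (travel plus long pauses to build up each colony) is an assumption.
GALAXY_DIAMETER_LY = 100_000       # rough diameter of the Milky Way's disk
wave_speed_c = 0.005               # assume the settlement wave advances at 0.5% of lightspeed
crossing_time_years = GALAXY_DIAMETER_LY / wave_speed_c
print(f"{crossing_time_years:,.0f} years to sweep the galaxy")   # 20,000,000
```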
Fermi’s paradox has been the subject of lively debate for something like half a century now, and most books on the possibility of extraterrestrial life discuss it. There are at least two reasons for that interest. On the one hand, of course, the possibility that we might someday encounter intelligent beings from another world has been a perennial fascination since the beginning of the industrial age – a fascination that has done much to drive the emergence of the folk theologies masquerading as science in today’s UFO movement.
On another level, though, Fermi’s Paradox can be restated in another and far more threatening way. The logic of the paradox depends on the assumption that unlimited technological progress is possible, and it can be turned without too much difficulty into a logical refutation of the assumption. If unlimited technological progress is possible, then there should be clear evidence of technologically advanced species in the cosmos; there is no such evidence; therefore unlimited technological progress is impossible. Crashingly unpopular though this latter idea may be, I suggest that it is correct – and a close examination of the issues involved casts a useful light on the present crisis of industrial civilization.
Let’s start with the obvious. Interstellar flight involves distances on a scale the human mind has never evolved the capacity to grasp. If the earth were the size of the letter “o” on this screen, for example, the moon would be a little over an inch and three quarters away from it, the sun about 60 feet away, and Neptune, the outermost planet of our solar system now that Pluto has been officially demoted to “dwarf planet” status, a bit more than a third of a mile off. On the same scale, though, Proxima Centauri – the closest star to our solar system – would be more than 3,000 miles away, roughly the distance from southern Florida to the Alaska panhandle. Epsilon Eridani, thought by many astronomers to be the closest star enough like our sun to have a good chance of inhabitable planets, would be more than 7,500 miles away, roughly the distance across the Pacific Ocean from the west coast of North America to the east coast of China.
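Readers who want to verify the scale model can do so with a few lines of arithmetic. The width assumed here for the letter “o” (about 1.6 mm) is my own guess rather than part of the original comparison; with it, the computed distances land close to the figures quoted above.

```python
# Reconstructing the letter-"o" scale model. The width of the "o" is an assumption.
EARTH_DIAMETER_M = 1.2742e7
O_WIDTH_M = 1.6e-3                        # assumed width of the printed letter "o"
LIGHT_YEAR_M = 9.461e15
scale = O_WIDTH_M / EARTH_DIAMETER_M

distances_m = {
    "Moon": 3.844e8,
    "Sun": 1.496e11,
    "Neptune": 4.5e12,
    "Proxima Centauri": 4.24 * LIGHT_YEAR_M,
    "Epsilon Eridani": 10.5 * LIGHT_YEAR_M,
}

for name, d in distances_m.items():
    s = d * scale                         # scaled distance in metres
    if s < 1:
        print(f"{name}: {s * 39.37:.1f} inches")
    elif s < 100:
        print(f"{name}: {s * 3.281:.0f} feet")
    else:
        print(f"{name}: {s / 1609.3:,.2f} miles")
```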
The difference between going to the moon and going to the stars, in other words, isn’t simply a difference in scale. It’s a difference in kind. It takes literally unimaginable amounts of energy either to accelerate a spacecraft to the relativistic speeds needed to make an interstellar trip in less than a geological time scale, or to keep a manned (or alienned) spacecraft viable for the long trip through deep space. The Saturn V rocket that put Apollo 11 on the moon, still the most powerful launch vehicle ever flown, doesn’t even begin to approach the first baby steps toward interstellar travel. This deserves attention, because the most powerful and technologically advanced nation on Earth, riding the crest of one of the greatest economic booms in history and fueling that boom by burning through a half billion years’ worth of fossil fuels at an absurdly extravagant pace, had to divert a noticeable fraction of its total resources to the task of getting a handful of spacecraft across what, in galactic terms, is a whisker-thin gap between neighboring worlds.
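To hang even a rough number on “unimaginable,” here is a back-of-the-envelope calculation for a small uncrewed probe. The probe mass and cruise speed are illustrative assumptions of mine, and the result covers the payload’s kinetic energy alone, ignoring propellant, deceleration, shielding, and everything else that makes the real problem far worse.

```python
# Kinetic energy of a small probe at a modestly relativistic speed.
# The probe mass and speed are assumptions chosen only for illustration.
C = 2.998e8                        # speed of light, m/s
PROBE_MASS_KG = 1_000              # assume a one-tonne uncrewed probe
v = 0.1 * C                        # assume a cruise speed of 10% of lightspeed

gamma = 1 / (1 - (v / C) ** 2) ** 0.5
kinetic_energy_j = (gamma - 1) * PROBE_MASS_KG * C ** 2   # relativistic kinetic energy

MEGATON_TNT_J = 4.184e15
print(f"{kinetic_energy_j:.2e} J  (~{kinetic_energy_j / MEGATON_TNT_J:.0f} megatons of TNT)")
# roughly 4.5e17 J for the payload alone, before any propellant is accounted for
```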
It’s been an article of faith for years now, and not just among science fiction fans, that progress will take care of the difference. Progress, however, isn’t simply a matter of ingenuity or science. It depends on energy sources, and that meant biomass, wind, water and muscle until technical breakthroughs opened the treasure chest of the Earth’s carbon reserves in the 18th century. If the biosphere had found some less flammable way than coal to stash carbon in the late Paleozoic, the industrial revolution of the 18th and 19th century wouldn’t have happened; if nature had turned the sea life of the Mesozoic into some inert compound rather than petroleum, the transportation revolution of the 20th century would never have gotten off the ground. Throughout the history of our species, in fact, each technological revolution has depended on accessing a more concentrated form of energy than the ones previously available.
The modern faith in progress assumes that this process can continue indefinitely. Such an assertion, however, flies in the face of thermodynamic reality. A brief summary of that reality may not be out of place here. Energy can neither be created nor destroyed, and left to itself, it always flows from higher concentrations to lower; this latter rule is the second law of thermodynamics, the law of entropy. A system that has energy flowing through it – physicists call this a dissipative system – can develop eddies in the flow that concentrate energy in various ways. Thermodynamically, living things are entropy eddies; we take energy from the flow of sunlight through the dissipative system of the earth in various ways, and use it to maintain concentrations of energy above ambient levels. The larger and more intense the concentration of energy, on average, the less common it is – this is why mammals are less common than insects, and insects less common than bacteria.
It’s also why big deposits of oil and coal are much less common than small ones, and why oil and coal are much less common than inert substances in earth’s crust. Fossil fuels don’t just happen at random; they exist in the earth because biological processes put them there. Petroleum is the most concentrated of the fossil fuels, and the biggest crude oil deposits – Ghawar in Saudi Arabia, Cantarell in Mexico, the West Texas fields, a handful of others – represented the largest concentrations of free energy on earth at the dawn of the industrial age. They are mostly gone now, along with a great many smaller concentrations, and decades of increasingly frantic searching have failed to turn up anything on the same scale. Nor is there another, even more concentrated energy resource waiting in the wings.
If progress depends on getting access to ever more concentrated energy resources, in other words, we have reached the end of our rope. The resources now being proposed as ways to power industrial civilization are all much more diffuse than fossil fuels. (Nuclear power advocates need to remember that uranium-235, which has a great deal of energy when refined and purified, exists in very low concentrations in nature and requires a hugely expensive infrastructure to turn it into usable energy, so the whole system yields very little more energy than goes into it; fusion, if it even proves workable at all, will require an infrastructure a couple of orders of magnitude more expensive than fission, and the same is true of breeder reactors.) More generally, it takes energy to concentrate energy. Once we no longer have the nearly free energy of fossil fuels concentrated for us by half a billion years of geology, concentrating energy beyond a certain fairly modest point will rapidly become a losing game in thermodynamic terms. At that point, insofar as progress is measured by the kind of technology that can cross deep space, progress will be over.
We can apply this same logic to Fermi’s paradox and reach a conclusion that makes sense of the data. Since life creates localized concentrations of energy, each planet inhabited by life forms will develop concentrated energy resources. It’s reasonable to assume that our planet is somewhere close to the average, so we can postulate that some worlds will have more stored energy than ours, and some will have less. A certain fraction of planets will evolve intelligent, tool-using species that figure out how to use their planet’s energy reserves. Some will have more and some less, some will use their reserves quickly and some slowly, but all will reach the point we are at today – the point at which it becomes painfully clear that the biosphere of a planet can only store up a finite amount of concentrated energy, and when it’s gone, it’s gone.
Chances are that a certain number of the intelligent species in our galaxy have used these stored energy reserves to attempt short-distance spaceflight, as we have done. Some with a great deal of energy resources may be able to establish colonies on other worlds in their own systems, at least for a time. The difference between the tabletop and football-field distances needed to travel within a solar system, and the continental distances needed to cross from star to star, though, can’t be ignored. Given the fantastic energies required, the chance that any intelligent species will have access to enough highly concentrated energy resources to keep an industrial society progressing long enough to evolve starflight technology, and then actually accomplish the feat, is so close to zero that the silence of the heavens makes perfect sense.
These considerations suggest that White’s law, a widely accepted principle in human ecology, can be expanded in a useful way. White’s law holds that the level of economic development in a society is measured by the energy per capita it produces and uses. Since the energy per capita of any society is determined by its access to concentrated energy resources – and this holds true whether we are talking about wild foods, agricultural products, fossil fuels, or anything else – it’s worth postulating that the maximum level of economic development possible for a society is measured by the abundance and concentration of energy resources to which it has access.
It’s also worth postulating, along the lines suggested by Richard Duncan’s Olduvai theory, that a society’s maximum level of economic development will be reached, on average, at the peak of a bell-shaped curve with a height determined by the relative renewability of the society’s energy resources. A society wholly dependent on resources that renew themselves over the short term may trace a “bell-shaped curve” in which the difference between peak and trough is so small it approximates a straight line; a society dependent on resources renewable over a longer timescale may cycle up and down as its resource base depletes and recovers; a society dependent on nonrenewable resources can be expected to trace a ballistic curve in which the height of ascent is matched, or more than matched, by the depth of the following decline.
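As an illustration of the third, “ballistic” case, here is the standard logistic depletion curve, the same general shape Hubbert fitted to oil production. This is a generic sketch with arbitrary numbers, not Duncan’s own model.

```python
# Logistic (Hubbert-style) depletion of a nonrenewable resource.
# URR and k are arbitrary; only the shape of the curve matters here.
URR = 2000.0        # ultimately recoverable resource, arbitrary units
k = 0.08            # intrinsic growth rate of extraction
Q = 1.0             # cumulative amount extracted so far

production = []
for year in range(200):
    p = k * Q * (1 - Q / URR)   # extraction grows, then is damped as the resource runs down
    Q += p
    production.append(p)

peak = production.index(max(production))
print(f"production peaks in year {peak} at {max(production):.1f} units/year,")
print(f"then declines; {Q / URR:.0%} of the resource is gone by year 200")
```

Feed the same extraction rule a resource that renews itself each year and the hump flattens out toward the near-straight line described above for the first case.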
Finally, the suggestions made here raise the possibility that for more than a century and a half now, our own civilization has been pursuing a misguided image of what an advanced technology looks like. Since the late 19th century, when early science fiction writers such as Jules Verne began to popularize the concept, “advanced technology” and “extravagant use of energy” have been for all practical purposes synonyms, and today Star Trek fantasies tend to dominate any discussion of what a mature technological society might resemble. If access to concentrated energy sources inevitably peaks and declines in the course of a technological society’s history, though, a truly mature technology may turn out to be something very different from our current expectations. We’ll explore this further in next week’s post.
Wednesday, September 12, 2007
The Innovation Fallacy
The core concept that has to be grasped to make sense of the future looming up before us, it seems to me, is the concept of limits. Central to ecology, and indeed all the sciences, this concept has failed so far to find any wider place in the mindscape of industrial society. The recent real estate bubble is simply another example of our culture’s cult of limitlessness at work, as real estate investors insisted that housing prices were destined to keep on rising forever. Of course those claims proved to be dead wrong, as they always are, but the fact that they keep on being made – it’s been only a few years, after all, since the same rhetoric was disproven just as dramatically in the tech stock bubble of the late 1990s – shows just how allergic most modern people are to the idea that there’s an upper limit to anything.
It’s this same sort of thinking that drives the common belief that limits on industrial society’s access to energy can be overcome by technological innovations. This claim looks plausible at first glance, since the soaring curve of energy use that defines recent human history can be credited to technological innovations that allowed human societies to get at the huge reserves of fossil fuels stored inside the planet. The seemingly logical corollary is that we can just repeat the process, coming up with innovations that will give us ever increasing supplies of energy forever.
Most current notions about the future are based on some version of this belief. The problem, and it’s not a small one, is that the belief itself is based on a logical fallacy.
One way to see how this works – or, more precisely, doesn’t work – is to trace the same process in a setting less loaded with emotions and mythic narratives than the future of industrial society. Imagine for a moment, then, that we’re discussing an experiment involving microbes in a petri dish. The culture medium in the dish contains 5% of a simple sugar that the microbes can eat, and 95% of a more complex sugar they don’t have the right enzymes to metabolize. We put a drop of fluid containing microbes into the dish, close the lid, and watch. Over the next few days, a colony of microbes spreads through the culture medium, feeding on the simple sugar.
Then a mutation happens, and one microbe starts producing an enzyme that lets it feed on the more abundant complex sugar. Drawing on this new food supply, the mutant microbe and its progeny spread rapidly, outcompeting the original strain, until finally the culture medium is full of mutant microbes. At this point, though, the growth of the microbes is within hailing distance of the limits of the supply of complex sugar. As we watch the microbes through our microscopes, we might begin to wonder whether they can produce a second mutation that will let them continue to thrive. Yet this obvious question misleads, because there is no third sugar in the culture medium for another mutation to exploit.
The point that has to be grasped here is as crucial as it is easy to miss. The mutation gave the microbes access to an existing supply of highly concentrated food; it didn’t create the food out of thin air. If the complex sugar hadn’t existed, the mutation would have yielded no benefit at all. As the complex sugar runs out, further mutations are possible – some microbes might end up living on microbial waste products; others might kill and eat other microbes; still others might develop some form of photosynthesis and start creating sugars from sunlight – but all these possibilities draw on resources much less concentrated and abundant than the complex sugar that made the first mutation succeed so spectacularly. Nothing available to the microbes will allow them to continue to flourish as they did in the heyday of the first mutation.
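For those who like their thought experiments numerical, here is a toy version of the petri dish. Every rate in it is arbitrary and of my own choosing; the only thing it is meant to show is the shape of the story just told: slow growth on the scarce simple sugar, a boom once the mutation unlocks the abundant complex sugar, and a crash when that larger store runs out in turn.

```python
# Toy version of the two-sugar experiment. All rates are arbitrary; only the
# three-act shape of the curve (boom, bigger boom, bust) is the point.
simple_sugar, complex_sugar = 5.0, 95.0   # the 5% / 95% split from the text
population, mutated = 0.1, False
history = []

for day in range(120):
    if not mutated and simple_sugar < 1.0:
        mutated = True                    # the enzyme mutation appears as the simple sugar runs short
    demand = population * 0.1
    available = simple_sugar + (complex_sugar if mutated else 0.0)
    eaten = min(demand, available)
    from_simple = min(eaten, simple_sugar)
    simple_sugar -= from_simple
    if mutated:
        complex_sugar -= eaten - from_simple
    fed = eaten / demand if demand else 1.0
    population *= (1 + 0.3 * fed) * fed   # growth while fed, die-off in proportion to the shortfall
    history.append(round(population, 1))

print(history[::10])                      # boom, bigger boom, then bust; no third sugar remains
```

Change the 5/95 split or the growth rate and the dates move around, but the three-act shape stays put.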
Does this same logic apply to human beings? A cogent example from 20th century history argues that it does. When the Second World War broke out in 1939, Germany arguably had the most innovative technology on the planet. All through the war, German technology stayed far ahead of the opposition, fielding jet aircraft, cruise missiles, ballistic missiles, guided bombs, and many other advances years before anybody else. Its great vulnerability was a severe shortage of petroleum reserves, and even this area saw dramatic technological advances: Germany developed effective methods of CTL (coal to liquids) fuel production, and put them to work as soon as it became clear that the oil fields of southern Russia were permanently out of reach.
The results are instructive. Despite every effort to replace petroleum with CTL and other energy resources, the German war machine ran out of gas. By 1944 the Wehrmacht was struggling to find fuel even for essential operations. The outcome of the Battle of the Bulge in the winter of 1944-5 is credited by many military historians to the raw fact that the German forces didn’t have enough fuel to follow up the initial success of their Ardennes offensive. The most innovative technology on the planet, backed up with substantial coal reserves and an almost limitless supply of slave labor, proved unable to find a replacement for cheap abundant petroleum.
It’s worthwhile to note that more than sixty years later, no one has done any better. Compare the current situation with the last two energetic transitions – the transition from wind and water power to coal in the late 18th and early 19th centuries, and the transition from coal to petroleum at the beginning of the 20th – and a key distinction emerges. In both the earlier cases, the new energy resource took a dominant place in the industrial world’s economies while the older ones were still very much in use. The world wasn’t in any great danger of running out of wind and water in 1750, when coal became the mainspring of the industrial revolution, and peak coal was still far in the future in 1900 when oil seized King Coal’s throne.
The new fuels took over because they were more concentrated and abundant than the competition, and those factors made them more economical than older resources. In both cases a tide of technological advance followed the expansion of energy resources, and was arguably an effect of that expansion rather than its cause. In the 1950s and 1960s many people expected nuclear power to repeat the process – those of my readers who were around then will recall the glowing images of atomic-powered cities in the future that filled the popular media in those days. Nothing of the kind happened, because nuclear power proved to be much less economical than fossil fuels. Only massive government subsidies, alongside the hidden “energy subsidy” it received from an economy powered by cheap fossil fuels, made nuclear power look viable at all.
Mind you, uranium contains a very high concentration of energy, though the complex systems needed to mine, process, use, and clean up after it probably use more energy than the uranium itself contains. Most other resources touted as solutions to peak oil either contain much lower concentrations of energy per unit than petroleum, or occur in much lower abundance. This isn’t accidental; the laws of thermodynamics mandate that on average, the more concentrated an energy source is, the less abundant it will be, and vice versa. They also mandate that all energy transfers move from higher to lower concentrations, and this means that you can’t concentrate energy without using energy to do it. Really large amounts of concentrated energy occur on earth only as side effects of energy cycles in the biosphere that unfold over geological time – that’s where coal, oil, and natural gas come from – and then only in specific forms and locations. It took 500 million years to create our planet’s stockpile of fossil fuels. Once they’re gone, what’s left is mostly diffuse sources such as sunlight and wind, and trying to concentrate these so they can power industrial society is like trying to make a river flow uphill.
Thus the role of technological innovation in the rise of industrial economies is both smaller and more nuanced than it’s often claimed to be. Certain gateway technologies serve the same function as the mutations in the biological model used earlier in this post; they make it possible to draw on already existing resources that weren’t accessible to other technological suites. At the same time, it’s the concentration and abundance of the resource in question that determines how much a society will be able to accomplish with it. Improvements to the gateway technology can affect this to a limited extent, but such improvements suffer from a law of diminishing returns backed up, again, by the laws of thermodynamics.
Innovation is a necessary condition for the growth and survival of industrial society, in other words, but not a sufficient condition. If energy resources aren’t available in sufficient quality and quantity, innovation can make a successful society but it won’t make or maintain an industrial one. It’s worth suggesting that the maximum possible level of economic development in a society is defined by the abundance and concentration of energy resources available to that society. It’s equally possible, though this is rather more speculative, that the maximum possible technological level of an intelligent species anywhere in the universe is defined by the abundance and concentration of energy resources on the planet where that species evolves. (We’ll be talking more about this in next week’s post.)
What we’re discussing here is an application of one of the central principles of ecology. Liebig’s law – named after the 19th century German chemist Justus von Liebig, who popularized it – holds that the maximum growth of a community of organisms is limited by whatever necessary factor in the environment is in shortest supply. A simpler way of stating this law is that necessary resources aren’t interchangeable. If your garden bed needs phosphorus, adding nitrogen to it won’t help, and if it’s not getting enough sunlight, all the fertilizer in the world won’t boost growth beyond a certain point.
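Liebig’s law is simple enough to state in a single line of code; the factor names and quantities below are purely illustrative.

```python
# Liebig's law of the minimum: growth is set by the scarcest required factor,
# not the sum or average of all of them. Numbers here are illustrative only.
def liebig_growth(supplies, requirements):
    """Return the fraction of potential growth allowed by the scarcest input."""
    return min(supplies[f] / requirements[f] for f in requirements)

supplies     = {"nitrogen": 8.0, "phosphorus": 1.0, "sunlight": 6.0}
requirements = {"nitrogen": 2.0, "phosphorus": 2.0, "sunlight": 3.0}

print(liebig_growth(supplies, requirements))   # 0.5 -- phosphorus is the limit
supplies["nitrogen"] = 16.0                    # doubling nitrogen changes nothing
print(liebig_growth(supplies, requirements))   # still 0.5
```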
For most of human history, the resource that has been in shortest supply has arguably been energy. For the last three hundred years, and especially for the last three-fourths of a century, that’s been less true than ever before. Today, however, the highly concentrated and abundant energy resources stockpiled by the biosphere over the last half billion years or so are running low, and there are no other resources on or around Earth at the same level of concentration and abundance. Innovation is vital if we’re to deal with the consequences of that reality, but it can’t make the laws of thermodynamics run backwards and give us an endless supply of concentrated energy just because we happen to want one.
It’s this same sort of thinking that drives the common belief that limits on industrial society’s access to energy can be overcome by technological innovations. This claim looks plausible at first glance, since the soaring curve of energy use that defines recent human history can be credited to technological innovations that allowed human societies to get at the huge reserves of fossil fuels stored inside the planet. The seemingly logical corollary is that we can just repeat the process, coming up with innovations that will give us ever increasing supplies of energy forever.
Most current notions about the future are based on some version of this belief. The problem, and it’s not a small one, is that the belief itself is based on a logical fallacy.
One way to see how this works – or, more precisely, doesn’t work – is to trace the same process in a setting less loaded with emotions and mythic narratives than the future of industrial society. Imagine for a moment, then, that we’re discussing an experiment involving microbes in a petri dish. The culture medium in the dish contains 5% of a simple sugar that the microbes can eat, and 95% of a more complex sugar they don’t have the right enzymes to metabolize. We put a drop of fluid containing microbes into the dish, close the lid, and watch. Over the next few days, a colony of microbes spreads through the culture medium, feeding on the simple sugar.
Then a mutation happens, and one microbe starts producing an enzyme that lets it feed on the more abundant complex sugar. Drawing on this new food supply, the mutant microbe and its progeny spread rapidly, outcompeting the original strain, until finally the culture medium is full of mutant microbes. At this point, though, the growth of the microbes is within hailing distance of the limits of the supply of complex sugar. As we watch the microbes through our microscopes, we might begin to wonder whether they can produce a second mutation that will let them continue to thrive. Yet this obvious question misleads, because there is no third sugar in the culture medium for another mutation to exploit.
The point that has to be grasped here is as crucial as it is easy to miss. The mutation gave the microbes access to an existing supply of highly concentrated food; it didn’t create the food out of thin air. If the complex sugar hadn’t existed, the mutation would have yielded no benefit at all. As the complex sugar runs out, further mutations are possible – some microbes might end up living on microbial waste products; others might kill and eat other microbes; still others might develop some form of photosynthesis and start creating sugars from sunlight – but all these possibilities draw on resources much less concentrated and abundant than the complex sugar that made the first mutation succeed so spectacularly. Nothing available to the microbes will allow them to continue to flourish as they did in the heyday of the first mutation.
Does this same logic apply to human beings? A cogent example from 20th century history argues that it does. When the Second World War broke out in 1939, Germany arguably had the most innovative technology on the planet. All through the war, German technology stayed far ahead of the opposition, fielding jet aircraft, cruise missiles, ballistic missiles, guided bombs, and many other advances years before anybody else. Their great vulnerability was a severe shortage of petroleum reserves, and even this area saw dramatic technological advances: Germany developed effective methods of CTL (coal to liquids) fuel production, and put them to work as soon as it became clear that the oil fields of southern Russia were permanently out of reach.
The results are instructive. Despite every effort to replace petroleum with CTL and other energy resources, the German war machine ran out of gas. By 1944 the Wehrmacht was struggling to find fuel even for essential operations. The outcome of the Battle of the Bulge in the winter of 1944-5 is credited by many military historians to the raw fact that the German forces didn’t have enough fuel to follow up the initial success of their Ardennes offensive. The most innovative technology on the planet, backed up with substantial coal reserves and an almost limitless supply of slave labor, proved unable to find a replacement for cheap abundant petroleum.
It’s worthwhile to note that more than sixty years later, no one has done any better. Compare the current situation with the last two energetic transitions – the transition from wind and water power to coal in the late 18th and early 19th centuries, and the transition from coal to petroleum at the beginning of the 20th – and a key distinction emerges. In both the earlier cases, the new energy resource took a dominant place in the industrial world’s economies while the older ones were still very much in use. The world wasn’t in any great danger of running out of wind and water in 1750, when coal became the mainspring of the industrial revolution, and peak coal was still far in the future in 1900 when oil seized King Coal’s throne.
The new fuels took over because they were more concentrated and abundant than the competition, and those factors made them more economical than older resources. In both cases a tide of technological advance followed the expansion of energy resources, and was arguably an effect of that expansion rather than its cause. In the 1950s and 1960s many people expected nuclear power to repeat the process – those of my readers who were around then will recall the glowing images of atomic-powered cities in the future that filled the popular media in those days. Nothing of the kind happened, because nuclear power proved to be much less economical than fossil fuels. Only massive government subsidies, alongside the hidden “energy subsidy” it received from an economy powered by cheap fossil fuels, made nuclear power look viable at all.
Mind you, uranium contains a very high concentration of energy, though the complex systems needed to mine, process, use, and clean up after it probably use more energy than the uranium itself contains. Most other resources touted as solutions to peak oil either contain much lower concentrations of energy per unit than petroleum, or occur in much lower abundance. This isn’t accidental; the laws of thermodynamics mandate that on average, the more concentrated an energy source is, the less abundant it will be, and vice versa. They also mandate that all energy transfers move from higher to lower concentrations, and this means that you can’t concentrate energy without using energy to do it. Really large amounts of concentrated energy occur on earth only as side effects of energy cycles in the biosphere that unfold over geological time – that’s where coal, oil, and natural gas come from – and then only in specific forms and locations. It took 500 million years to create our planet’s stockpile of fossil fuels. Once they’re gone, what’s left is mostly diffuse sources such as sunlight and wind, and trying to concentrate these so they can power industrial society is like trying to make a river flow uphill.
Thus the role of technological innovation in the rise of industrial economies is both smaller and more nuanced than it’s often claimed to be. Certain gateway technologies serve the same function as the mutations in the biological model used earlier in this post; they make it possible to draw on already existing resources that weren’t accessible to other technological suites. At the same time, it’s the concentration and abundance of the resource in question that determines how much a society will be able to accomplish with it. Improvements to the gateway technology can affect this to a limited extent, but such improvements suffer from a law of diminishing returns backed up, again, by the laws of thermodynamics.
Innovation is a necessary condition for the growth and survival of industrial society, in other words, but not a sufficient condition. If energy resources aren’t available in sufficient quality and quantity, innovation can make a successful society but it won’t make or maintain an industrial one. It’s worth suggesting that the maximum possible level of economic development in a society is defined by the abundance and concentration of energy resources available to that society. It’s equally possible, though this is rather more speculative, that the maximum possible technological level of an intelligent species anywhere in the universe is defined by the abundance and concentration of energy resources on the planet where that species evolves. (We’ll be talking more about this in next week’s post.)
What we’re discussing here is an application of one of the central principles of ecology. Liebig’s law – named after the 19th century German chemist Justus von Liebig, who popularized it – holds that the maximum growth of a community of organisms is limited by whatever necessary factor in the environment is in shortest supply. A simpler way of stating this law is that necessary resources aren’t interchangeable. If your garden bed needs phosphorus, adding nitrogen to it won’t help, and if it’s not getting enough sunlight, all the fertilizer in the world won’t boost growth beyond a certain point.
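For readers who like to see the logic spelled out, here is a minimal sketch in Python of Liebig’s law at work, with hypothetical numbers for a garden bed of the sort just described; the names and figures are mine, chosen only for illustration.

# A minimal sketch of Liebig's law of the minimum, with made-up numbers.
# Growth is capped by the scarcest necessary resource, not the total supply:
# adding more of an already plentiful factor does nothing for the outcome.

def max_growth(supplies, requirements):
    """Growth supportable by the most limiting resource.

    supplies:      units of each resource on hand
    requirements:  units of that resource needed per unit of growth
    """
    return min(supplies[name] / need for name, need in requirements.items())

# A hypothetical garden bed: ample nitrogen and sunlight, short on phosphorus.
supplies = {"nitrogen": 90.0, "phosphorus": 10.0, "sunlight": 60.0}
requirements = {"nitrogen": 3.0, "phosphorus": 2.0, "sunlight": 1.0}

print(max_growth(supplies, requirements))   # 5.0, set by phosphorus

supplies["nitrogen"] = 900.0                # pile on ten times the nitrogen...
print(max_growth(supplies, requirements))   # ...still 5.0; the limit has not moved

Even a tenfold increase in the plentiful factor leaves the result unchanged, because the scarcest factor, not the total supply, sets the ceiling.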
For most of human history, the resource that has been in shortest supply has arguably been energy. For the last three hundred years, and especially for the last three-fourths of a century, that’s been less true than ever before. Today, however, the highly concentrated and abundant energy resources stockpiled by the biosphere over the last half billion years or so are running low, and there are no other resources on or around Earth at the same level of concentration and abundance. Innovation is vital if we’re to deal with the consequences of that reality, but it can’t make the laws of thermodynamics run backwards and give us an endless supply of concentrated energy just because we happen to want one.
Wednesday, September 5, 2007
Searching for Scapegoats
One of the things I find most interesting about economic crises is the way that the same rhetoric gets recycled in each one, as though official and unofficial media alike ran out of new ideas long ago and just repeat the same stories over again with the names changed and the old serial numbers filed off. When the mortgage bubble now deflating around us was still filling with air, the media blared that a new economic model had arrived, that this particular asset class would keep appreciating forever, and that it really was different this time: all the same mantras heard back during the tech stock bubble of the late 1990s, or for that matter in every other speculative frenzy since the Dutch tulip mania of the 17th century.
Equally, once the mortgage bubble started leaking air and taking mortgage companies and hedge funds down with it, some US official or other tempted fate in a big way by proclaiming that “the fundamentals are sound.” The financial authorities said exactly the same thing in the aftermath of the 1929 stock market crash, and if there’s any phrase in economic history that translates better as “run for your lives,” I don’t know it. You’d think we would have learned from the tech stock boom and bust that it’s never different this time, and when an economy is propped up by the wishful thinking that drives all speculative frenzies, the fundamentals are never sound, and yet each bubble conjures up the same thinking and the same phrases all over again.
One song on this broken record, though, deserves special attention here. In The Great Crash 1929, one of the best (and certainly most readable) works of economic history ever written, John Kenneth Galbraith talked about “the notion that somewhere on Wall Street...there was a deus ex machina who somehow engineered the boom and bust.” His comment is apposite: “No one was responsible for the great Wall Street crash. No one engineered the speculation that preceded it. Both were the product of the free choice and decision of hundreds of thousands of individuals. The latter were not led to the slaughter. They were impelled to it by the seminal lunacy which has always seized people who are seized in turn with the notion that they can become very rich.”
Still, it’s not part of the standard rhetoric of economic crisis to encourage people to contemplate their own folly, and so we’ve already started to see claims that the great mortgage bubble and bust was deliberately engineered. There’s a twist, though, because the usual rhetoric of the past – the notion that the motive behind all this deviousness is pure greed – has been shouldered aside by the claim that the boom and bust were engineered to turn Americans into the debt slaves of a totalitarian state under the polymorphous banner of the New World Order.
That’s a claim worth noticing, not least because every significant crisis of the last dozen years or so has been interpreted by many of the same people in exactly the same way. It’s worth noticing as well because, like so much of what now passes for left-wing thought, this claim was pioneered by the John Birch Society, for many decades the cutting edge of the American extreme right. The phrase “new world order” itself was given its current conspiratorial spin by the Society’s founder Robert Welch in 1972, as part of a florid system of conspiracy theory that blamed US corporations and the Trilateral Commission for everything Welch thought was wrong with the world. The ease with which these ideas of the far right were imported lock, stock and barrel by the other end of the political spectrum after the implosion of the New Left at the end of the 1960s is one of the richer ironies of recent cultural history.
Yet there’s another point worth noticing here, and that’s the extent to which the rhetoric of conspiracy has become a convenient way to evade any suggestion of personal responsibility for the consequences of one’s actions. Galbraith’s comments are relevant here. People plunged into real estate speculation because they thought they could get lots of money for nothing, and other people bought houses they couldn’t afford because they thought that, by the magic of repeated refinancing, they would never have to pay for them. Of course they were helped to embrace one or both of these delusions by the army of scam artists who always cluster around speculative bubbles, but the old maxim still holds: nobody can con you unless you first con yourself.
This habit of invoking conspiracy to dodge responsibility troubles me, not least because, as I’ve suggested in a previous post, the expansion of the industrial system over the last three hundred years or so – the Age of Exuberance, to use William Catton’s evocative phrase – resembles nothing so much as a speculative bubble on a titanic scale. We’ve seen how countless people in recent years have fondly credited their own financial brilliance for paper profits that turned out to be the product of a Ponzi scheme writ large. In the same way, the pundits and publicists of industrial society have insisted all along that our technological cleverness and creativity are responsible for a boom that, in the cold light of a deindustrial morning, will more likely be seen as the fantastically irresponsible exploitation of 500 million years’ worth of irreplaceable fossil fuels in an eyeblink of geological time.
The parallels run deep. Just as a speculative bubble lasts only so long as a steady stream of new speculators pour their money into the game and keep expanding the total amount of funds in play, the fossil fuel bubble of the last 300 years has depended on a steady stream of new energy reserves that keep expanding the total amount of cheap abundant energy available to the world’s industrial societies. Just as a speculative bubble routinely leads to extravagant misallocations of resources – the immense square footage of new home construction in the last five years or so might as well be the poster child for that just now – the fossil fuel bubble has resulted in resource allocations that our descendants will likely find at least as profoundly misguided.
Just as a speculative bubble ends in a sharp economic contraction unless some new financial gimmick can be found to reinflate the economy, finally, the most likely aftermath of the fossil fuel boom is a long and difficult contraction that will shake our societies to the core, since the likelihood that another energy source will be found in time to reinflate the fossil fuel bubble doesn’t look high just now. There’s even a parallel in the way one bubble feeds into another: just as the 1925 Florida land boom gave way to the 1927-9 stock market boom, and then to bust, and the tech stock boom of the late 1990s turned into the real estate boom now unraveling around us, the coal boom of 1725-1900 yielded to the petroleum boom of 1900-2005. If the parallel completes itself, the bust that follows will likely be on an epic scale – and it seems all too likely that this may work out in an equally massive hunt for scapegoats to blame for it all.
There’s already a substantial corner of the peak oil scene for whom discussing the end of the age of cheap abundant energy is inseparable from denouncing the current US government for an assortment of crimes, real or imagined. I’m no fan of the Bush administration – longtime readers of this blog know that I consider its energy and environmental policies disastrously misguided – but it seems to me that the effort to paint the leftover Reagan-era bureaucrats and politically naive right-wing intellectuals who run that administration as the modern liberal equivalent of Satan incarnate has less to do with their behavior than with the irruption of a frankly paranoid style of thinking into the American cultural mainstream.
Now to some extent this simply tracks the rise of a rhetoric of hatred in American politics, a process to which both parties have contributed mightily since 1970 or so. The vitriol heaped on the Bush administration by Democrats since 2001 matches with fair exactness the denunciations of the Clinton administration by Republicans in the eight years before then. Today’s claims that Bush is about to establish a dictatorship have an equally exact parallel in claims, circulated feverishly in far right circles after the 1992 election, that Clinton was about to do the same thing. Since neither party offers anything like a constructive approach to the problems besetting American society just now, it’s probably inevitable that they would both try to redirect the conversation to the supposed villainies of the other side.
Still, I’ve come to think that there’s more involved here than an escalation of partisan bickering. Like the conspiracy rhetoric beginning to circulate about the end of the housing bubble, it’s an evasion of responsibility – very few Americans, after all, had to be dragged kicking and screaming into eager participation in the fossil fuel powered orgy of consumption that peaked in the decade just past – and it’s also a way of not dealing with the hard work that has to be done if we’re to move into the end of the age of cheap abundant energy with anything worth saving still intact. On the list of work that has to be done as our society starts skidding down the far side of Hubbert’s peak, arguing over who’s most to blame may not deserve a very important place, but it’s also a good deal easier than some of the things that belong much higher up.
Finally, it may be worth thinking about where today’s search for scapegoats could lead. Imagine for a moment that the rhetoric we’re discussing succeeds in pinning the blame for peak oil on the Bush administration and American business leaders. It’s unpleasantly easy to imagine Republican politicians hanged en masse for crimes against humanity, oil executives and their families dragged from their homes and torn to pieces by screaming mobs, and the like. Such things have happened far too often in recent history to be dismissed as abstractions; they could all too readily shred what little is left of the basic civility any society needs to function at all, but they would not bring us one step closer to a meaningful response to the predicament of industrial society. I can only hope that enough people are willing to step back from today’s rhetoric of partisan hatred to make such things a little less likely, in a future that will be difficult enough without them.