Wednesday, October 30, 2013

The Future is a Foreign Country

I’m pleased to note that the conversation about ephemeralization and catabolic collapse launched a few weeks back by futurist Kevin Carson, and continued in several blogs since then, has taken a promising new turn. The vehicle for that sudden swerve was an essay by Lakis Polycarpou, titled Catabolic Ephemeralization: Carson versus Greer, which set out to find common ground between Carson’s standpoint and mine. In the process, to his credit, Polycarpou touched on a crucial point that has been too often neglected in recent discussions about the future.

That’s not to say that his take on the future couldn’t use some serious second thoughts. I noted in my original response to Carson’s post the nearly visceral inability to think in terms of whole systems that pervades today’s geek culture, and that curious blindness is well represented in Polycarpou’s essay. He argues, for example, that since a small part of Somalia has cell phone service, and cell phone service is more widely available today than grid electricity or clean drinking water, cutting-edge technology ought to be viable in a postpetroleum world. “If Greer is right that modern telecommunications is full of hidden embodied energy and capital costs,” he wonders aloud, “how is this possible?”

As it happens, that’s an easy question to answer. Somalia, even in its present turbulent condition, is part of a global economy fueled by the recklessly rapid extraction of half a billion years of fossil sunlight and equally unsustainable amounts of other irreplaceable natural resources.  It speaks well of the resourcefulness of the Somalian people that they’ve been able to tap into some of those resource flows, in the teeth of a global economy that’s so heavily tilted against them; that said, it bears remembering that the cell phone towers in Somalia are not being manufactured in Somalian factories from Somalian resources using Somalian energy sources. A sprawling global industrial network of immensely complex manufacturing facilities and world-spanning supply chains forms the whole system that lies behind those towers, and without that network or some equivalent capable of mobilizing equivalent resources and maintaining comparable facilities, those towers would not exist.

It’s easy to make dubious generalizations based on cell phone service, mind you, because all that’s being measured by that metric is whether a given group of people are within range of a bit of stray microwave radiation—not whether they have access to cell phones, or whether the infrastructure could handle the traffic if they did. That’s the kind of blindness to whole systems that pervades so much contemporary thinking. A microwave signal fluttering through the air above an impoverished Somalian neighborhood does not equal a sustainable technological economy; only when you can account for every requirement of the whole system that produces that signal can you begin to talk about whether that system can be preserved in working order through a harsh era of global economic contraction and political turmoil.

Polycarpou dodges this, and several other awkward points of this nature. He insists, for example, that nobody actually knows whether the mid-19th century technology needed to lay and operate undersea cables is really less complex than a space program capable of building, orbiting, and operating communications satellites. Since the technologies in question are a matter of public knowledge, and a few minutes of online research is all that’s needed to put them side by side, this is breathtakingly disingenuous. Still, I’d encourage my readers to keep reading past this bit, and also past the ad hominem handwaving about the energy costs of the internet that follows it. It’s in the last part of Polycarpou’s essay, where he begins to talk about alternatives and the broader shape of the future, that he begins to speak in a language familiar to regular readers of The Archdruid Report.

What he’s suggesting in this final part of his essay, if I’m reading it correctly, is that the infrastructure of the modern industrial world is unsustainable, and will have to be replaced by local production of essential goods and services on a scale that will seem impoverished by modern standards. With this claim I have no disagreements at all, and indeed it’s what I’ve been suggesting here on The Archdruid Report for the last seven and a half years. The points at issue between my view of the future and Polycarpou’s are what technologies will be best suited to the deindustrial world, and just how much more impoverished things are going to be by the time we finish the transition. These are questions of detail, not of substance.

Furthermore, they’re not questions that can be settled conclusively in advance. Mind you, it’s almost certainly a safe assumption that the kind of computer hardware we use today will no longer be manufactured once today’s industrial infrastructure stops being a paying proposition economically; current integrated-circuit technology requires a suite of extraordinarily complex technologies and a dizzying assortment of raw materials from the far corners of the globe, which will not be available to village-scale workshops dependent on local economies. The point that too rarely gets noticed is that the kind of information processing technology we have now isn’t necessarily the only way that the same principles can be put to work. I’ve fielded claims here several times that mechanical computers capable of tolerably complex calculations can be made of such simple materials as plywood disks; I have yet to see a working example, but I’m open to the possibility that something of the sort could be done.

Polycarpou comments, along the same lines, that people in a variety of countries these days are setting up parallel internets using rooftop wifi antennas, and he suggests that this is one direction in which a future internet might run, at least in the short term. He’s almost certainly right, provided that those last six words are kept in mind. It’s vanishingly unlikely that anybody will be able to keep manufacturing the necessary hardware for wifi systems through the twilight years of the industrial age, but while the hardware exists, it will certainly be used, and it might buy enough time for something else, something that can be locally manufactured from local resources, to be invented and deployed. My guess is that it’ll look much more like a ham radio message net than the internet as we currently know it, but that’s a question the future will have to settle.
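For those who want a concrete sense of the difference, here is a minimal sketch in Python of the store-and-forward logic a ham-radio-style message net relies on: traffic waits at each station until the next scheduled relay pass, rather than streaming across the planet in milliseconds. The station names and the once-a-day schedule here are purely hypothetical; this is an illustration of the principle, not a design for any actual net.

```python
# A minimal store-and-forward sketch of the "ham radio message net" model:
# messages wait at each station until the next scheduled relay pass, rather
# than moving in real time. Station names and the schedule are hypothetical.

from collections import deque

class Station:
    def __init__(self, name):
        self.name = name
        self.outbox = deque()   # messages waiting for the next scheduled pass

    def compose(self, dest, text):
        self.outbox.append((dest, text))

    def relay_pass(self, neighbor):
        """Once a day (or whenever propagation allows), hand traffic to a neighbor."""
        while self.outbox:
            dest, text = self.outbox.popleft()
            if dest == neighbor.name:
                print(f"{neighbor.name} received: {text!r}")
            else:
                neighbor.outbox.append((dest, text))  # forwarded, not yet delivered

# Traffic crosses the net one scheduled hop at a time, not at internet speed.
a, b, c = Station("A"), Station("B"), Station("C")
a.compose("C", "harvest figures for the county granary")
a.relay_pass(b)   # day 1: A passes its traffic to B
b.relay_pass(c)   # day 2: B passes it on; only now does C receive the message
```

The point of the sketch is simply that such a net delivers messages, not bandwidth; it can be run over whatever transmitters and antennas a local economy can actually maintain.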

The same point can be made—and has been made here more than once—about solar photovoltaic technology. Lose track of whole systems and it’s easy to claim, as Polycarpou does, that because solar cells have become less expensive recently, vast acreages of solar photovoltaic cells will surely bail us out of the consequences of fossil fuel depletion. All you have to do is forget that the drop in PV cell costs has much less to do with the production and resource costs of the technology than with China’s familiar practice of undercutting its competitors to seize control of export markets, and pay no attention at all to the complex and finicky technical basis for modern PV cell manufacture or the sheer scale of the supply chains needed to keep the fabrication plants stocked with raw materials, spare parts, solvents, and all the other requirements of the manufacturing process.

Does this mean that solar PV power is useless? Not at all. Buy and install PV panels now, while Chinese trade policy and an inflated dollar make them cheap, and you’ll still have electricity coming out of them decades from now, when they will be hugely expensive if they can be purchased at all. Anyone who’s actually lived with a homescale PV system can tell you that the trickle of electricity you can get that way is no substitute for 120 volts of grid power from big central power plants, but once expectations nurtured by the grid get replaced by a less extravagant sense of how electricity ought to be used, that trickle of electricity can be put to many good uses.
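To put a rough number on that trickle, here is a back-of-the-envelope calculation. The panel wattage, sun-hours, system losses, and grid-household figure are all assumptions chosen for illustration; your own results will vary with climate, hardware, and habits.

```python
# A rough sense of the "trickle" from a homescale PV system versus typical
# grid consumption. Every figure below is an assumption for illustration.

panel_watts = 4 * 100        # four 100 W panels: an assumed modest off-grid array
peak_sun_hours = 4.5         # assumed average daily full-sun-equivalent hours
system_efficiency = 0.70     # assumed losses in charging, battery, and inverter

daily_wh = panel_watts * peak_sun_hours * system_efficiency
grid_household_wh = 30_000   # roughly 30 kWh/day, an assumed grid-connected average

print(f"homescale PV:      ~{daily_wh / 1000:.1f} kWh per day")
print(f"grid expectations: ~{grid_household_wh / 1000:.0f} kWh per day")
print(f"ratio: about 1/{grid_household_wh / daily_wh:.0f} of grid-scale consumption")
```

A kilowatt-hour or so a day will not run central air conditioning, but it will keep lights, a radio, and a few other carefully chosen loads going indefinitely.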

Meanwhile, in the window of opportunity opened up by those solar panels, other ways of producing modest amounts of electricity by way of sunlight, wind, and other renewable sources can be tested and deployed. My guess is that thermoelectric generators heated by parabolic mirrors will turn out to be the wave of the future, keeping the shortwave radios, refrigerators, and closed-loop solar water heaters of the ecotechnic future supplied with power; still, that’s just a guess. There are many ways to produce modest amounts of direct-current electricity with very simple technologies, and highly useful electrical and electronic equipment can readily be made with locally available materials and hand tools. The result won’t be anything you would expect to see in a high-tech geek future, granted, but it’s equally a far cry from the Middle Ages.

This last detail is the crucial point that Polycarpou grasps at the end of his essay, and his comment is important enough that it deserves quotation in full:

“Putting these and other elements together – hi-tech, distributed communications, distributed energy and manufacturing, local sustainable food systems, appropriate technology and tactical urbanism among others – sets the stage for a future that looks quite a bit different than the present one. One might describe it as a kind of postmodern pastiche that looks neither like the antiquated futurisms we once imagined nor an idyllic return to preindustrial peasant society.”

The future, in other words, is not going to be a linear extrapolation from the present—that’s the source of the “antiquated futurisms” he rightly criticizes—or a simple rehash of the past. The future is a foreign country, and things are different there.

That realization is the specter that haunts contemporary industrial society. For all our civilization’s vaunted openness to change, the only changes most people nowadays are willing to contemplate are those that take us further in the direction we’re already going. We’ve got fast transportation today, so there has to be something even faster tomorrow—that’s basically the justification Elon Musk gave for the Hyperloop, his own venture into antiquated futurism; we’ve got the internet today, so we’ve got to have some kind of uber-internet tomorrow. It’s a peculiar sort of blindness, and one that civilizations of the past don’t seem to have shared; as far as I know, for example, the designers of ancient Roman vomitoriums didn’t insist that their technology was the wave of the future, and claim that future societies would inevitably build bigger and better places to throw up after banquets. (Those of my readers who find this comparison questionable might want to take a closer look at internet content.)

The future is a foreign country, and people do things differently there. It’s hard to think of anything that flies so comprehensively in the face of today’s conventional wisdom, or contradicts so many of the unquestioned assumptions of our time; thus it’s not surprising that Polycarpou, in suggesting it, seems to think that he’s disagreeing with me. Quite the contrary; there’s a reason why my most popular peak oil book is titled The Ecotechnic Future, rather than The Idyllic Peasant Future or some such twaddle.  For that matter, I’m not at all sure that he realizes I would agree with his characterization of the near- to mid-range future as a “postmodern pastiche;” I’d suggest that the distributed communication will likely be much less high-tech than he thinks, and that hand tools and simple machinery will play a much larger role in the distributed manufacturing than 3D printers, but again, those are matters of detail.

It’s in the longer run, I suspect, that our visions of the future diverge most sharply. Technological pastiche and bricolage, the piecing together of jerry-rigged systems out of scraps of surviving equipment and lore, are common features of ages of decline; it’s as the decline nears bottom that the first steps get taken toward a new synthesis, one that inevitably rejects many of the legacy technologies of the past and begins working on its own distinct projects. Vomitoriums weren’t the only familiar technology to land in history’s compost heap in the post-Roman dark ages; chariots dropped out of use, too, along with a great many more elements of everyday Roman life. New values and new ideologies directed collective effort toward goals no Roman would have understood, and the harsh limits on resource availability in the radically relocalized post-Roman world also left their mark.

What often gets forgotten in reviewing the dark ages of the past is that they were not lapses into the past but gropings forward into an unknown future. There was a dark age before the Roman world and a dark age after it; the two had plenty of parallels, some of them remarkably exact, but the technologies were not the same, and Greek and Roman innovations in information processing and storage—classical logic and philosophy, widespread literacy, and the use of parchment as a readily available and reusable writing medium—were preserved and transmitted in various forms, opening up possibilities in the post-Roman dark ages that were absent in the centuries that followed the fall of Mycenae.

In the same way, the deindustrial future ahead of us will not be a rehash of the past, any more than it will be a linear extrapolation of the present. I’ve suggested, for reasons I’ve covered in a good many previous posts here, that we face a Long Descent of one to three centuries followed by a dark age very broadly parallel to the ones that followed Rome, Mycenae, and so many other dead civilizations of the past. That’s the normal result when catabolic collapse hits a society dependent on nonrenewable resources, but the way the process unfolds is powerfully shaped by contextual and historical factors, and no two passes through that process are identical.

That’s common enough in the universe of human experience. For example, it’s tolerably likely that you, dear reader, will have the experience of growing old, if you haven’t done so already.  It’s likely that at least some of your grandparents did exactly the same thing—but the parallel doesn’t mean that growing old will somehow transport you back to their era, much less to their lifestyles. Nor, I trust, would you be likely to believe somebody who claimed that getting old was by definition a matter of going back in time to your grandparents’ day and trading in your hybrid car for a Model T.

Some dimensions of growing old are hardwired into the experience itself—the wrinkles, the graying hair, and the slow buildup of physical dysfunctions with their inevitable end are among them. Other dimensions are up to you. In the same way, some of what happens when a civilization tips over into decline is a reliable consequence of the mechanisms of catabolic collapse, or of the way those mechanisms interact with the ordinary workings of human collective psychology. The stairstep rhythm of crisis, stabilization, partial recovery, and renewed crisis; the spiral of conflict between centralizing and decentralizing forces, which eventually ends in the latter’s triumph; the rise of warband culture in the no man’s land outside the increasingly baroque and ineffective frontier defenses—you could set your watch by these, if its hands tracked decades, centuries, and millennia.

Other aspects of the process of decline and fall are far less predictable. The radical relocalization that’s standard in eras of contraction and collapse means, among other things, that dark ages aren’t evenly distributed in space or time, and the disintegration of large-scale systems means, among other things, that minor twists of fate and individual decisions very often have much more dramatic consequences in dark ages than they do when the settled habits of a mature civilization constrain the impact of single events. Furthermore, the cultural, historical, and technological legacies of the former civilization always have a massive impact—it’s entirely possible, for example, that the dark age societies of deindustrial America will have such things as radio communication, solar water heaters, offroad bicycles, and ultralight aircraft—and so do the values and belief systems that reliably emerge as a civilization crashes slowly into ruin, and those who witness each stage of the process try to understand the experience and learn the lessons of its fall.

This is why I’ve spent most of the last year exploring the civil religion of progress, the core ideology of contemporary industrial society, and sketching out some of the ways it distorts our view of history and the future. There’s a pleasant irony in the way that Polycarpou ends his essay with the standard ritual invocation of progress, insisting that even though the future will be impoverished by our standards, it will still be better according to some other measure. That sort of apologetic rhetoric will no doubt see plenty of use in the years ahead: as progress fails to happen on schedule, it’ll be tempting to keep on moving the goalposts so that the failure is a little less visible and the faithful can continue to believe.

Eventually, though, such exercises will be recognized as the charades they are. As the worship of progress loses its grip on the imagination of our age, we’ll see sweeping changes in what people value, what they want to accomplish, and thus inevitably what technologies they’ll find worth preserving or developing. The court of Charlemagne could certainly have had vomitoriums if anyone had wanted them; the technical ability was there, but the values of the age had shifted away from anything that made vomitoriums make sense. In the same way, even if our descendants have the technical ability to produce something like today’s internet, it’s entirely possible that they’ll find other uses for those technologies, or simply shake their heads, wonder why anybody would ever have wanted something like that, and put the resources available to them into some completely different project.

How that unfolds is a matter for the far future, and thus nothing we need to worry about now. As I wind up this sequence of posts, I want to talk instead about the roles that religion is likely to play in the near and middle future as the next round of catabolic collapse begins to bite. We’ll discuss that next week.

Wednesday, October 23, 2013

Reinventing Square Wheels

The issues raised in last week’s post here on The Archdruid Report cut far closer to the heart of our predicament than many of my readers may realize. It’s not just that the production of every kind of good or service in an industrial economy depends on the expectation that somebody or other can make a profit out of the deal, though of course that’s of huge importance. Every other mechanism we’ve got for circulating wealth depends directly or indirectly on the same expectation: taxes, fees, rent, interest on loans, and much else assumes that every business ought to produce enough profit that its banker, landlord, local and state governments, and so on can all be supported off a share of the proceeds without bankrupting the business.

Nowadays, that’s not necessarily a safe assumption. If you walk down the main streets in the old red brick Appalachian mill town where I live, for example, you’ll see any number of empty storefronts in excellent locations. Most of them haven’t had a tenant in forty years. Now and then a new business opens in one of these spots, or makes the attempt, but it’s very rare for one of these to be around a year after its opening. Cumberland, Maryland is a classic Rust Belt town, and the factories that were once its main employers all shut down in the early 1970s, pushing it through the same deindustrial transition that most of the rest of the United States is facing right now.

Businesses here have to survive on a very thin profit margin. The difficulty is that bankers, landlords, local and state officials, and so on still want their accustomed cut, which is substantially more than that margin will usually cover. This isn’t mere greed—they all have their own bills to pay, and an equal or larger number of people and institutions clamoring for a share of their own take.  The result, though, is that storefronts stay empty despite an abundance of unmet economic needs and an equal abundance of people who would be happy to work if they had the chance, because the businesses that could meet those needs and employ those people can’t make enough of a profit to keep their doors open. The inability of most economic activities to turn a profit in an age of contraction is already an issue at many levels, and one with which contemporary economic thought is almost completely unprepared to deal.

The one reliable way of dealing with that inability, of course, is to move productive activities into spheres that don’t depend on the profit motive. The household economy is one obvious place, and so is that expansion of the household economy—pretty much universal in dark ages—in which each family cultivates grains, legumes, and other bulk food crops for its own use on village farmland. Gift economies hardwired into the social hierarchy are also standard in such ages.  There’s a good reason why the English word “lord” comes from an Anglo-Saxon term meaning “giver of loaves,” and it’s the same reason why generosity is always one of the most praised virtues in feudal and protofeudal societies:  from Saxon England to Vedic India to the First Nations of the Pacific Northwest coast, giving lavish gifts to all and sundry is the most important economic activity of the aristocrat.

Still, as discussed last week, religion is among other things a critically important source of motivation for unprofitable but necessary activities, and it tends to become paramount in ages of decline. There are certain complexities here, though, and the best way to grasp them is an indirect one. As so often in these essays, that route leads along unexpected byways—in this case, past two communes in rural Massachusetts, of all places, during the first half of the nineteenth century.

The first of them will be familiar only to those who know the history of the Transcendentalist movement, America’s first homegrown counterculture. Its name was Brook Farm; it was founded with high hopes and much fanfare in 1841 by a bunch of Boston hippies—for all practical purposes, that’s what the Transcendentalists were—and it imploded in a series of crises in 1846 and 1847, having followed a trajectory that will be painfully familiar to anybody who watched the aftermath of the Sixties. As usual, the central conflict pitted middle-class expectations of adequate leisure, comfort, and personal independence against the hard economic realities of subsistence farming, and both sides lost; unable to support its members’ lifestyles solely from the results of their labor, the community ran through its savings and as much money as it could borrow, and when the money ran out, it went under.

One of the commune’s founding members, Nathaniel Hawthorne, set his novel The Blithedale Romance at a lightly fictionalized Brook Farm. Back when American public schools still had students read literature, you could count on getting assigned to read Hawthorne’s The Scarlet Letter or The House of the Seven Gables, but not The Blithedale Romance. Partly, I suspect, that was because its edgy portrayal of the psychology of the true believer, via the character of Hollingsworth the social reformer, cuts far too close to the bone; partly, though, it’s because America has long suffered from a frantic terror of all the most creative dimensions of its own past.

Brook Farm, in point of fact, was one of several hundred known communes in the United States in the 19th century, which built on a tradition going back to the late 17th century; the first American commune I know of, Johannes Kelpius’ “Woman in the Wilderness” community of Rosicrucian ascetics, was founded in the trackless forests of Pennsylvania in 1694. There are dozens of good books on these communities from 19th and early 20th century historians—Charles Nordhoff’s 1875 survey The Communistic Societies of the United States is a classic—and a flurry of research on them in academic circles more recently, but you’ll rarely hear about any of that in any public venue these days.

There’s a post to be written about the spectacular falsification of the American past that unfolds from our efforts to flee from our own history, but that’s a subject for another week. The point relevant now is that, during the years when Brook Farm was stumbling through its short and troubled life, there were several other communes in the neighborhood, and one that deserves special mention here was only thirty-five miles away by road.  Long before Brook Farm’s rise and long after its fall, it was a prosperous egalitarian community with around 200 members, growing all its own food and meeting all its financial needs with room to spare. The Harvard Village, as it was called, was everything Brook Farm’s founders claimed they wanted their commune to be, and it was also everything Brook Farm’s founders rejected most heatedly in practice: that is, it was a Shaker community.

Those of my readers from outside the United States may never have heard of the United Society of Believers in Christ’s Second Appearing, better known then and now as the Shakers. To most Americans, for that matter, those words might at most stir some vague memory of spare but elegant handicrafts and a few notes of the classic Shaker hymn “Simple Gifts.” I have yet to see a discussion of the United Society find its way into the ongoing conversation about lifeboat ecovillages. That’s a pity, because the Shaker experience has important and highly relevant lessons to teach in that context.

The United Society had its start in Manchester, England, where Ann Lee—“Mother Ann” to Shakers ever after—led a tiny group that spun off from the Quakers over the course of the 1760s. In 1774, Mother Ann and eight followers came to what was about to become the United States, and found no shortage of interest in their new gospel. From 1787 on, the United Society was run by its American converts, and these latter proceeded to accomplish one of the most remarkable feats of convergent evolution in human history: starting wholly from first principles, and with apparently no knowledge of the historical parallels, they reinvented almost every detail of classic monasticism.

Like monks and nuns everywhere, Shakers were celibate—one of Mother Ann’s revelations was that sexuality is the most potent source of human sinfulness—and lived communally, owning no personal property. Men and women lived in separate dormitories in a Shaker village and led essentially separate lives; Shaker meetinghouses, where Sunday services were held, had two doors, one for Brothers and one for Sisters, and the two sexes sat on opposite sides. Despite the rule of celibacy, there were always children in Shaker villages, some of them orphans, some left by destitute or deadbeat parents, some brought into the United Society when their parents joined; they had their own dormitories, one for girls and one for boys. The children could choose to leave or stay when they reached adulthood. To stay was to give up sexuality, family, property, and a great deal of autonomy, but it was a choice many made.

There were consolations, to be sure, and not all of them were wholly spiritual. The opportunities open to women among the Shakers were much more substantial than what was available in what Shakers called “the World.” Both sexes had equal roles in each village, each regional bishopric, and the entire Society—the Central Ministry, the body of four senior Shakers who ran the organization, always consisted of two Elders and two Eldresses. Ordinary members could expect a life of hard work, but that was common enough in 19th century America, and the lifelong support of a close-knit community provided by the Shaker system was something most people in that rough-and-tumble era did not have.

The Shakers thus had to be on the watch for freeloaders—“bread-and-butter Shakers” was the usual term—who wanted to join the Society for the sake of free meals and medical care, but had no interest in the religion or, for that matter, in putting in an honest day’s work. Still, the United Society found so many sincere converts that new Shaker communities were being founded into the 1830s, and membership declines didn’t become an issue until well after the Civil War.  Nor is the United Society quite gone even today; there are still a few members left at the last remaining Shaker village at Sabbathday Lake, Maine, supporting themselves by the utterly traditional Shaker trade of drying and selling herbs.

Compare a Shaker community with a Christian or Buddhist monastery and the parallels are impossible to miss. The classic Christian monastic virtues of poverty, chastity, and obedience are all present and accounted for; so is the commitment to labor as a form of religious service. For that matter, the clean, simple, spare esthetic that the Shakers made famous in their furnishings and architecture has a great deal in common with the elegant simplicity that Zen Buddhism introduced to Japanese art and design, or the close equivalents in Christian monastic traditions. The parallels aren’t accidental: set out to live on a monastic plan, and the kind of simplicity prized by Shakers, Cistercians, Zen monks, and so many of their fellow monastics, has a great many practical advantages.

Whether it takes the form of a Shaker village, a Catholic abbey, a Zen Buddhist monastery, or what have you, the monastic life is one of the few consistent success stories in the history of communal living. That’s exactly the problem, of course, because the vast majority of people who imagine themselves living in a communal setting these days aren’t willing to push their enthusiasm to the point of giving up sex, family, personal property, and personal autonomy.

The result, predictably enough, has been a steady stream of projects, proposals, and actual communities launched by people who want to have their cake and eat it too: that is, to live in a self-supporting communal setting while still retaining comfortable middle-class values of ownership, autonomy, and unhindered sexual activity. Brook Farm was one of those attempts, and it lasted longer than most. Throughout American history, the average lifespan of a commune has been around two years. There are counterexamples, to be sure, but it’s remarkable how often those have ended up adopting at least some of the traditional monastic values, such as community ownership of property.

There’s a wholly pragmatic reason for those failures. The kind of lifestyle most people consider normal in industrial societies is only possible if you’re helping to burn through half a billion years of stored sunlight via a modern industrial economy. You can’t support such a lifestyle with hand labor on a rural commune. Even the much less resource-intensive lifestyles of the Brook Farm era were too costly to be met by working the land, which is why Brook Farm went broke in five years. Most of the communes of the Sixties crashed and burned sooner than that: unless there was a sugar daddy available to cover the bills, the amount and kind of work the residents were willing to do usually failed to meet the costs of the lifestyles they wanted to lead, and collapse followed promptly.

The exceptions have almost all been religious communities. In the Sixties, that usually meant that they clustered around a spiritual leader who was charismatic or convincing enough to get away with telling his followers what to do. Some religious communities launched by the Sixties counterculture thus are still around; others got past the economic problems that wrecked most of their secular counterparts, only to crash headlong into one of the other standard traps that face communal projects. You can see all these same patterns traced out in the communes launched by earlier American countercultures, from the Transcendentalists on—and by and large, the only ones that survive their founding generation are those that conform more or less precisely to the model of classic monasticism sketched out above.

I spoke of convergent evolution earlier, and the borrowing from biology is a deliberate one. Darwin’s theory, at its core, is simply a recognition that some things work better than others, and those organisms who stumble across something that works better—in physical structure, in behavior, or what have you—are more likely to survive whatever challenges they and their less gifted counterparts face together. Why one thing works better than another may or may not be obvious, even after close study, but when one set of adaptations consistently results in survival, while a different set just as consistently results in collapse, it seems quite reasonable to draw the obvious lesson from that experience.

That inference seems reasonable to me, at least. I’m quite aware that most other people in the modern industrial world don’t seem to see things that way. I can point out that X has failed every single time it’s been tried, without a single exception, and people will insist that this doesn’t matter unless I can detail the reasons why it has to fail; then, when I do so, they try to argue with the reasons. In the opening lines of his essay The Eighteenth Brumaire of Louis Bonaparte, Karl Marx commented acidly: “Hegel says somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce.” One of the most potent forces driving those repetitions, by turns tragic and farcical, is the endlessly repeated insistence that this time, the lessons of history don’t matter.

Thus it’s an interesting question why celibacy should be so important a factor in the long-term survival of monastic systems. Perhaps celibacy works because it prevents sexual jealousies from spinning out of control, as they so often do in the hothouse environment of communal living. Perhaps celibacy works because pair bonds between lovers are a potent source of the private loyalties that, left to run free, so often distract members of communal groups from their loyalty to the project as a whole. Perhaps celibacy works because it’s only when your followers are willing to give up sex for the cause that you can be sure they’re committed enough.  Perhaps celibacy works because all that creative energy has to go somewhere—the Shakers, like so many other monastic traditions, birthed an astonishing range of artistic and creative endeavors, from a musical corpus of more than 11,000 original hymns, through graphic arts, architecture, and a style of furniture and other crafts that’s widely considered one of the great traditions of American folk culture.

Perhaps it’s some other reason entirely. The point that needs to be kept in mind, though, is that in a monastic setting, celibacy works, and most of the other ways of managing human sexuality in that setting pretty reliably don’t. It’s possible to be utterly correct about the fact that X is the case while being just as utterly clueless about the reasons why X is the case. The fact that medieval philosophers believed that rocks fall because they’re passionately in love with the earth, and desire to be united with their beloved, thus did not prevent those same philosophers from recognizing that rocks do indeed fall when you pick them up and let go.

The value and the limitations of history as a guide to the present and the future come out of the hard fact that history tells us what happens, but not why it happens. More than three centuries of communal experiments in North America, and more than three thousand years of similar experiments elsewhere, have a lot to teach about what works and what doesn’t work when people decide to pursue a communal lifestyle. Those who are willing to learn from those experiences have a much better chance of coming up with something that works than those who insist that the past doesn’t matter and thus repeat its most common mistakes all over again. Reinventing the wheel has its uses, but if your efforts at reinvention consistently turn out square wheels, it may be helpful to look at other examples of the wheelwright’s craft and see if you can figure out what they got right and you got wrong.

Ironically, that’s an insight the Shakers could have used. I mentioned above that they managed to reinvent almost every detail of classic monasticism, but one of the few details they missed turned out to be their Achilles’ heel. Quite a few religious movements have started out wholly monastic; those that survived for more than a few generations figured out that they also needed some mode of participation for people who admired their ideals and believed in their teachings but weren’t ready to take the plunge into a celibate lifestyle. The Jain and Buddhist religions, to name only two examples, started out as wholly monastic traditions, but figured out early on that they needed to make room for laypersons as well—and discovered that the more room they made for lay involvement, the more often it happened that people got sufficiently used to the idea of monastic life to say “You know, I could live with that.” It’s by way of that process that successful monastic systems keep themselves supplied with novices.

If the Shakers had had such a community of supportive lay believers as a waystation between the World and the formal Shaker life, it’s quite possible that they would be a major American denomination today. If the lack of that bridge is what finally brings the United Society’s story to an end, as seems likely just now, they won’t be the first victims of that particular trap, and doubtless won’t be the last, either. That’s why it’s important to pay attention to the lessons of history, especially when there’s reason to suspect a mismatch between your expectations and the facts on the ground. The decline and fall of contemporary industrial civilization bids fair to bring a bumper crop of that sort of mismatch into play—and in such times, insisting that despite all the experience of the past, wheels ought to have square corners is not exactly a useful habit.

***
I also have a couple of announcements my readers might find of interest. First of all, my next peak oil book, Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America, is now available for preordering, at 20% off the cover price. Those of you who followed last year's posts on the end of America's empire have seen the first drafts; those who haven't – well, I'll let you decide for yourself whether or not you're in for a treat. Visit the publisher's website here to place an order.

Second, the Green Wizardry forum at www.greenwizards.org is up and running again, after a somewhat rocky shift to new servers. If you're already a member, you should be able to log on without difficulty. If you're not, because the forum's had all kinds of trouble with spam, you'll need to contact the forum moderators at randomsurfer200 (at) yahoo (dot) com to get an account started.

Wednesday, October 16, 2013

The One Option Left

The recent round of gridlock in Washington DC may seem worlds away from the mythological visions and spiritual perspectives that have been central to this blog over the last few months. Still, there’s a direct connection. The forces that have driven American politicians into their current collection of blind alleys are also the forces that will make religious institutions very nearly the only structures capable of getting anything done in the difficult years to come, as industrial civilization accelerates along the time-honored trajectory of decline and fall.
 
To make sense of the connection, it’s necessary to start with certain facts, rarely mentioned in current media coverage, that put the last few weeks of government shutdown and potential Federal default in their proper context. These days the US government spends about twice as much each year as it takes in from taxes, user fees, and all other revenue sources, and makes up the difference by borrowing money.  Despite a great deal of handwaving, that’s a recipe for disaster.  If you, dear reader, earned US$50,000 a year and spent US$100,000 a year, and made up the difference by taking out loans and running up your credit cards, you could count on a few years of very comfortable living, followed by bankruptcy and a sharply reduced standard of living; the same rule applies on the level of nations.

Were you to pursue so dubious a project, in turn, one way to postpone the day of reckoning for a while would be to find some way to keep the interest rates you paid on your loans as low as possible. This is exactly what the US government has done in recent years. A variety of gimmicks, culminating in the current frenzy of “quantitative easing”—that is to say, printing money at a frantic pace—has forced interest rates down to historically low levels, in order to keep the federal government’s debt payments down to an annual sum that we can pretend to afford. Even a fairly modest increase in interest rates would be enough to push the US government into crisis; an increase on the same scale as those that have clobbered debt-burdened European countries in recent years would force an inevitable default.
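For readers who like to see the arithmetic spelled out, here is a quick sketch. The household figures come straight from the example above; the federal debt figure is the one cited later in this post, and the revenue figure is a round number assumed purely for illustration, not an official statistic.

```python
# Back-of-the-envelope arithmetic for the squeeze described above. The household
# figures come from the example in the text; the debt figure is the one cited
# later in this post; the revenue figure is an assumption for illustration.

def run_up_debt(income, spending, rate, years):
    """Accumulate debt when spending exceeds income and interest is rolled over."""
    debt = 0.0
    for year in range(1, years + 1):
        interest = debt * rate
        debt += (spending - income) + interest
        print(f"year {year:2d}: debt = {debt:10,.0f}  interest bill = {interest:8,.0f}")
    return debt

# The household analogy: earn $50,000 a year, spend $100,000 a year.
run_up_debt(income=50_000, spending=100_000, rate=0.05, years=10)

# Why suppressed interest rates matter: the same stock of debt costs very
# different amounts to service at different rates.
debt = 8e12       # roughly the debt figure cited later in this post
revenue = 2.5e12  # assumed annual revenue, for illustration only
for rate in (0.02, 0.04, 0.06):
    service = debt * rate
    print(f"at {rate:.0%} interest: ${service / 1e9:,.0f} billion a year, "
          f"or {service / revenue:.0%} of revenue")
```

Run the numbers with whatever assumptions you prefer; the shape of the problem stays the same, which is the point.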

Sooner or later, the latter is going to happen.  That’s the unmentioned context of the last few cycles of intractable financial stalemates in Washington. For more than a decade now, increasingly frantic attempts to kick the can further and further down the road have thus been the order of the day.  In order to prevent a steep economic contraction in the wake of the 2000 tech-stock crash, the US government and the Federal Reserve Board—its theoretically independent central bank—flooded the economy with cheap credit and turned a blind eye to what became the biggest speculative delusion in history, the global real estate bubble of 2004-2008.  When that popped, in turn, the US government and the Fed used even more drastic measures to stave off the normal consequences of a huge speculative bust.

None of those measures has a long shelf life. They’re all basically stopgaps, and it’s probably safe to assume that the people who decided to put them into place believed that before the last of the stopgaps stopped working, the US economy would resume its normal trajectory of growth and bail everyone out. That hasn’t happened, and there are good reasons to think that it’s not going to happen—not this year, not this decade, not in our lifetimes. We’ll get to those reasons shortly; the point that needs attention here is what this implies for the federal government here and now.

At some point in the years ahead, the US government is going to have to shut down at least half its current activities, in order to bring its expenditures back in line with its income. At some point in the years ahead, equally, the US government is going to have to default on its $8 trillion of unpayable debt, plus however much more gets added as we proceed. The shutdown and default that have absorbed so much attention in recent weeks, in other words, define the shape of things to come.  This time, as I write these words, a temporary compromise seems to have been slapped together, but we’ll be back here again, and again, and again, until finally the shutdown becomes permanent, the default happens, and we move on into a harsh new economic reality.

It’s probably necessary at this point to remind my readers again that this doesn’t mean we will face the kind of imaginary full stop beloved by a certain class of apocalyptic theorists. Over the last twenty years or so, quite a few countries have slashed their government expenditures and defaulted on their debts. The results have included a great deal of turmoil and human suffering, but the overnight implosion of the global economy so often predicted has failed to occur, and for good reason. Glance back over economic history and you’ll find plenty of cases in which nations had to deal with crises of the same sort the US will soon face. All of them passed through hard times and massive changes, but none of them ceased to exist as organized societies; it’s only the widespread fixation on fantasies of apocalypse that leads some people to insist that for one reason or another, it’s different this time.

I plan on devoting several upcoming posts to what we can realistically expect when the US government has to slash its expenditures and default on its debts, the way that Russia, Argentina, and other nations have done in recent decades. For the moment, though, I want to focus on a different point: why has the US government backed itself into this mess? Yes, I’m aware of the theorists who argue that it’s all a part of some nefarious plan, but let’s please be real: to judge by previous examples, the political and financial leaders who’ve done this are going to have their careers terminated with extreme prejudice, and it’s by no means impossible that a significant number of them will end up dangling from lampposts. It’s safe to assume that the people who have made these decisions are aware of these possibilities. Why, then, their pursuit of the self-defeating policies just surveyed?

That pursuit makes sense only if the people responsible for the policies assumed they were temporary expedients, meant to keep business as usual afloat until a temporary crisis was over. From within the worldview of contemporary mainstream economics, it’s hard to see any other assumption they could have made. It’s axiomatic in today’s economic thought that economic growth is the normal state of affairs, and any interruption in growth is simply a temporary problem that will inevitably give way to renewed growth sooner or later. When an economic crisis happens, then, the first thought of political and financial leaders alike these days is to figure out how to keep business as usual running until the economy returns to its normal condition of growth.

The rising spiral of economic troubles around the world in the last decade or so, I suggest, has caught political and financial officials flatfooted, precisely because that “normal condition of growth” is no longer normal. After the tech-stock bubble imploded in 2000, central banks in the US and elsewhere forced down interest rates and flooded the global economy with a torrent of cheap credit. Under normal conditions, this would have driven an investment boom in productive capital of various kinds: new factories would have been built, new technologies brought to the market, and so on, resulting in a surge in employment, tax revenues, and so on. While a modest amount of productive capital did come out of the process, the primary result was a speculative bubble even more gargantuan than the tech boom.

That was a warning sign too few people heeded. Speculative bubbles are a routine reality in market economies, but under ordinary circumstances they’re self-limiting in scale, because there are so many other less risky places to earn a decent return on investment. It’s only when an economy has run out of other profitable investment opportunities that speculative bubbles grow to gargantuan size. In the late 1920s, the mismatch between vast investment in industrial capital and a wildly unbalanced distribution of income meant that American citizens could not afford to buy all the products of American industry, and this pushed the country into a classic overproduction crisis. Further investment in productive capital no longer brought in the expected rate of return, and so money flooded into speculative vehicles, driving the huge 1929 bubble and bust.

The parallel bubble-and-bust economy that we’ve seen since 2000 or so followed similar patterns on an even more extreme scale. Once again, income distribution in the United States got skewed drastically in favor of the well-to-do, so that a growing fraction of Americans could no longer support the consumer economy with their purchases. Once again, returns on productive investment sank to embarrassing lows, leaving speculative paper of various kinds as the only option in town. It wasn’t overproduction that made productive capital a waste of investment funds, though—it was something considerably more dangerous, and also less easy for political and financial elites to recognize.

The dogma that holds that growth is the normal state of economic affairs, after all, did not come about by accident. It was the result of three centuries of experience in the economies of Europe and the more successful nations of the European diaspora. Those three centuries, of course, happened to take place during the most colossal economic boom in all of recorded history. Two factors discussed in earlier posts drove that boom:  first, the global expansion of European empires in the 17th, 18th, and 19th centuries and the systematic looting of overseas colonies that resulted; second, the exploitation of half a billion years of stored sunlight in the form of coal, petroleum, and natural gas.

Both those driving forces remained in place through the twentieth century; the European empires gave way to a network of US client states that were plundered just as thoroughly as old-fashioned imperial colonies once were, while the exploitation of the world’s fossil fuel reserves went on at ever-increasing rates. The peaking of US petroleum production in 1970 threw a good-sized monkey wrench into the gears of the system and brought a decade of crisis, but a variety of short-term gimmicks postponed the crisis temporarily and opened the way to the final extravagant blowoff of the age of cheap energy.

The peaking of conventional petroleum production in 2005 marked the end of that era, and the coming of a new economic reality that no one in politics or business is yet prepared to grasp. Claims that the peak would be promptly followed by plunging production, mass panic, and apocalyptic social collapse proved to be just as inaccurate as such claims always are.  What happened instead was that a growing fraction of the world’s total energy supply has had to be diverted, directly or indirectly, to the task of maintaining fossil fuel production.  Not all that long ago, all things considered, a few thousand dollars was enough to drill an oil well that can still be producing hundreds of barrels a day decades later; these days, a fracked well in oil-bearing shale can cost $5 to 10 million to drill and hydrofracture, and three years down the road it’ll be yielding less than 10 barrels of oil a day.
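A rough comparison makes the point. The well costs and production figures below come from the paragraph above; the decline rates and lifetimes are assumptions filled in for illustration, since real wells vary enormously.

```python
# Rough cost-per-barrel comparison sketched from the figures in the text.
# Decline rates and lifetimes are assumptions; real decline profiles vary widely.

def lifetime_barrels(initial_bpd, annual_decline, years):
    """Total barrels from a well whose daily output shrinks by a fixed fraction each year."""
    total, bpd = 0.0, initial_bpd
    for _ in range(years):
        total += bpd * 365
        bpd *= (1 - annual_decline)
    return total

# A legacy conventional well: a few thousand dollars to drill, hundreds of
# barrels a day, still producing decades later (per the text; decline assumed).
legacy_cost = 5_000
legacy_output = lifetime_barrels(initial_bpd=200, annual_decline=0.05, years=30)

# A fracked shale well: $5-10 million to drill and fracture, and down to under
# 10 barrels a day within about three years (steep decline rate assumed).
shale_cost = 7_500_000
shale_output = lifetime_barrels(initial_bpd=500, annual_decline=0.70, years=10)

print(f"legacy well: ~{legacy_output:,.0f} bbl, ~${legacy_cost / legacy_output:.3f}/bbl capital cost")
print(f"shale well:  ~{shale_output:,.0f} bbl, ~${shale_cost / shale_output:.2f}/bbl capital cost")
```

However you tune the assumptions, the gap between the two regimes spans orders of magnitude, which is another way of saying that more and more of the economy’s output has to be fed back into simply keeping the oil flowing.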

These increased costs and diminished returns don’t take place in a vacuum. The energy and products of energy that have to be put into the task of maintaining energy production, after all, aren’t available for other economic uses.  In monetary terms—money, remember, is the system of tokens we use to keep track of the value and manage the distribution of goods and services—oil prices upwards of $100 a barrel, and comparable prices for petroleum products, provide some measure of the tax on all economic activity that’s being imposed by the diversion of energy, resources, and other goods and services into petroleum production. Meanwhile fewer businesses are hiring, less new productive capital gets built, new technologies languish on the shelves:  the traditional drivers of growth aren’t coming into play, because the surplus of real wealth needed to make them function isn’t there any more, having had to be diverted to keep drilling more and more short-lived wells in the Bakken Shale.

The broader pattern behind all these shifts is easy to state, though people raised in a growth economy often find it almost impossible to grasp. Sustained economic growth is made possible by sustained increases in the availability of energy and other resources for purposes other than their own production. The only reason economic growth seems normal to us is that we’ve just passed through an era three hundred years long in which, for the fraction of humanity living in western Europe, North America, and a few other corners of the world, the supply of energy and other resources soared well past any increases in the cost of production. That era is now over, and so is sustained economic growth.

The end of growth, though, has implications of its own, and some of these conflict sharply with expectations nurtured by the era of growth economics. It’s only when economic growth is normal, for example, that the average investment can be counted on to earn a profit. An investment is a microcosm of the whole economy; it’s because the total economy can be expected to gain value that investments, which represent ownership of a minute portion of the whole economy, can be expected to do the same thing. On paper, at least, investment in a growing economy is a positive-sum game; everyone can profit to one degree or another, and the goal of competition is to profit more than the other guy.

In a steady-state economy, by contrast, investment is a zero-sum game; since the economy neither grows nor contracts from year to year, the average investment breaks even, and for one investment to make a profit, another must suffer a loss. In a contracting economy, by the same logic, investment is a negative-sum game, the average investment loses money, and an investment that merely succeeds in breaking even can do so only if steeper losses are inflicted on other investments.
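The logic is easy to make concrete. If the average return on investment simply tracks the growth rate of the whole economy, with individual investments scattered around that average, a toy simulation shows what happens to the share of profitable investments as growth goes positive, flat, and negative. The spread of returns here is an arbitrary assumption; the point is the sign of the average.

```python
# Toy illustration of the positive-, zero-, and negative-sum logic above: the
# average return tracks the economy's growth rate, and individual returns are
# scattered around it. The size of the scatter is an assumption.

import random

def simulate_returns(growth_rate, n_investments=1000, spread=0.05, seed=42):
    """Draw individual returns centered on the economy's growth rate."""
    rng = random.Random(seed)
    returns = [rng.gauss(growth_rate, spread) for _ in range(n_investments)]
    avg = sum(returns) / len(returns)
    share_profitable = sum(1 for r in returns if r > 0) / n_investments
    return avg, share_profitable

for label, g in [("growing economy", 0.03), ("steady-state economy", 0.0),
                 ("contracting economy", -0.03)]:
    avg, share = simulate_returns(g)
    print(f"{label:22s}: average return {avg:+.1%}, {share:.0%} of investments profit")
```

In the contracting case, the minority of investments that break even or better can only do so because the rest are losing more than the average, which is exactly the negative-sum game described above.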

It’s precisely because the conditions for economic growth are over, and have been over for some time now, that the US political and financial establishment finds itself clinging to the ragged end of a bridge to nowhere, with an assortment of alligators gazing up hungrily from the waters below.  The stopgap policies that were meant to keep business as usual running until growth resumed have done their job, but economic growth has gone missing in action, and the supply of gimmicks is running very short. I don’t claim to know exactly when we’ll see the federal government default on its debt and begin mass layoffs and program cutbacks, but I see no way that these things can be avoided at this point.

Nor is this the only consequence of the end of growth.  In a contracting economy, again, the average investment loses money. That doesn’t simply apply to financial paper; if a business owner in such an economy invests in capital improvements, on average, those improvements will not bring a return sufficient to pay for the investment; if a bank makes a loan, on average, the loan will not be paid back in full, and so on. Every one of the mechanisms that a modern industrial economy uses to encourage people to direct surplus wealth back into the production of goods and services depends on the idea that investments normally make a profit. Take that away, and the process of economic contraction becomes self-reinforcing, because disinvestment and hoarding become the best available strategy, the sole effective way to cling to as much as possible of your wealth for as long as possible.

This isn’t merely a theoretical possibility, by the way. It occurs reliably in the twilight years of other civilizations. The late Roman world is a case in point:  by the beginning of the fifth century CE, it was so hard for Roman businessmen to make money that the Roman government had laws requiring sons to go into their fathers’ professions, whether they could earn a living that way or not, and there were businessmen who fled across the borders and went to work as scribes, accountants, and translators for barbarian warlords, because the alternative was economic ruin in a collapsing Roman economy. Meanwhile rich landowners converted their available wealth into gold and silver and buried it, rather than cycling it back into the economy, and moneylending became so reliable a source of social ills that lending at interest was a mortal sin in medieval Christianity and remains so in Islam right down to the present. When Dante’s Inferno consigned people who lend money at interest to the lowest part of the seventh circle of Hell, several notches below mass murderers, heretics, and fallen angels, he was reflecting a common belief of his time, and one that had real justification in the not so distant past.

Left to itself, the negative-sum game of economics in a contracting economy has no necessary endpoint short of the complete collapse of all systems of economic exchange. In the real world, it rarely goes quite that far, though it can come uncomfortably close. In the aftermath of the Roman collapse, for example, it wasn’t just lending at interest that went away. Money itself dropped out of use in most of post-Roman Europe—as late as the twelfth century, it was normal for most people to go from one year to the next without ever handling a coin—and market-based economic exchange, which thrived in the Roman world, was replaced by feudal economies in which most goods were produced by those who consumed them, customary payments in kind took care of nearly all the rest, and a man could expect to hold land from his overlord on the same terms his great-grandfather had known.

All through the Long Descent that terminated the bustling centralized economy of the Roman world and replaced it with the decentralized feudal economies of post-Roman Europe, though, there was one reliable source of investment in necessary infrastructure and other social goods. It thrived when all other economic and political institutions failed, because it did not care in the least about the profit motive, and had different ways to motivate and direct human energies to constructive ends. It had precise equivalents in certain other dark age and medieval societies, too, and it’s worth noting that those dark ages that had some such institution in place were considerably less dark, and preserved a substantially larger fraction of the cultural and technological heritage of the previous society, than those in which no institution of the same kind existed.

In late Roman and post-Roman Europe, that institution was the Christian church. In other dark ages,  other religious organizations have filled similar roles—Buddhism, for example, in the dark ages that followed the collapse of Heian Japan, or the Egyptian priesthoods in the several dark ages experienced by ancient Egyptian society. When every other institution fails, in other words, religion is the one option left that provides a framework for organized collective activity. The revival of religion in the twilight of an age of rationalism, and its rise to a position of cultural predominance in the broader twilight of a civilization, thus has a potent economic rationale in addition to any other factors that may support it. How this works in practice will be central to a number of the posts to come.

Wednesday, October 9, 2013

The Renewal of Religion

The new religious sensibility I’ve sketched out here in several posts already, and will be discussing in more detail as we proceed, has implications that go well beyond the sphere assigned to religion in most contemporary industrial societies. One of the most significant of those implications is precisely the idea that religion, in any sense, will have an important impact on the future in the first place.
 
One of the standard tropes of the contemporary faith in progress, after all, insists that religion is an outworn relic sure to be tipped into history’s compost heap sometime very soon. By “religion,” of course, those who make this claim inevitably mean “theist religion,” or more precisely “any religion other than mine”—the civil religion of progress is of course supposed to be exempt from that fate, since its believers insist that it’s not a religion at all.

This sort of insistence is actually quite common in religious life. C.S. Lewis notes in one of his books that really devout people rarely talk about religion as such; instead, they talk about God.  To ordinary, sincere, unreflective believers, “religion” means the odd things that other people believe; in their eyes, their own beliefs are simply the truth, obvious to anyone with plain common sense. It’s for this reason that many languages have no word for religion as such, even though they’re fully stocked with terms for deities, prayers, rituals, temples, and the other paraphernalia of what we in the West call religion; it’s by and large only those societies that have had to confront religious pluralism repeatedly in its most challenging forms that have, or need, a label for the overall category to which these things belong.

The imminent disappearance of all (other) religion that has featured so heavily in rationalist rhetoric for the last century and a half or so thus fills roughly the same role in their faith as the Second Coming in Christianity: the point at which the Church Militant morphs into the Church Triumphant.  So far, at least to the best of my knowledge, nobody in the atheist scene has yet proclaimed the date by which Reason will triumph over Superstition—the initial capitals, again, tell you when an abstraction has turned into a mythic figure—but it’s probably just a matter of time before some rationalist equivalent of Harold Camping gladdens the heart of the faithful by giving them a date on which to pin their hopes.

If the evidence of history is anything to go by, though, those hopes are misplaced. As discussed in an earlier post, the rationalist revolt against religion that’s been so large a factor in Western culture over the last few centuries is far less unique than its publicists like to think. Some such movement rises in every literate civilization in which the art of writing escapes from the control of the priesthood, and a significant secular literate class emerges.  In ancient Egypt, that started around 1500 BCE; in China, around 750 BCE; in India and Greece alike, around 600 BCE; in what Spengler called the Magian culture, the cauldron of competing Middle Eastern monotheisms that finally came under the rule of Islam, about 900 CE.  The equivalent point in the history of the West was reached around 1650.

If you know your way around the history of Western rationalism from 1650 to the present, furthermore, you can track the same patterns straight through these other eras. Each movement began with attempts at constructive criticism of religious traditions no one dreamed of rejecting entirely, and moved step by step toward an absolute rejection of the traditional faith in one way or another:  by replacing it with a rationalized creed stripped of traditional symbolism and theology, as Akhenaten and the Buddha did; by dismissing religion as a habit appropriate to the uneducated, as Confucius and Aristotle did; by denouncing it as evil, as Lucretius did and today’s “angry atheists” do—there aren’t that many changes available, and the rationalist movements of the past have rung them all at one time or another.

Each rationalist movement found an audience early on by offering conclusive answers to questions that had perplexed earlier thinkers, and blossomed in its middle years by combining practical successes in whatever fields mattered most to its society with a coherent and reasonable worldview that many people found more appealing than the traditional faith. It’s the aftermath, though, that’s relevant here. Down through the centuries, only a minority of people have ever found rationalism satisfactory as a working philosophy of life; the majority can sometimes be bullied or shamed into accepting it for a time, but such tactics don’t have a long shelf life, and commonly backfire on those who use them.

Thus the rationalist war against traditional religion in ancient Greek and Roman society succeeded in crippling the old faith in the gods of Olympus, only to leave the field wide open to religions that were less vulnerable to the favorite arguments of classical rationalism:  first the mystery cults, then a flurry of imported religions from the East, among which Christianity and Islam eventually triumphed. That’s one of the two most common ways for an era of rationalism to terminate itself with extreme prejudice. The other is the straightforward transformation of a rationalist movement into a religion—consider the way that Buddhism, which started off as a rational protest against the riotous complexity of traditional Hindu religion, ended up replacing Hinduism’s crores of gods with an equally numerous collection of bodhisattvas, to whom offerings, mantras, prayers, and so on were thereafter directed.

The Age of Reason currently moving into its twilight years, in other words, is not quite as unique as its contemporary publicists like to think. Rather, it’s one example of a recurring feature in the history of human civilization.  Ages of Reason  usually begin as literate civilizations finish the drawn-out process of emerging from their feudal stage, last varying lengths of time, and then wind down.  Again, the examples cited earlier are worth recalling: the rationalist movement of the Egyptian New Kingdom ended in 1340 BCE with the restoration of the traditional faith under Horemheb; that of China ended with the coming of the Qin dynasty in 221 BCE; that of India faded out amid a renewal of religious philosophy well before 500 CE; that of Greece and Rome ceased to be a living force around the beginning of the Christian era; that of the Muslim world ended around 1200 CE.

In each case, what followed was what Oswald Spengler called the Second Religiosity—a renewal of religion fostered by an alliance between intellectuals convinced that rationalism had failed, and the masses that had never really accepted rationalism in the first place. The coming of the Second Religiosity doesn’t always mean the end of rationalism itself, though this can happen if the backlash is savage enough. What it means is that rationalism is no longer the dominant cultural force it normally is during an Age of Reason, and settles down to become one intellectual option among many others.

What forms a Second Religiosity might take in the contemporary Western world is a fascinating issue, and one that deserves (and will get) a post of its own. The point I’d like to explore this week is that the idea of a rebirth of religion focusing on an ecological sensibility is not original to me. It actually came in for quite a bit of discussion in the late 1970s, in the circle of green intellectuals that formed around Gregory Bateson, Stewart Brand, and The Whole Earth Catalog.  The idea was that the only thing that would really galvanize people into making changes for the sake of an ecologically sane and survivable future was the emergence of an eco-religion that would call forth from its believers the commitment, and indeed the fanaticism, that the transformation would require.

Nor was this just empty talk. I know of several attempts to launch such a religion, and at least one effort to provide it with a set of sacred scriptures. All of them fizzled, and for a very good reason.

To make sense of that reason, a bit of a tangent will be useful here, and so I’d like to glance at a somewhat different attempt to borrow the rhetoric and imagery of religion for secular ends, the Charter for Compassion launched by pop-religion author Karen Armstrong a few years back, which is being marketed by the TED Foundation just now under the slogan “The best idea humanity has ever had.”  Those of my readers who know their way around today’s yuppie culture will doubtless not be surprised by the self-satisfied tone of the slogan, but it’s the dubious thinking that follows that I want to point up here.

Armstrong starts by claiming that “The principle of compassion lies at the heart of all religious, ethical and spiritual traditions,” which is quite simply not true. All religions? There are many in which compassion falls in the middling or minor rank of virtues, and quite a few that don’t value compassion at all. All ethical traditions? Aristotle’s Nicomachean Ethics, widely considered the most influential work on ethics in the Western tradition, doesn’t even mention the concept, and many other ancient, medieval, and modern ethical systems give it less than central billing. All spiritual traditions? That vague and mightily misused word “spirituality” stands for a great many things, many of which have nothing to do with compassion or any other moral virtue.

An earlier post in this sequence talked about the monumental confusions that pop up when values get confused with facts, and this is a good example. Armstrong pretty clearly wants to insist that everyone should put compassion at the center of their religious, ethical, and spiritual lives, but in a society that disparages values, it’s easier to push such an argument using claims of fact—even when, as here, those claims don’t happen to be true. Mind you, Armstrong’s charter also finesses the inevitable conflict between the virtue she favors and other virtues that have at least as good a claim to central status, but that’s a subject for another day.

The deeper falsification I want to address here is contained in the passage already cited, though it pops up elsewhere in the Charter as well: “We therefore call upon all men and women to restore compassion to the centre of morality and religion” is another example. What’s being said here, in so many words, is that a moral virtue either is, or ought to be, at the core of religion: that religion, in other words, is basically a system of ethics dressed up in some set of more or less ornate mythological drag.  That’s a very popular view these days, especially among the liberal intelligentsia from which Armstrong and the TED Foundation draw most of their audiences, and some form of it nearly always becomes a commonplace in ages of rationalism, but it’s still a falsification.

It so happens that a large minority of human beings—up to a third, depending on the survey—report having at least one experience, at some point in their lives, that appears to involve contact with a disembodied intelligent being.  Many of these experiences are spontaneous; others seem to be fostered by religious practices such as prayer, meditation, and ritual.  Any number of causes have been proposed for these experiences, but I’d like to ask my readers to set aside the issue of causation for the moment and pay attention to the raw data of experience. There’s a crucial difference between the question “Does x happen?” and the question “Why does x happen?”—a difference of basic logical categories—and conflating the two is a fruitful source of confusion and error.

Whether they are caused by autohypnosis, undiagnosed schizophrenia, archetypes of the collective unconscious, the real existence of gods and spirits, or something else, these experiences happen to a great many people, they have done so as far back as records go, and religion is the traditional human response to them.  If nobody had ever had the experience of encountering a god, an angel, a saint, an ancestor, a totem spirit, or what have you, it’s probably safe to say that we would not have religions. Human beings under ordinary conditions encounter two kinds or, if you will, worlds of experience: one that’s composed of things that can be seen, heard, smelled, tasted, and touched, which we might as well call the biosphere, and one composed of things that can be thought, felt, willed, and imagined, which we can call the noosphere (from Greek nous, “mind”). The core theory held by religions everywhere is that there is a third world, which we can call the theosphere, and that this is what breaks through into human consciousness in religious experience.

It’s important not to make this very broad concept more precise than the data permit, or to assume more agreement among religious traditions than actually exists. The idea of a theosphere—a kind, mode, or world of human experience that appears to be inhabited by disembodied intelligences—is very nearly the only common ground you’ll find, and attempts to hammer the wildly diverse religious experiences of different individuals and cultures into a common tradition inevitably tell you more about the person or people doing the hammering than they do about the raw material being hammered. In particular, the role played by moral virtue in human relationships with the theosphere and its apparent denizens varies drastically from one tradition to another. There are plenty of religious traditions in which ethics play no role at all, and moral thought is assigned to some other sphere of life, while even among those religions that do include moral teaching, there’s no consensus on which virtues are central.  In any case, it’s the relationship to the theosphere that matters, and the moral dimension is there to support the relationship.

This is pretty much the explanation you can expect to get, by the way, if you ask ordinary, sincere, unreflective believers in a theist religion what their religious life is about. They’ll normally use the standard terminology of their tradition—your ordinary churchgoing American Protestant, for example, will likely tell you that it’s about getting right with Jesus, your ordinary Shinto parishioner in Japan will explain that it’s about a proper relationship with the kami, and so on through the diversity of the world’s faiths—but the principle is the same. If morals come into the discussion, the role assigned to them is a subordinate one: the Protestant, for example, will likely explain that following the moral teachings of the Bible is one part of getting right with Jesus, not the other way around.

That’s the thing that rationalist attempts to construct or manipulate religion for some secular purpose always miss, and it explains why such attempts reliably fail. The atheists who point out that it’s not necessary to worship a deity to lead an ethical life, even a life of heroic virtue, are quite correct; the religious person whose object of reverence expects moral behavior may have an additional incentive to ethical living, but no doubt the atheists can come up with an additional incentive or two of their own. It’s religious experience, the personal sense of contact with a realm of being that transcends the ordinary affairs of material and mental life, that’s the missing element; without it, you’re left with yet another set of moral preachments that appeal only to those who already agree with them.

This is what guarantees that Armstrong’s Charter for Compassion will presently slide into oblivion, following a trajectory marked out well in advance by dozens of equally well-meant and equally ineffectual efforts. How many people even remember these days, for example, that nearly all of the world’s major powers actually sat down in 1928 and signed a treaty to end war forever?  The Kellogg-Briand Pact failed because the nations that needed to be restrained by it weren’t willing to accept its strictures, while the nations that were enthusiastic about it weren’t planning to invade anybody in the first place. In the same way, the people who sign the Charter for Compassion, if they really intend to guide their behavior by its precepts, are exactly the ones who don’t need it in the first place, while people who see no value in compassion either won’t sign or won’t let a signature on a document restrain them from doing exactly what they want, however uncompassionate that happens to be.

That’s also what happened to the efforts of green thinkers in the 1970s either to manufacture a green religion, or to manipulate existing religions into following a green agenda. The only people who were interested were those who didn’t need it—those who were already trying to follow ecologically sound lifestyles for other reasons. The theosphere wasn’t brought into the project, or even consulted about it, and so the only source of passionate commitment that could have made the project more than a daydream of Sausalito intellectuals went by the boards. So, in due time, did the project.

What makes the involvement of what I’ve called the theosphere essential to any such program is that the emotional and intellectual energies set in motion by religious experience very often trump all other human motivations. When people step outside the ordinary limits of human behavior in any direction, for good or ill, if love or hate toward another person isn’t the motivating factor, very often what drives them is religious in nature—not ethical, mind you, but the nonrational commitment of the whole self toward an ideal that comes out of religious experience. Every rationalist movement throughout history has embraced the theory that all this can be dispensed with, and should be dispensed with, in order to make a society that makes rational sense; every rationalist movement finally collapsed in frustration and disarray when it turned out that the theory doesn’t work, and a society that makes rational sense won’t function in the real world because, ultimately, human beings don’t make rational sense.

The collapse of the rationalist agenda is thus one of the forces that launches the Second Religiosity. Another is the simple fact that most people never do accept the rationalist agenda, and as polemics against traditional religion from rationalist sources become more extreme, the backlash mentioned earlier becomes a potent and ultimately unstoppable force. Still, there may be more to it than that.

Without getting into the various arguments, religious and antireligious, about just exactly what reality might lie behind what I’ve called the theosphere, it’s probably fair to say that this reality isn’t a passive screen onto which individuals or societies can project whatever fantasies they happen to prefer.  What comes out of the theosphere, in the modest religious experiences of ordinary believers as well as the  world-shaking visions of great prophets, changes from one era to another according to a logic (or illogic) all its own, and such changes correspond closely to what I’ve described in earlier posts as shifts in religious sensibility. In the weeks to come, we’ll talk about what that might imply.

Wednesday, October 2, 2013

The Flight to the Ephemeral

I'd meant to devote this week’s post to exploring the way that new religious movements so often give shape to emerging ideas and social forms during the decline of civilizations, and to sketch out some of the possibilities for action along those lines as industrial society moves further along its own curve of decline and fall. Still, these essays are part of a broader conversation about the future of today’s world, and now and then some other part of that conversation brings up points relevant to the discussion here.
 
That’s as much excuse as there is for this week’s detour. A few weeks ago, the P2P Foundation website hosted a piece by Kevin Carson titled When Ephemeralization is Hard to Tell from Catabolic Collapse. Carson’s piece got some attention recently in the peak oil blogosphere, not to mention some pointed and by no means unjustified criticism. It seems to me, though, that there’s a valid point tucked away in Carson’s essay; he’s got it by the wrong end, and it doesn’t imply what he thinks it does, but the point is nonetheless there, and important.

Getting to it, though, requires a certain tolerance for intellectual sloppiness of a kind embarrassingly common in today’s culture. When Carson talks about “the Jared Diamond/John Michael Greer/William Kunstler theory of ‘catabolic collapse,’” for example, it’s hard to escape the conclusion that he simply hasn’t taken the time to learn much about his subject. “Catabolic collapse,” after all, isn’t a generic label for collapse in general; it’s the name for a specific theory about how civilizations fall—those who are interested can download a PDF here—which I developed between 2001 and 2004 and published online in a 2005 essay, and the other two names he cited had nothing to do with it.

Mind you, I would be delighted to hear that Jared Diamond supports the theory of catabolic collapse, but as far as I know, he’s never mentioned it in print, and the modes of collapse he discusses in his book Collapse: How Societies Choose to Fail or Succeed differ significantly from my model. As for the third author, presumably Carson means James Howard Kunstler, the author of The Long Emergency and Too Much Magic—very solid books about the approaching end of the industrial age, though once again based on a different theory of collapse—rather than William Kunstler, the late civil rights lawyer who defended the Chicago Seven back in 1969, and who to the best of my knowledge never discussed the collapse of civilizations at all.

This same somewhat casual relationship to matters of fact pops up elsewhere in Carson’s essay, and leaves his argument rather the worse for wear.  Carson’s claim is that the accelerating breakdown of the existing infrastructure of industrial society isn’t a problem, because that infrastructure either is being replaced, or is sure to be replaced (he is somewhat vague on this distinction), by newer, better and cheaper high-tech systems. What Buckminster Fuller used to call ephemeralization—defined, with Bucky’s usual vagueness, as “doing more with less”—is, in Carson’s view, “one of the most central distinguishing characteristics of our technology,” and guarantees that new infrastructures will be so much less capital-intensive than the old ones that replacing the latter won’t be a problem.

That’s a claim worth considering. The difficulty, though, is that the example he offers—also borrowed from Fuller—actually makes the opposite case.  Replacing a global network of oceanic cables, with their immense total weight, with a few dozen communications satellites weighing a few tons each does look, at first glance, like a dramatic step toward ephemeralization, but that impression remains only as long as it takes to ask whether the satellites are replacing those cables all by themselves. Of course they’re not; putting those satellites up, keeping them in orbit, and replacing them requires an entire space program, with all its subsidiary infrastructure; getting signals to and from the satellites requires a great deal more infrastructure. Pile all those launch gantries, mission control centers, satellite dishes, and other pieces of hardware onto the satellite side, and the total weight on that end of the balance starts looking considerably less ephemeral than it did.  Even if you add a couple of old-fashioned freighters on the cable side—that’s the modest technology needed to lay and maintain cables—it’s far from clear that replacing cables with satellites involves any reduction in capital intensity at all.

All this displays one of the more troubling failures of contemporary intellectual culture, an almost physiological inability to think in terms of whole systems. I’ve long since lost count of the number of times I’ve watched card-carrying members of the geekoisie fail to grasp that their monthly charge for internet service isn’t a good measure of the whole cost of the internet, or skid right past the hard economic fact that the long term survival of the internet depends on its ability to pay for itself.  This blindness to whole systems is all the more startling in that the computer revolution itself was made possible by the creation of systems theory and cybernetics in the 1940s and 1950s, and whole-systems analysis is a central feature of both these disciplines.

To watch the current blindness to whole systems in full gaudy flower, glance over any collection of recent chatter about “cloud computing.” What is this thing we’re calling “the cloud?”  Descend from the airy realms of cyber-abstractions into the grubby underworld of hardware, and it’s an archipelago of huge server farms, each of which uses as much electricity as a small city, each of which has a ravenous hunger for spare parts, skilled labor, and many other inputs, and each of which must be connected to all the others by a physical network of linkages that have their own inescapable resource demands. As with Fuller’s satellite analogy, the ephemeralization of one part of the whole system is accomplished at the cost of massive capital outlays and drastic increases in complexity elsewhere in the system.

All this needs to be understood in order to put ephemeralization into its proper context. Still, Carson’s correct to point out that information technologies have allowed the replacement of relatively inefficient infrastructure, in some contexts, with arrangements that are much more efficient. The best known example is the replacement of old-fashioned systems of distribution, with their warehouses, local jobbers, and the rest, with just-in-time ordering systems that allow products, parts, and raw materials to be delivered as they’re needed, where they’re needed. Since this approach eliminates the need to keep warehouses full of spare parts and the like, it’s certainly a way of doing more with less—but the consequences of doing so are considerably less straightforward than they appear at first glance.

To understand how this works, it’s going to be necessary to spend a little time talking about catabolic collapse, the theory referenced earlier. The basis of that theory is the uncontroversial fact that human societies routinely build more infrastructure than they can afford to maintain. During periods of prosperity, societies invest available resources in major projects—temples, fortifications, canal or road systems, space programs, or whatever else happens to appeal to the collective imagination of the age. As infrastructure increases in scale and complexity, the costs of maintenance rise to equal and exceed the available economic surplus; the period of prosperity ends in political and economic failure, and infrastructure falls into ruin as its maintenance costs are no longer paid.

This last stage in the process is catabolic collapse. Since the mismatch between maintenance costs and economic capacity is the driving force behind the cycle, the collapse of excess infrastructure has a silver lining—in fact, two such linings. First, since ruins require minimal maintenance, the economic output formerly used to maintain infrastructure can be redirected to other uses; second, in many cases, the defunct infrastructure can be torn apart and used as raw materials for something more immediately useful, at a cost considerably lower than fresh production of the same raw materials would require. Thus post-Roman cities in Europe’s most recent round of dark ages could salvage stone from  temples, forums, and coliseums to raise walls against barbarian raiders, just as survivors of the collapse of industrial society will likely thank whatever deities they happen to worship that we dug so much metal out of the belly of the earth and piled it up on the surface in easily accessible ruins.

Given a stable resource base, the long-term economic benefits of catabolic collapse are significant enough that a new period of prosperity normally follows the collapse, resulting in another round of infrastructure buildup and a repetition of the same cycle.  The pulse of anabolic expansion and catabolic collapse thus defines, for example, the history of imperial China. The extraordinary stability of China’s traditional system of village agriculture and local-scale manufacturing put a floor under the process, so that each collapse bottomed out at roughly the same level as the last, and after a century or two another anabolic pulse would get under way. In some places along the Great Wall, it’s possible to see the high-water marks of each anabolic phase practically side by side, as each successful dynasty’s repairs and improvements were added onto the original fabric.

Matters are considerably more troublesome if the resource base lacks the permanence of traditional Chinese rice fields and workshops. A society that bases its economy on nonrenewable resources, in particular, has set itself up for a far more devastating collapse. Nonrenewable resource extraction is always subject to the law of diminishing returns; while one resource can usually be replaced by another, that simply means a faster drawdown of still other resources—the replacement of more concentrated metal ores with ever less concentrated substitutes, the usual example cited these days for resource substitution, required exponential increases in energy inputs per ton of metal produced, and thus hastened the depletion of concentrated fossil fuel reserves.

As the usual costs of infrastructure maintenance mount up, a society that runs its economy on nonrenewable resources also faces rising costs for resource extraction. Eventually those bills can no longer be paid in full, and the usual pattern of political and economic failure ensues. It’s at this point that the real downside of dependence on nonrenewable resources cuts in; the abandonment of excess infrastructure decreases one set of costs, and frees up some resources, but the ongoing depletion of the nonrenewable resource base continues implacably, so resource costs keep rising. Instead of bottoming out and setting the stage for renewed prosperity, the aftermath of crisis allows only a temporary breathing space, followed by another round of political and economic failure as resource costs continue to climb. This is what drives the stairstep process of crisis, partial recovery, and renewed crisis, ending eventually in total collapse, that appears so often in the annals of dead civilizations.
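For those who like to watch the dynamic play out in numbers, here is a deliberately crude sketch in Python. It is not the formal model from the catabolic collapse paper; every coefficient in it, the level of production, the maintenance ratio, the one per cent annual rise in extraction costs, is an invented placeholder. All it shows is the shape of the contrast drawn above: with a stable resource base the pulse of buildup and shedding repeats around the same level, while with a depleting base each recovery peaks lower than the last until no amount of infrastructure can be maintained at all.

```python
# A deliberately crude sketch, not the formal catabolic collapse model.
# Every coefficient is an invented placeholder; only the shape of the two
# trajectories matters.

def run(depleting, years=400):
    infra = 10.0           # accumulated infrastructure that must be maintained
    extraction_cost = 1.0  # annual cost of obtaining the key resources
    trajectory = []
    for _ in range(years):
        production = 50.0
        maintenance = 0.5 * infra
        surplus = production - maintenance - extraction_cost
        if surplus > 0:
            # Prosperity: keep building at a fixed pace, overshooting what
            # the surplus can actually maintain.
            infra += 10.0
        else:
            # Crisis: shed the infrastructure that can no longer be kept up.
            infra *= 0.9
        if depleting:
            # Diminishing returns on a nonrenewable resource base.
            extraction_cost *= 1.01
        trajectory.append(infra)
    return trajectory

if __name__ == "__main__":
    stable = run(depleting=False)
    fragile = run(depleting=True)
    for label, t in (("stable base   ", stable), ("depleting base", fragile)):
        print(f"{label}: year 100 ~ {t[99]:.0f}, year 250 ~ {t[249]:.0f}, "
              f"year 400 ~ {t[399]:.0f}")
```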

Though he’s far from clear about it, I suspect that this is what Carson meant to challenge by claiming that the increased efficiencies and reduced capital intensity of ephemeralized technology make worries about catabolic collapse misplaced. He’s quite correct that increased efficiency, “doing more with less,” is a response to the rising spiral of infrastructure maintenance costs that drive catabolic collapse; in fact, it’s quite a common response, historically speaking. There are at least two difficulties with his claim, though. The first is that efficiency is notoriously subject to the law of diminishing returns; the low hanging fruit of efficiency improvement may be easily harvested, but proceeding beyond that involves steadily increasing difficulty and expense, because in the real world—as distinct from science fiction—you can only do so much more with less and less. That much is widely recognized.  The second difficulty, less often remembered, is that increased efficiency has an inescapable correlate that Carson doesn’t mention: reduced resilience.

It’s only fair to point out that Carson comes by his inattention to this detail honestly. That inattention was among the central themes of the career of Buckminster Fuller, whose ideas give Carson’s essay its basic frame. Fuller had a well-earned reputation in the engineering field of his time as “failure-prone,” and a consistent habit of pursuing efficiency at the expense of resilience was arguably the most important reason why.

The fiasco surrounding Fuller’s 1933 Dymaxion car is a case in point.  One of the car’s many novel features was a center of mass that was extremely high compared to other cars, which combined with an innovative suspension system to give the car an extremely smooth ride. Unfortunately this same feature turned into a lethal liability when a Dymaxion prototype was sideswiped by another vehicle. Then as now, cars on Chicago’s Lake Shore Drive bump into one another quite often, but few of them flip and roll, killing the driver and seriously injuring everyone else on board. That’s what happened in this case, and Chrysler—which had been considering mass production of the Dymaxion car—withdrew from the project at once, having decided that the car wasn’t safe to drive.

The rise and fall of Fuller’s geodesic dome architecture traces the same story in a less grim manner. Those of my readers who were around in the 1960s will recall the way geodesic domes sprang up like mushrooms in those days. By the early 1970s, they were on their way out, for a telling reason. Fuller’s design was extremely efficient in its use of materials, but unless perfectly caulked—and in the real world, there is no such thing as perfect caulking—geodesic domes consistently leaked in the rain. Famed vernacular architect Lloyd Kahn, author of Domebooks 1 and 2, the bibles of the geodesic-dome fad, marked the end of the road with his 1973 sourcebook Shelter, which subjected the flaws of the geodesic dome to unsparing analysis and helped refocus the attention of the nascent appropriate technology scene onto the less efficient but far more resilient technology of shingled roofs. Nowadays geodesic domes are only used in those few applications where their efficiency is more important than their many practical problems.

The unavoidable tradeoff between efficiency and resilience can be understood easily enough by considering an ordinary bridge. All bridges these days have vastly more structural strength than they need in order to support their ordinary load of traffic. This is inefficient, to be sure, but it makes the bridges resilient; they can withstand high winds, unusually heavy loads, deferred maintenance, and other challenges without collapsing. Since the cost of decreased resilience (a collapsed bridge and potential loss of life) is considerably more serious than the cost of decreased efficiency (more tax revenues spent on construction), inefficiency is accepted—and rightly so.
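A back-of-the-envelope calculation makes the same point. The figures below are invented for the sake of illustration, not drawn from any actual bridge, but they show why the expected cost of a collapse dwarfs the extra construction cost of a generous safety margin.

```python
# Invented figures, in millions of dollars: what "inefficient" extra strength
# buys in expected-cost terms over a bridge's service life.

def expected_cost(build_cost, annual_failure_prob, failure_cost, years=50):
    """Construction cost plus the chance-weighted cost of a collapse."""
    prob_no_failure = (1.0 - annual_failure_prob) ** years
    return build_cost + (1.0 - prob_no_failure) * failure_cost

if __name__ == "__main__":
    # Hypothetical: the lean bridge is cheaper to build but far likelier to
    # fail under unusual loads, storms, or deferred maintenance.
    lean = expected_cost(build_cost=50, annual_failure_prob=0.005, failure_cost=2000)
    stout = expected_cost(build_cost=80, annual_failure_prob=0.0001, failure_cost=2000)
    print(f"lean, 'efficient' bridge:    expected cost ~ {lean:.0f}M")
    print(f"stout, 'inefficient' bridge: expected cost ~ {stout:.0f}M")
```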

It’s one of the persistent delusions of contemporary computer culture to claim that this equation doesn’t apply once modern information technology enters into the picture. Nassim Taleb’s widely read The Black Swan is chock-full of counterexamples. As he shows, information networks have proven to be as effective at multiplying vulnerabilities as they are at countering them, and can be blindsided by unexpected challenges just as thoroughly as any other system. The 1998 failure of Long Term Capital Management (LTCM), whose publicists insisted that its computer models could not fail during the lifetime of the universe and several more like it, is just one of many cases in point.

The history of any number of failed civilizations offers its own mocking commentary on the insistence that efficiency is always a good thing. In its final years, for instance, the Roman Empire pursued “doing more with less” to a nearly Fulleresque degree, by allowing the manpower of legionary units along the Rhine and Danube frontiers to decline to a fraction of their paper strength. In peace, this saved tax revenues for critical needs elsewhere; when the barbarian invasions began, though, defenses that had held firm for centuries crumpled, and the collapse of the imperial system duly followed.

In this context, there’s a tremendous irony in the label Fuller used for the pursuit of efficiency.  The word “ephemeral,” after all, has a meaning of its own, unrelated to the one Fuller slapped onto it; it derives from the Greek word ephemeron, “that which lasts for only one day,” and its usual synonyms include “temporary,” “transitory,” and “fragile.”  A society dependent on vulnerable satellite networks in place of the robust reliability of oceanic cables, cloud computing in place of the dispersed security of programs and data spread across millions of separate hard drives, just-in-time ordering in place of warehouses ready to fill in any disruptions in the supply chain, and so on, is indeed more ephemeral—that is to say, considerably more fragile than it would otherwise be. 

In a world facing increasingly serious challenges driven by resource depletion, environmental disruption, and all the other unwelcome consequences of attempting limitless growth on a relentlessly finite planet, increasing the fragility of industrial society is also a good way to see to it that it turns out to be temporary and transitory. In that sense, and only in that sense, Carson’s right; ephemeralization is the wave of the future, and it’s even harder to tell it apart from catabolic collapse than he thinks, because ephemeralization is part of the normal process of collapse, not a way to prevent it.

There’s an equal irony to be observed in the way that Carson presents this preparation for collapse as yet another great leap forward on the allegedly endless march of progress. As discussed earlier in this series of posts, the concept of progress has no content of its own; it’s simply the faith-based assumption that the future will be, or must be, or at least ought to be, better than the present; and today’s passionate popular faith in the inevitability and beneficence of progress makes it embarrassingly easy for believers to convince themselves that any change you care to name, however destructive it turns out to be, must be for the best.  As we continue down the familiar trajectory of decline and fall, we can thus expect any number of people to cheer heartily at the progress, so to speak, that we’re making toward the endpoint of that curve.

Not all such cheering will be as easy to recognize as another rehash of the weary 20th-century technofantasy of “a world without want,” or as that infallible touchstone of the absurd, the insertion of some scrap of Star Trek’s fictional technology into what purports to be a discussion of a future we might actually inhabit. There will no doubt be any number of attempts in the years ahead to insist that our decline is actually an ascent, or the birth pangs of a new and better world, or what have you, and it may well take an unusual degree of clarity to see past the chorus of reassurances, come to terms with the hard realities of our time, and do something constructive about them.