Serendipity’s a funny thing. When I started planning out this post a couple of days ago, I knew that I was going to have to pull my battered copy of Gregory Bateson’s Mind and Nature off the bookshelf where I keep basic texts on systems philosophy, since it’s almost impossible to talk about information in any useful way without banking off Bateson’s ideas. I didn’t have any similar intention when I checked out science reporter Charles Seife’s Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking from the local library, much less when I took a break from writing the other evening to watch “Monty Python and the Holy Grail” for the first time since my teens.
Still, I’m not at all sure I could have chosen better, for both of these latter turned out to have plenty of relevance to the theme of this week’s post. Fifty years of failed research and a minor masterpiece of giddy British absurdity may not seem to have much to do with each other, much less with information, Gregory Bateson, or a “green wizardry” fitted to the hard limits and pressing needs of the end of the industrial age. Yet the connections are there, and the process of tracing them out will help more than a little to make sense of how information works – and also how it fails to work.
Let’s start with a few basics. Information is the third element of the triad of fundamental principles that flow through whole systems of every kind, and thus need to be understood to build viable appropriate tech systems. We have at least one huge advantage in understanding information that people a century ago didn’t have: a science of information flow in whole systems, variously called cybernetics and systems theory, that was one of the great intellectual adventures of the twentieth century and deserves much more attention than most people give it these days.
Unfortunately, we also have at least one huge disadvantage in understanding information that people a century ago didn’t have, either. The practical achievements of cybernetics, especially but not only in the field of computer science, have given rise to attitudes toward information in popular culture that impose bizarre distortions on the way most people nowadays approach the subject. You can see these attitudes in an extreme form in the notion, common in some avant-garde circles, that since the amount of information available to industrial civilization is supposedly increasing at an exponential rate, and an exponential curve soars toward infinity if you project it out far enough, then at some point not too far in the future, industrial humanity will know everything and achieve something like omnipotence.
I’ve pointed out several times in these essays that this faith in the so-called “singularity” is a rehash of Christian apocalyptic myth in the language of cheap science fiction, complete with a techno-Rapture into a heaven lightly redecorated to make it look like outer space. It might also make a good exhibit A in a discussion of the way that any exponential curve taken far enough results in absurdity. Still, there’s another point here, which is that the entire notion of the singularity is rooted in a fundamental misunderstanding of what information is and what it does.
Bateson’s work is a good place to start clearing up the mess. He defines information as “a difference that makes a difference.” This is a subtle definition, and it implies much more than it states. Notice in particular that whether a difference “makes a difference” is not an objective quality; it depends on an observer, to whom the difference makes a difference. To make the same point in the language of philosophy, information can’t be separated from intentionality.
What is intentionality? The easiest way to understand this concept is to turn toward the nearest window. Notice that you can look through the window and see what’s beyond it, or you can look at the window and see the window itself. If you want to know what’s happening in the street outside, you look through the window; if you want to know how dirty the window glass is, you look at the window. The window presents you with the same collection of photons in either case; what turns that collection into information of one kind or another, and makes the difference between seeing the street and seeing the glass, is your intentionality.
The torrent of raw difference that deluges every human being during every waking second, in other words, is not information. That torrent is data – a Latin word that means “that which is given.” Only when we approach data with intentionality, looking for differences that make a difference, does data become information – another Latin word that means “that which puts form into something.” Data that isn’t relevant to a given intentionality, such as the dirt on a window when you’re trying to see what’s outside, has a different name, one that doesn’t come from Latin: noise.
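For readers who find a line or two of code clearer than Latin etymologies, the distinction can be sketched out mechanically. The little program below is my own toy illustration, with invented data and an invented filter, not anything drawn from Bateson; it simply shows the same stream of data yielding different information, and different noise, depending on the intentionality brought to it.

```python
# A toy illustration of Bateson's distinction: the same stream of data
# yields different information depending on the intentionality applied.
# All names and values here are invented for the sake of the example.

raw_data = [
    {"kind": "street", "detail": "delivery truck parked outside"},
    {"kind": "glass", "detail": "smudge in the lower corner"},
    {"kind": "street", "detail": "neighbor walking a dog"},
    {"kind": "glass", "detail": "film of dust across the pane"},
]

def extract(data, cares_about):
    """Split data into information (differences that make a difference
    to this intentionality) and noise (everything else)."""
    information = [d["detail"] for d in data if d["kind"] == cares_about]
    noise = [d["detail"] for d in data if d["kind"] != cares_about]
    return information, noise

# Looking through the window: the street matters, the dirt is noise.
info, noise = extract(raw_data, cares_about="street")

# Looking at the window: the dirt matters, the street is noise.
info, noise = extract(raw_data, cares_about="glass")
```

Change the single argument that stands in for intentionality and the very same data sorts itself into different piles; nothing about the data itself has changed.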
Thus the mass production of data in which believers in the singularity place their hope of salvation can very easily have the opposite of the effect they claim for it. Information only comes into being when data is approached from within a given intentionality, so it’s nonsense to speak of it as increasing exponentially in some objective sense. Data can increase exponentially, to be sure, but this simply increases the amount of noise that has to be filtered before information can be made from it. This is particularly true in that a very large fraction of the data that’s exponentially increasing these days consists of such important material as, say, gossip about Kate Hudson’s breast implants.
The need to keep data within bounds to make getting information from it easier explains why the sense organs of living things have been shaped by evolution to restrict, often very sharply, the data they accept. Every species of animal has different information needs, and thus limits its intake of data in a different way. You’re descended from mammals that spent a long time living in trees, for example, which is why your visual system is very good at depth perception and seeing the colors that differentiate ripe from unripe fruit, and very poor at a lot of other things.
A honeybee has different needs for information, and so its senses select different data. It sees colors well up into the ultraviolet, which you can’t, because many flowers use reflectivity in the ultraviolet to signal where the nectar is, and it also sees the polarization angle of light, which you don’t, since this helps it navigate to and from the hive. You don’t “see” heat with a special organ on your face, the way a rattlesnake does, or sense electrical currents the way many fish do; around you at every moment is a world of data that you will never perceive, because your ancestors over millions of generations survived better by excluding that data, so they could extract information from the remainder, than they would have done by including it.
Human social evolution parallels biological evolution, and so it’s not surprising that much of the data processing in human societies consists of excluding most data so that useful information can emerge from the little that’s left over. This is necessary but it’s also problematic, for a set of filters that limit data to what’s useful in one historical or ecological context can screen out exactly the data that might be most useful in a different context, and the filters don’t necessarily change as fast as the context.
The history of fusion power research provides a superb example. For more than half a century now, leading scientists in the world’s industrial nations have insisted repeatedly, and inaccurately, that they were on the brink of opening the door to commercially viable fusion power. Billions of dollars have gone down what might best be described as a collection of high-tech ratholes as the same handful of devices get rebuilt in bigger and fancier models, and result in bigger and costlier flops. They’re still at it; the money the US government alone is paying to fund the two fusion megaprojects du jour, the National Ignition Facility and ITER, would very likely buy a solar hot water system for every residence in the United States and thus cut the country’s household energy use by around 10% at a single stroke. Instead, it’s being spent on projects that even their most enthusiastic proponents admit will only be one more inconclusive step toward fusion power.
The information that is being missed here is that fusion power isn’t a viable option. Even if sustained fusion can be done at all outside the heart of a star, and the odds of that don’t look good just now, it’s been shown beyond a doubt that the cost of building enough fusion power plants to make a difference will be so high that no nation on Earth can afford them. There are plenty of reasons why that information is being missed, but an important one is that industrial society learned a long time ago to filter out data that suggested that any given technology wasn’t going to be viable. During the last three centuries, as fossil fuel extraction sent energy per capita soaring to unparalleled heights, that was an adaptive choice; the inevitable failures – and there have been wowsers – were more than outweighed by the long shots that came off, and the steady expansion of economic wealth powered by fossil fuels made covering the costs of failures and long shots alike a minor matter.
We don’t live in that kind of world any longer. With the peak of world conventional petroleum production receding in the rear view mirror, energy per capita is contracting, not expanding. At the same time, most of the low-hanging fruit in science and engineering has long since been harvested, and most of what’s left – fusion power here again is a good example – demands investment on a gargantuan scale with no certainty of payback. The assumption that innovation always pays off, and that data contradicting that belief is to be excluded, has become hopelessly maladaptive, but it remains welded in place; consider the number of people who insist that the proper response to peak oil is a massive program that would gamble the future on some technology that hasn’t yet left the drawing boards.
It’s at this point that the sound of clattering coconut hulls can be heard in the distance, for the attempt to create information out of data that won’t fit it is the essence of the absurd, and absurdity was the stock in trade of the crew of British comics who performed under the banner of Monty Python. What makes “Monty Python and the Holy Grail” so funny is the head-on collisions between intentionalities and data deliberately chosen to conflict with them; any given collision may involve the intentionality the audience has been lured into accepting, or the intentionality one of the characters is pursuing, or both at once, but in every scene, cybernetically speaking, that’s what’s happening.
Consider King Arthur’s encounter with the Black Knight. The audience and Arthur both approach the scene with an intentionality borrowed from chivalric romance, in which knightly combat extracts the information of who wins and who loses out of the background data of combat. The Black Knight, by contrast, approaches the fight with an intentionality that excludes any data that would signal his defeat. No matter how many of the Black Knight’s limbs get chopped off – and by the end of the scene, he’s got four bloody stumps – he insists on his invincibility and accuses Arthur of cowardice for refusing to continue the fight. There’s some resemblance here to the community of fusion researchers, whose unchanging response to half a century of utter failure is to keep repeating that fusion power is just twenty (more) years in the future.
Doubtless believers in the singularity will be saying much the same thing fifty years from now, if there are still any believers in the singularity around then. The simple logical mistake they’re making is the same one that fusion researchers have been making for half a century; they’ve forgotten that the words “this can’t be done” also convey information, and a very important kind of information at that. Just as it’s very likely at this point that fusion research will end up discovering that fusion power won’t work on any scale smaller than a star, it’s entirely plausible that even if we did achieve infinite knowledge about the nature of the universe, what we would learn from it is that the science fiction fantasies retailed by believers in the singularity are permanently out of reach, and we simply have to grit our teeth and accept the realities of human existence after all.
All these points, even those involving Black Knights, have to be kept in mind in making sense of the flow of information through whole systems. Every system has its own intentionality, and every functional system filters the data given to it so that it can create the information it needs. Even so simple a system as a thermostat connected to a furnace has an intentionality – it “looks” at the air temperature around the thermostat, and “sees” if that temperature is low enough to justify turning the furnace on, or high enough to justify turning it off. The better the thermostat, the more completely it ignores any data that has no bearing on its intentionality; conversely, most of the faults thermostats can suffer can be understood as ways that other bits of data (for example, the insulating value of the layer of dust on the thermostat) insert themselves where they’re not wanted.
The function of the thermostat-furnace system in the larger system to which it belongs – the system of the house that it’s supposed to keep at a more or less stable temperature – is another matter, and requires a subtly different intentionality. The homeowner, whose job it is to make information out of the available data, monitors the behavior of the thermostat-furnace system and, if something goes wrong, has to figure out where the trouble is and fix it. The thermostat-furnace system’s intentionality is to turn certain ranges of air temperature, as perceived by the thermostat, into certain actions performed by the furnace; the homeowner’s intentionality is to make sure that this intentionality produces the effect that it’s supposed to produce.
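For those who like to see such things spelled out, here’s a minimal sketch of those two levels in code. The temperatures, the tolerances, and the crude sanity check in the homeowner’s half are all assumptions of mine for the sake of illustration, not a recipe for an actual heating system; the point is simply that each level filters the data down to what its own intentionality requires.

```python
# Level one: the thermostat-furnace system. Its whole intentionality is
# to map a narrow band of air temperatures onto two furnace actions.
class Thermostat:
    def __init__(self, setpoint=68.0, swing=1.0):
        self.setpoint = setpoint   # target air temperature, degrees F (illustrative)
        self.swing = swing         # hysteresis band, so the furnace doesn't chatter
        self.furnace_on = False

    def update(self, air_temp):
        # Every other fact about the room is ignored; that is the filter.
        if air_temp < self.setpoint - self.swing:
            self.furnace_on = True
        elif air_temp > self.setpoint + self.swing:
            self.furnace_on = False
        return self.furnace_on

# Level two: the homeowner, who watches the system's overall behavior and
# steps in only when the data suggests something has gone haywire.
def homeowner_check(temps, furnace_states, setpoint=68.0, tolerance=5.0):
    """Warn if the house keeps drifting well below the setpoint even
    while the furnace is running, a sign the lower-level system needs attention."""
    cold_while_running = [t for t, on in zip(temps, furnace_states)
                          if on and t < setpoint - tolerance]
    if len(cold_while_running) > len(temps) // 2:
        return "furnace running but the house stays cold: time to investigate"
    return "system behaving itself: leave it alone"
```

Notice that the homeowner’s function never looks at the dust on the thermostat or the color of the furnace; it watches a very small range of data, the temperatures and the furnace states, and leaves everything else out as noise.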
One way or another, this same two-level system plays a role in every part of the green wizard’s work. It’s possible to put additional levels between the system on the spot (in the example, the thermostat-furnace system) and the human being who manages the system, but in appropriate tech it’s rarely a good option; the Jetsons fantasy of the house that runs itself is one of the things most worth jettisoning as the age of cheap energy comes to a close. Your goal in crafting systems is to come up with stable, reliable systems that will pursue their own intentionalities without your interference most of the time, while you monitor the overall output of the system and keep tabs on the very small range of data that will let you know if something has gone haywire.
That same two-level system also applies, interestingly enough, to the process of learning to become a green wizard. The material on appropriate technology I’ve asked readers to collect embodies a wealth of data; what prospective green wizards have to do, in turn, is to decide on their own intentionality toward the data they have, and begin turning it into information. This is the exercise for this week.
Here’s how it works. Go through the Master Conserver files you downloaded, and any appropriate tech books you’ve been able to collect. On a sheet of paper, or perhaps in a notebook, note down each project you encounter – for example, weatherstripping your windows, or building a solar greenhouse. Mark any of the projects you’ve already done with a check mark. Then mark each of the projects you haven’t done with one of four numbers and one of four letters:
1 – this is a project that you could do easily with the resources available to you.
2 – this is a project that you could do, though it would take some effort to get the resources.
3 – this is a project that you could do if you really had to, but it would be a serious challenge.
4 – this is a project that, for one reason or another, is out of reach for you.
A – this is a project that is immediately and obviously useful in your life and situation right now.
B – this is a project that could be useful to you given certain changes in your life and situation.
C – this is a project that might be useful if your life and situation were to change drastically.
D – this is a project that, for one reason or another, is useless or irrelevant to you.
This exercise will produce a very rough and general intentionality, to be sure, but you’ll find it tolerably easy to refine from there. Once you decide, let’s say, that weatherstripping the leaky windows of your apartment before winter arrives is a 1-A project – easy as well as immediately useful – you’ve set up an intentionality that allows you to winnow through a great deal of data and find the information you need: for example, what kinds of weatherstripping are available at the local hardware store, and which of those you can use without spending a lot of money or annoying your landlord. Once you decide that building a brand new ecovillage in the middle of nowhere is a 4-D project, equally, you can set aside data relevant to that project and pay attention to things that matter.
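If it helps to see that winnowing in the same mechanical terms used earlier, here’s one more toy sketch; the projects and rankings in it are placeholders of my own, since assigning the real ones is exactly the judgment the exercise asks you to make.

```python
# A toy version of the ranking exercise. Each project gets a feasibility
# number (1-4) and a usefulness letter (A-D); the entries are invented examples.
projects = {
    "weatherstrip the apartment windows": (1, "A"),
    "build a solar greenhouse": (3, "B"),
    "found an ecovillage in the middle of nowhere": (4, "D"),
}

# The intentionality: start with what is both easy and immediately useful.
do_first = [name for name, (feasibility, usefulness) in projects.items()
            if feasibility == 1 and usefulness == "A"]

# Everything else can be set aside for now; data about it is, for the moment, noise.
set_aside = [name for name in projects if name not in do_first]
```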
Of course you’re going to find 1-D and 4-A projects as well – things that are possible but irrelevant, and things that would be splendidly useful but are out of your reach. Recognizing these limits is part of the goal of the exercise; learning to focus your efforts where they will accomplish the most soonest is another part; recognizing that you’ll be going back over these lists later on, as you learn more, and potentially changing your mind about some of the rankings, is yet another. Give it a try, and see where it takes you.