Evolution 2.0: On the origin of technologies

Evolution 2.0: On the origin of technologies, by W. Brian Arthur, published in New Scientist, 19 August 2009.

W. Brian Arthur, author of The Nature of Technology, reflects upon the concept of evolution applied to technology, reaching the conclusion that technology is autopoietic (i.e., it creates itself, although with the obvious help of human agency).

Barely four years after the publication of Darwin's On the Origin of Species, the Victorian novelist Samuel Butler was calling for a theory of evolution for machines. Since then, a few hardy souls have attempted to oblige him, but none has quite hit the mark. Their reasoning, very much à la Darwin, is that any given technology has many designers with different ideas —which produces many variations. Of these variations, some are selected for their superior performance and pass on their small differences to future designs. The steady accumulation of such differences gives rise to novel technologies and the result is evolution.

This sounds plausible, and it works for already existing technologies —certainly the helicopter and the cellphone progress by variation and selection of better designs. But it doesn't explain the origin of radically novel technologies, the equivalent of novel species in biology. The jet engine, for example, does not arise from the steady accumulation of changes in the piston engine, nor does the computer emerge from accumulated changes in electromechanical calculators. Darwin's mechanism does not apply to technology.

(...)

To start with, we can observe that all technologies have a purpose; all solve some problem. They can only do this by making use of what already exists in the world. That is, they put together existing operations, means, and methods —in other words, existing technologies— to do the job.

(...)

So novel technologies are constructed from combinations of existing technologies. While this moves us forward, it is not yet the full story. Novel technologies (think of radar) are also sometimes created by capturing and harnessing novel phenomena (radio waves are reflected by metal objects). But again, if we look closely, we see that phenomena are always captured by existing technologies —radar used high-frequency radio transmitters, circuits, and receivers to harness its effect. So we are back at the same mechanism: novel technologies are made possible by —are created from— combinations of the old.

In a nutshell, then, evolution in technology works this way: novel technologies form from combinations of existing ones, and in turn they become potential components for the construction of further technologies. Some of these in turn become building blocks for the construction of yet further technologies. Feeding this is the harnessing of novel phenomena, which is made possible by combinations of existing technologies.

This mechanism, which I call combinatorial evolution, has an interesting consequence. Because new technologies arise from existing ones, we can say the collective of technology creates itself out of itself. In systems language, technology is autopoietic (from the Greek for "self-creating"). Of course, technology doesn't create itself from itself all on its own. It creates itself with the agency of human beings, much as a coral reef creates itself from itself with the assistance of small organisms.

Does Language Shape What We Think?

Does Language Shape What We Think?, by Joshua Hartshorne, published in Scientific American, 18 August 2009.

For quite some time now, we have assumed that Whorfianism (i.e., the idea that our language shapes the way we think) is generally correct. However, as Joshua Hartshorne explains, this is far from clear.

Eskimos, as is commonly reported, have myriad words for snow, affecting how they perceive frozen precipitation. A popular book on English notes that, unlike English, "French and German can distinguish between knowledge that results from recognition... and knowledge that results from understanding." Politicians try to win the rhetorical battle ("pro-life" vs. "anti-abortion"; "estate tax" vs. "death tax") in order to gain political advantage.

For all its social success, Whorfianism has fared less well scientifically. Careful consideration of the examples above shows why. Try calling dry snow "dax" and wet snow "blicket", and see if you notice a change in how you think about snow. I didn't. The English book's statement assumes that if you don't have a word for something, you can't talk about it... a claim that the sentence proves false. Finally, calling the law of October 26, 2001, the "USA Patriot Act" may have done as much to stain the word "patriot" as increase enthusiasm for the law.

Oh, and Eskimos don't have all that many words for snow.

In fact, scientists have had so much difficulty demonstrating that language affects thought that in 1994 renowned psychologist Steven Pinker called Whorfianism dead. Since then, Whorfianism has undergone a small resurgence. For instance, Lera Boroditsky and colleagues found that speakers of Russian, which treats light blue and dark blue as primary colors, are faster to categorize shades of blue.

While this is fascinating and important work, these and other similar results fall a bit short of showing that "the more words you know, the more thoughts you can have." The recent study that comes closest is an investigation of number.

Although number words and counting are a fixture of life in most cultures from the time we are old enough to play hide-and-go-seek, some languages have only a handful of number words. In a paper published in 2008, MIT cognitive neuroscientist Michael Frank and colleagues demonstrated that Pirahã, a language spoken by a small Amazonian community, has no number words at all. The research team simply asked Pirahã speakers to count different numbers of batteries, nuts and other common objects. Rather than having a word consistently used to describe "one X", a different word for "two Xs" and yet another word for "three Xs", the Pirahã used hói to describe a small number of objects, hoí to describe a slightly larger number, and baágiso for an even larger number. Basically, these words mean "around one", "some" and "many".

The lack of number words had a profound and surprising effect on what the Pirahã could do. In a series of experiments, the researchers presented Pirahã participants with some number of spools of thread. The participants' task was simply to give the researcher the same number of balloons. If the participants were allowed to line up the balloons next to the spools of thread one-by-one, they did fine. But if they weren't allowed this crutch —for instance, if the spools of thread were dropped into a bucket one at a time, and then the participant had to produce the same number of balloons— they failed. Although they were generally able to stay in the ballpark —if a lot of spools went into the bucket, they produced a lot of balloons; a small number of spools, a small number of balloons— their responses were basically educated guesses.

Could it be that the Pirahã do not understand the concept of "same amount"? That's unlikely. When allowed to match the balloons to spools one-by-one, they succeeded in the task. Instead, it seems that they failed to give the same number of balloons only when they had to rely on memory.

(...)

This suggests a different way of thinking about the influence of language on thought: words are very handy mnemonics. We may not be able to remember what seventeen spools look like, but we can remember the word seventeen. In his landmark book The Language of Thought, philosopher Jerry Fodor argued that many words work like acronyms. French students use the acronym bangs to remember which adjectives go before nouns ("Beauty, Age, Number, Goodness, and Size"). Similarly, sometimes it's easier to remember a word (calculus, Estonia) than what the word stands for. We use the word, knowing that should it become necessary, we can search through our minds —or an encyclopedia— and pull up the relevant information (how to calculate an integral; Estonia's population, capital and location on a map). Numbers, it seems, work the same way.

The Real Fantastic Stuff

The Real Fantastic Stuff, by Richard K. Morgan, published on the Suvudu website, 18 February 2009.

Written by Richard K. Morgan, himself a fantasy writer, this essay stirred some controversy among fantasy readers in the past few months. While some agree with the author's criticism of Tolkien's archetypal world, others thought it amounted to little more than self-publicity to promote Morgan's new book.

I am not much of a Tolkien fan —not since I was about twelve or fourteen anyway (which, it strikes me, is about the right age to read and enjoy this stuff). But it would be a foolish writer in the fantasy field who failed to acknowledge the man's overwhelming significance in the canon. And it would be a poor and superficial reader of Tolkien who failed to acknowledge that in amongst all the overwrought prose, the nauseous paeans to class-bound rural England, and the endless bloody elven singing that infests The Lord of the Rings, you can sometimes discern the traces of a bleak underlying human landscape which is completely at odds with the epic fantasy narrative for which the book is better known.

(...)

For me, this is some of the finest, most engaging work in The Lord of the Rings. It feels —perhaps a strange attribute for a fantasy novel— real. Suddenly, I'm interested in these orcs. Gorbag is transformed by that one laconic line about the city, from slavering brutish evil-doer to world-weary (almost noir-ish) hard-bitten survivor. The simplistic archetypes of Evil are stripped away and what lies beneath is —for better or brutal worse— all too human. This is the real meat of the narrative, this is the telling detail (as Bradbury's character Faber from Fahrenheit 451 would have it), no Good, no Evil, just the messy human realities of a Great War as seen from ground level. And I don't think it's a stretch to say that what you're probably looking at here are the fossil remnants of Tolkien's first-hand experiences in his own Great War, as he passed through the hellish trenches and the slaughter of the Somme in 1916.

The great shame is, of course, that Tolkien was not able (or inclined) to mine this vein of experience for what it was really worth —in fact he seemed to be in full, panic-stricken flight from it. I suppose it's partially understandable —the generation who fought in the First World War got to watch every archetypal idea they had about Good and Evil collapse in reeking bloody ruin around them. It takes a lot of strength to endure something like that and survive, and then to re-draw your understanding of things to fit the uncomfortable reality you've seen. Far easier to retreat into simplistic nostalgia for the faded or forgotten values you used to believe in. So by the time we get back to Cirith Ungol in The Return of the King, Gorbag and his comrades have been conveniently shorn of their more interesting human character attributes and we're back to the cackling slavering evil out of Mordor from a children's bedtime story. Our glimpse of something more humanly interesting is gone, replaced once more by the ponderous epic tones of Towering Archetypal Evil pitted against Irritatingly Radiant Good (oh —and guess who wins).

Well, I guess it's called fantasy for a reason.

I only wonder why on earth anyone (adult) would want to read something like that.