More than thirty years ago, the last men came home from the moon. If, as seems likely, it ends up taking more than fifty years to go from those first brief visits to a permanent lunar settlement, that time will have been well spent: partly in addressing the technical challenges of sustaining a habitable enclave, and even more so in fleshing out our reasons for doing so.

Around twenty years ago, early students of complex systems started to explore the boundary zone between order and chaos. In one of the first systematic explorations, a study of the 256 simplest 1D cellular automata, Stephen Wolfram identified archetypal boundary systems he called "Class 4", intermediate between the regular and nested behaviour of his Classes 1 and 2 and the apparently random behaviour of Class 3.
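That 256-rule space is small enough to sketch directly. The following Python fragment (names are illustrative, not from any particular library) performs one synchronous update of an elementary cellular automaton, using the rule number's binary digits as the lookup table; Rule 110 is a much-studied Class 4 example.

```python
# One synchronous update of a Wolfram elementary CA: 2 states and
# 3-cell neighbourhoods give 2**3 = 8 cases, hence 2**8 = 256 rules.
def step(cells, rule):
    """Apply elementary CA `rule` (0-255) once, with wrap-around edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # 0..7
        out.append((rule >> index) & 1)
    return out

# Rule 110 grown from a single live cell.
row = [0] * 16 + [1] + [0] * 16
for _ in range(8):
    row = step(row, 110)
```

Running a handful of rules this way is enough to see the Class 1/2, Class 3 and Class 4 distinctions emerge from nothing more than an eight-bit lookup table.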

More recently^{[1]}, students of complex systems have been beating a retreat from the border of order-edge of chaos, exemplified by Wolfram's 2002 "Principle of Computational Equivalence". This essay looks at reasons for the retreat and what rationales and technical capabilities might be needed to return to the edge. In particular, it proposes that we should look for signs of the border within active processes more than within their outcomes.

The reason a few people got excited about the boundary zone in the first place was that it appeared to be a place where you could start with simple processes and finish up with complex organisation--a place of self-organisation and even, potentially, of adaptation, where the dead hand of the second law of thermodynamics and increasing entropy could be overcome, as they clearly have sometimes been overcome in nature. Otherwise we would not be here to talk about it.

The significantly older, but similarly computer intensive and seemingly on-again-off-again, quest for artificial intelligence may provide some useful insights into the plight of complex systems studies, and the border of order-edge of chaos in particular. AI has never tired of teaching us how our minds **do not** work. Similarly, complex systems studies have raised and dashed ideas that our universe might be a cellular automaton or some other form of computer simulation. Unfortunately, too many prominent scientists still have not got enough of a handle on the reasonably comprehensible idea of countable infinity, so they keep reviving the simulation idea, much as science fiction writers refuse to let go of the idea of machine intelligence leaving behind even our most optimistic overestimates of human intelligence.

Just as it required a vibrant aerospace industry from which to conceive a visit to the moon, so it has required a vibrant computing industry to even start to explore the boundary of order and chaos. Class 4 cellular automata have been shown to exhibit computational universality, meaning that, given the necessary data input (and great patience), they can compute anything any computer can. Yet that powerful result may be a key reason we seem to have lost sight of the real reasons for exploring the edge. The excitement of theoreticians and the very existence of particular instruments unavoidably bias experimental design and observation.

We are naturally inclined to judge a process by its outcomes, because outcomes are accessible whereas the process is at best something we can describe as an algorithm. The problem here is to understand the relationship between Class 1 through 4 processes and the corresponding outcomes, which can really only be described as Class 1/2 (stable, cyclic, nested) or Class 3 (random). There is no corresponding Class 4 outcome, because a distinguishing feature of Class 4 processes is that each can produce many different outcomes given various input. Yet we most easily judge such processes by their outcomes, so the boundary zone can appear elusive.

By way of familiar example, in John Conway's 2D cellular automaton, the Game of Life, "gliders" are the 4th most frequently generated persistent compact pattern (behind only blinkers, blocks and beehives). The simple fact that gliders move makes Life Class 4, but it also means that in most experimental situations gliders interact and thus disappear disproportionately, relative to compact stable and oscillating patterns, and so they show up far less frequently in experimental outcomes.

One well known fact about universal computers is that the final outcome for many data input values is undecidable. In terms originally defined for universal Turing machines, they do not halt. This non-outcome is as close as we get to a Class 4 outcome, escaping gliders aside. Undecidability, too, frustrates exploration of the boundary zone.

It almost goes without saying that all the systems discussed so far are strictly deterministic, that is, given the same input they always generate the same output, or non-output. Yet they are interesting in that they can often produce outcomes that resemble outcomes produced in nature, and thus they offer potential insight into questions about determinism in nature--questions which persist even in the face of knowledge of relativity, quantum uncertainty and the distinguishing characteristics of open systems.

The systems discussed here are also closed systems. Class 3 (random) systems are also computationally irreducible, so that the only way you can find their outcome is to run them. Class 4 systems tend towards being irreducible, but with islands of predictability which distinguish them from pure Class 3. (Classes 1 and 2 are very reducible.)

There has always been a tendency to think about the complex systems question in terms of how to get from simple to complex, in large part because of the naive assumption that space time energy matter is inherently simple. However, it is now becoming obvious that the ordinary behaviour of space time energy matter is not just far from trivial but also consistently applied across our vast cosmos. That puts our cellular automata experiments into their proper context--as minimal models with which we can make a start on some of the fundamentals of complex systems.

Without discounting the origins problem of how to get from nothing at all to a sufficient something from which all else might arise, nor the active analytic philosophy community studying the problematic relationship between our intellectual models and naturally complex systems, there appears to be something to be said for seeing how things look when we stand a couple of traditional priority orderings on their heads:

- Firstly, the world we find ourselves in is clearly complex in the extreme, a Class 4 or hypothetically better system. Within that system, simpler behaviour often arises, and it is particularly important to us when we can recognise and predict Class 1 and 2 patterns, or recognise the Class 3 randomness underpinning the fundamental resilience that is such an important property of the world but not yet of our models.
- Secondly, we might remember previous lessons bestowed by our natural anthropocentric tendencies and reverse the assumed hierarchy of complexity. Instead of promoting the notion that human consciousness is *ipso facto* the most complex and mysterious process in the universe, try out the idea that space time energy matter is the most complex and mysterious, biological life next, with consciousness and other human endeavours really being far less complex and mysterious.

Having successfully inverted those traditional prioritisations, what is still needed to enable us to abandon the retreat from Class 4? In the final analysis, all things are as important as the uses to which they are put. There should certainly be useful things to learn from both mature complex systems and from our model complex systems--things which make us more effective in our work, both with and within the rapidly complexifying, but still very immature, systems that are the hallmark of human endeavour.

It appears that the recent deemphasis of the boundary zone--the border of order-edge of chaos, Class 4--may in large part be because the methods familiar from science and mathematics deal habitually with outcomes and have intrinsic difficulty examining processes other than through their outcomes. Determining even relatively simple processes from their input and outcomes alone is not just a hard problem, but a problem which often has no unambiguous solution.

So there might be a lot to gain from finding other ways to explore the boundary zone. Here it is valuable to revisit the assumed meanings of "simple" processes and "complex" systems. Cellular automata are a particularly accessible model in which such concepts can be studied--an n-dimensional grid of cells, each of which, at each tick of a universal clock, simultaneously determines its next state as a function of the states of the cells in its neighbourhood at the previous tick. That function is simple. The aggregate behaviour of all the cells in the grid is sometimes complex.
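As a minimal sketch of that definition (two states, two dimensions and wrap-around boundaries assumed for brevity; the names are illustrative), the simultaneous update can be expressed by reading one grid while writing another:

```python
# Generic synchronous CA update: every cell computes its next state from
# its neighbourhood's *previous* states, so we read `grid` and write `new`.
def step(grid, rule):
    """One tick. `rule` maps the 3x3 block of previous states (as a flat
    9-element list, centre at index 4) to the cell's next state."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbourhood = [
                grid[(y + dy) % h][(x + dx) % w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            new[y][x] = rule(neighbourhood)
    return new
```

Passing `lambda nb: nb[4]` (each cell simply keeps its own state) gives a trivially stable Class 1 system; Conway's Life is just another choice of `rule` over the same neighbourhood.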

In Conway's Life, that function uses the previous live or dead state of a cell and the number of live cells amongst its eight near neighbours to determine the next state of that cell. There are 262,144 possible rules. Life is but one of them. A handful of others are fairly well known amongst the community of interest, but most are less than interesting.
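The count follows because each rule in this space must assign a next state to every combination of previous cell state (dead or live) and live-neighbour count (0 through 8): 2 x 9 = 18 binary choices, hence 2^18 rules. A hedged sketch (illustrative names) of Life as one point in that space:

```python
# 18 binary choices: previous state (dead/live) x live-neighbour count
# (0..8), so the rule space holds 2**18 = 262,144 rules.
RULE_SPACE_SIZE = 2 ** (2 * 9)

def life_rule(alive, live_neighbours):
    """Conway's choice: survival on 2 or 3 live neighbours, birth on 3."""
    if alive:
        return 1 if live_neighbours in (2, 3) else 0
    return 1 if live_neighbours == 3 else 0
```

Any other assignment of the 18 next states picks out a different rule in the same space.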

Life is considered complex because its simple rules produce a seemingly endless variety of behaviours and outcomes from input data as simple as just five neighbouring live cells. A glider is actually two such five cell neighbourhoods which generate each other, or at least a diagonal reflection of each other, so that after four ticks the original five cell pattern reappears shifted one cell diagonally. The r-pentomino, which differs from each glider phase in only one live cell position, in contrast produces 25 stable, oscillating and travelling patterns with a total of 116 live cells after 1103 ticks.
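The glider's four-tick diagonal shift is easy to check mechanically. The sketch below (a set-of-live-cells implementation of Life; names are illustrative) advances one glider phase four ticks and recovers the same five cell shape one cell further along the diagonal:

```python
from collections import Counter

def life_step(live):
    """One Life tick on a set of live (x, y) cells, unbounded grid."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # one five-cell phase
after = glider
for _ in range(4):
    after = life_step(after)
# `after` is now the original pattern shifted one cell diagonally.
```

Feeding the r-pentomino through the same `life_step` for 1103 ticks reproduces the settled census quoted above.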

It only makes sense to interpret the definition of Class 4 inclusively. When at least one rule within a particular cellular automata form is known to be Class 4, it makes sense to say that that cellular automata form is also Class 4 without in any way implying that other rules of that form will be Class 4. By extension, we know that Life is Class 4, not because all or many data input values lead to complex behaviour but because some do. Exploration of Class 4 involves finding data input values which produce complex behaviour and using what is found to hypothesise and test for even more complex behaviour.

In practice we judge complex behaviour to be present when we see a mix of interesting nested behaviour and clearly random behaviour from the same rule. Long repetition cycles and complex growth from compact "seeds" are also indicative. Even stronger indication is provided by the occurrence of glider-like travelling signals, and ultimately by the discovery of open ended growth which is not simply repetitive and thus effectively undecidable. (The notion of "undecidable" is best thought of in terms of a question, though it is usually hard to see what words might describe the question which such an open ended outcome is failing to answer.)

There has been a tendency, particularly in pursuit of proofs of universality, to extend the hypothesise-and-test cycle to large scale design and engineering, in the process invoking data input values so complex as to call into question whether the significance of having a simple underlying process has been devalued. By way of contrast, there is some attraction in seeing how far we can take our explorations if each step is clearly simple.

Given that the basic idea of cellular automata is simple, it is a sufficiently simple step to the 2D, two colour (live and dead), eight neighbour, outer totalistic cellular automata rule space. As mentioned, Conway's Life is but one of the 262,144 rules in that rule space. Those steps are clearly simple, in that only a modicum of data is needed to define them. Exploring even a large number of simple seed patterns maintains the keep-it-simple approach, but there are also other kinds of productive simple steps which can explore other corners of Life.

2D cellular automata have traditionally been implemented on a notionally infinite grid or on a large toroidal surface formed by joining the opposite edges of a rectangular grid. The basic aim has been to avoid edge effects, but a lot of more complex behaviour has also been identified in long narrow toroidal grids where those edge effects add new possibilities. That particular exploration is being reported elsewhere as "Life in a Tube", and it is certainly not the only simple derivative of Life. The point of mentioning it here is that it exemplifies a different approach to further exploration of the boundary zone.
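"Life in a Tube" itself is reported elsewhere, but the toroidal mechanics are simple to sketch: opposite edges are joined by taking every coordinate modulo the grid size, and a tube is just a torus with one dimension made very small. The fragment below (illustrative names; a set-of-live-cells variant of Life) shows the wrapping:

```python
from collections import Counter

def torus_step(live, width, height):
    """One Life tick on a width x height torus: coordinates wrap around."""
    counts = Counter(
        ((x + dx) % width, (y + dy) % height)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker on a 5x5 torus still oscillates with period two; shrink
# `height` towards the pattern size and the wrap starts to matter.
blinker = {(2, 1), (2, 2), (2, 3)}
```

The only difference from the unbounded version is the modulo arithmetic, which is what lets patterns interact with themselves through the joined edges of a narrow tube.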

Ultimately, "simple" serves as a synonym for "could conceivably occur naturally". Despite the aforementioned popular misconceptions about the relationship between a hypothetical infinite universe and the realisation therein of a countable subset of all possibilities, the fact remains that the multiverse we find ourselves in is very large, especially when measured in its fundamental units, and thus big enough that simple doesn't have to be all that simple. (You can add to this the power of symbiosis and its analogues to achieve far more complexity than through simple emergence alone.) So if we want to use cellular automata and the like as models for exploring the border of order-edge of chaos, we can probably afford to explore a step or three beyond even Life in a Tube, adding just some simple data input values at each step. (This could detour into a discussion of how a finite automaton governed by Wolfram's Rule 22, trapped by blockages in a Tube, can demonstrate self organisation from irreversible to reversible, but that is a story for another day.)

Despite such productive detours, there is still no guarantee short of Wolfram's proposed Principle of Computational Equivalence that cellular automata is a viable starting point for exploring the full potential of the boundary zone between order and chaos in the multiverse. What cellular automata clearly has going for it is its excellent fit with human visual perception and the human mind. So it is at least an excellent place to hone our exploration technologies.

Ultimately we would like to know whether Class 4 provides a starting path towards something even more complex that might be needed to fully account for the wonders of our world via a path that can be followed in simple enough steps to be credible. Just as the moon base will provide an essential step towards our heirs and successors' deeper exploration and exploitation of space, so our step by step exploration of the border of order-edge of chaos will facilitate our understanding and exploitation of the true nature of space time energy matter.

^{[1]} The retreat was started by a 1993 paper by Melanie Mitchell, Peter Hraber and James Crutchfield which actually questioned Norman Packard's evidence for Chris Langton's problematic proposal for a linear characterisation of CA rule space (which Langton called λ) rather than more general evidence for the edge.
