Metasystem Transitions Create Modelable Complexity

A complex system is present when its global behavior results from the interactions of many small parts. The behavior emerges from the system as a whole, and we cannot predict this behavior just from understanding the rules that govern the behavior of each part individually. As seen in Chapter 2, the transition from a simple to a complex system is neither simple nor discrete.

Complexity and emergence are only linguistic labels for diffuse problems in hemocoel dynamics, economics, artificial life, artificial intelligence, neuroscience, and even cultural change and development. Might we probe deeper and ask of any of these systems why the 'something' we observed 'happened'? For our 'explanation' to be adequate, at least at some level, we must know a sequence of events and interactions that led up to our 'something.' So we would be asking our old question again: can we 'explain,' or infer connections between, the initial state of our system and what changes to produce the 'something' we observe?

Asking these questions presupposes that we understand, at some acceptable level, the tools and techniques we use to define our system's rules, and we must also accept the validity of any theory that 'defines' our tools. But, still, our great open question always necessitates a leap of faith. The question remains: can we predict the outcome from an initial state without having to calculate every interaction?

This means: do we have, or can we obtain, a sufficiently deep understanding of the system that we can imagine some minimal number of symmetries that let us calculate the outcome? Or, given the outcome, may we go stepwise in reverse order back to some space of initial states? If we can do this, we have our mapping from the space of initial states to the space of outcomes. And we would be almost home.

For if we can simulate the hemocoel system, and our model arrives at the result we expect, may we assume that the information hiding in the intermediate steps of our simulation explains the original sequence? Maybe yes, and maybe no. But if we can reproduce the behavior of the hemocoel and control it at each step, will we understand? Surely if we understood the system, we should not need to simulate its behavior. We would understand both the circumstances necessary for each step toward the outcome and the correlation between the initial state and the outcome.

Clearly, then, we have a continuum of levels of difficulty and a continuum in the complexity of our analysis. This is truly quite a deep issue. There is of course no discontinuous separation between emergence and non-emergence. Emergence instead results from a 'phase change' in how much computation we must do to optimally predict outcomes. In computational terms, some minimal amount of computation is required to predict the outcome.

Ultimately, all useful predictive knowledge lies in the accumulated interactions, and the time required to complete our computations depends on our machines. For finite computations, the times required on different Turing machines are related by at most a polynomial factor, so if this phase transition is real, it should not be machine-dependent. We measure the complexity of a step in terms of its Kolmogorov complexity, in other words the length of its minimal description. The intuitive idea is that increases in Kolmogorov complexity often offset any decrease in computational steps (Ref: Modeling, Rent's Rule, and Kolmogorov Complexity).
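Kolmogorov complexity itself is uncomputable, but compressed length bounds it from above. As a minimal sketch, assuming only Python's standard zlib module, a compressor can serve as a rough proxy for the length of a minimal description:

```python
import os
import zlib

def complexity_proxy(description: bytes) -> int:
    """Upper-bound proxy for Kolmogorov complexity: the byte length
    of a zlib-compressed description. True Kolmogorov complexity is
    uncomputable; compression only bounds it from above."""
    return len(zlib.compress(description, 9))

# A highly regular description compresses far below its raw length...
print(complexity_proxy(b"ab" * 500))       # small: the pattern is simple
# ...while an irregular one stays near its raw length of 1000 bytes.
print(complexity_proxy(os.urandom(1000)))  # close to 1000
```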

So in an emergent system, our ability to predict can be at best zero. That we cannot predict emergent properties stems not from any failure to understand, but from an inherent property of the system, brought about at least in part by the accumulation of interactions. Understanding this, we need no longer deal with any explicit dichotomy between emergent and non-emergent phenomena. Our perceived lack of understanding is really just another way of describing the complexity of the map between the initial state and our final phenomena. In this sense, the way in which lacking knowledge of initial conditions causes increasingly poor predictions is analogous to a discrete version of chaos. Any single phenomenon may fall anywhere in the spectrum between trivial prediction and emergence.
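The logistic map makes this discrete analogue of chaos concrete. In the sketch below (a generic illustration, not anything specific to hemocoels), two initial states differing by one part in a million diverge until prediction from the initial state is worthless:

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1 - x), r = 4,
# started a tiny distance apart; the gap roughly doubles each step until
# it spans the whole unit interval.
r = 4.0
x, y = 0.400000, 0.400001  # initial states differing by 1e-6
for step in range(1, 31):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 5 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
```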

Remember that chaos aids distribution because it aids transport, and transport limits turnover within bodies. Consider again how movement of viruses or malarial parasites from a mosquito's gut to its salivary glands depends on transport within hemolymph. The same holds for cooling, whether in devices, bees, or other animals. To be maximally effective, coolants moving within the interiors of systems must penetrate both superficial and deep compartments. The more intimately a coolant associates with the internal surfaces producing heat and the external surfaces radiating this heat, the more controlled becomes the transfer of heat (Ref: Chaos and Control).

Chaos works because particles, be they molecules or cells, suspended in blood or hemolymph and responding to a chaotic component of their transport modality can explore a much wider range of values, and potentially enter a wider range of the spaces available to them within the body or device, than could molecules or cells transported solely by rhythmical oscillations of their transport medium. Chaos introduces plasticity to cope with unpredictable changes in the environment. One direct way to follow particles through hemocoels appears in (Ref: Localization Within Hemocoels).
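A minimal one-dimensional caricature of this contrast, assuming nothing about real hemolymph, advects a particle with a purely rhythmic flow and then adds a small chaotic kick (again from the logistic map), comparing the interval of positions each particle visits:

```python
import math

def explored_width(chaotic: bool, steps: int = 5000) -> float:
    """Track a particle's 1-D position under a rhythmic flow, optionally
    perturbed by a chaotic component (logistic map, r = 4). Returns the
    width of the interval of positions the particle visits."""
    pos, lo, hi = 0.0, 0.0, 0.0
    z = 0.37                       # state of the chaotic driver
    for t in range(steps):
        v = math.sin(0.1 * t)      # rhythmic oscillation of the medium
        if chaotic:
            z = 4.0 * z * (1.0 - z)
            v += z - 0.5           # roughly zero-mean chaotic kick
        pos += v
        lo, hi = min(lo, pos), max(hi, pos)
    return hi - lo

print("rhythmic only:", round(explored_width(False), 1))
print("with chaos:   ", round(explored_width(True), 1))
```

The purely rhythmic particle shuttles back and forth over a fixed interval, while the chaotic kicks let its counterpart wander over a much wider one.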

Even though we cannot make an algorithm to optimize a general computer program, especially one containing chaotic elements, practical optimizations are possible because real biological programs, and even computer programs, contain redundancies that may lead to efficiencies. Might our local cuticle-synthesizing 'programs,' or machines optimizing their own code over geological time, have figured out the best local way to perform their part in the synthetic function? Or, simplistically, suppose the local machinery learns to be five times more efficient, and the superprogram containing a metaprogram that optimizes and complements local methods becomes twenty times more efficient. Together the gains are multiplicative, speeding the process around a hundred times. I probably have to stop now and draw my conclusions.
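The arithmetic behind that claim, sketched with the hypothetical figures from the paragraph above, is simply that independent speedups compose by multiplication:

```python
# Independent speedups compose multiplicatively, not additively.
local_speedup = 5.0         # hypothetical gain from the local machinery
metaprogram_speedup = 20.0  # hypothetical gain from the optimizing metaprogram
combined = local_speedup * metaprogram_speedup
print(f"combined speedup: {combined:.0f}x")  # 5 x 20 = 100x
```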
