Can we understand evolved computational systems?

Let's assume that the brain is responsible for creating complex behaviors, and that the brain, like all other organs, was formed through evolution with natural selection. In what ways might we be able to understand what the brain does, and how?

We know that animals successfully navigate around obstacles in complex environments, under varied lighting conditions, while chasing prey or evading predators. Our attempts to build systems that mimic these behaviors using computer vision and robotics have met with remarkably modest success. However, in the process, we've built systems that implement some of these behaviors, and because these systems were explicitly engineered, we can understand how the implementations actually perform those functions. In rare cases, we may even be able to prove mathematically that the function is computed by that system. (Minsky and Papert proved that some calculations could NOT be performed by a certain class of simple computational devices, called Perceptrons.)

We have an advantage when thinking about engineered systems because we know what problem the system was designed to solve and how it solves it. Much of engineering design involves keeping track of details, and commonly the details are organized into hierarchies of functional abstraction: a problem is decomposed into parts, each of which is implemented through a similar kind of decomposition. Because the lower-level details are irrelevant when thinking about a higher level, we can abstract away from them. For example, any physical system that implements basic binary logic gates can be used to build a computer; the gates might be realized by voltages in digital circuits, by the movement of rods and levers, or by the flow of fluids through pipes and valves, as long as the system meets the properties and constraints required by the abstraction.

Knowledge of the abstraction levels is a luxury not afforded us in understanding biological systems that carry out computation. In some cases we know what problem the biological system solves and something about its components (typically, neurons), and we are left to hypothesize what intervening levels of abstraction might support the implementation.

Suppose that instead of engineering a computational system to explicitly compute a specified function, the system were produced by an evolutionary process. Would we be able to understand how it worked? Such a bottom-up understanding is at least implicit in much of the recent work in neuroscience. Here's a simple example to aid our intuition in this area.

Danny Hillis programmed a computer to simulate the evolution of computer programs. The evolved programs were sorting networks: a computational structure that, through a fixed sequence of pairwise compare-and-exchange operations, takes a list of numbers of a given length and puts it into ascending order. The network is fixed in advance and must work for all possible inputs, using only pairwise exchanges to produce the sorted output.

The simplest network would put two input numbers through a single comparator. If the first number were smaller, then those numbers would be passed along to the output unchanged. However, if the first number were larger, then the comparator would output the two numbers exchanged.
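
To make the idea concrete, here is a small sketch in Python. It is purely illustrative: the names compare_exchange and apply_network are mine, not Hillis', and a network is represented simply as a fixed list of positions to compare.

    def compare_exchange(values, i, j):
        # A single comparator: leave positions i and j alone if they are already
        # in order, otherwise exchange them.
        if values[i] > values[j]:
            values[i], values[j] = values[j], values[i]

    def apply_network(network, values):
        # Run a fixed network (a list of (i, j) positions to compare) over a
        # copy of the input and return the result.
        out = list(values)
        for i, j in network:
            compare_exchange(out, i, j)
        return out

    # The simplest network: two inputs, one comparator.
    print(apply_network([(0, 1)], [3, 7]))   # already in order -> [3, 7]
    print(apply_network([(0, 1)], [7, 3]))   # out of order     -> [3, 7]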

The optimization problem is to build a network that uses the fewest exchange units. It is obvious that any successful network must inspect each number at least once to ensure its correct position in the sorted list; since each comparator inspects only two numbers, this gives a lower bound of n/2 comparators (rounded up) for n inputs. More sophisticated reasoning can produce tighter bounds.
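
A quick, illustrative calculation makes the gap concrete. The trivial bound below just restates the argument above, and the repetitive bubble-sort-style construction is one conventional way to build a network whose correctness is easy to see; neither is specific to Hillis' work.

    import math

    def trivial_lower_bound(n):
        # Each number must be inspected at least once, and a comparator inspects
        # exactly two numbers, so at least ceil(n / 2) comparators are required.
        return math.ceil(n / 2)

    def bubble_network(n):
        # A simple, repetitive construction (bubble-sort style): repeated passes
        # of adjacent compare/exchanges. Its correctness is easy to see, but it
        # uses n * (n - 1) / 2 comparators.
        return [(i, i + 1) for rnd in range(n - 1) for i in range(n - 1 - rnd)]

    for n in (4, 8, 16):
        print(n, trivial_lower_bound(n), len(bubble_network(n)))
    # For n = 16, the trivial bound is 8, while the naive construction uses 120.

The room between those two figures is where both sharper bounds and better networks are found.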

Some sorting networks have a simple, repetitive structure, such that merely inspecting them is enough to convince oneself that the network solves the sorting problem at the required size. However, such networks tend to be redundant, incorporating more exchanges than are necessary. Hillis' program simulated the genetic principles of random error and recombination, and applied selection pressure to minimize the number of comparators. (He also implemented a process analogous to infection by parasites as a way to accelerate evolution. While interesting, that aspect isn't relevant to the argument here.)
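
The sketch below gives the flavor of such a simulation in Python. It is not Hillis' implementation: he worked at far larger scale and with a different encoding, and the parasites are omitted entirely. All of the names and parameters here (population size, mutation rates, the small per-comparator fitness penalty) are illustrative choices, and a run of this toy version may or may not converge on a perfect network.

    import random
    from itertools import product

    N = 6                                    # width of the network (number of inputs)

    # All 0/1 inputs; by the standard zero-one principle for comparator networks,
    # a network that sorts all of these correctly sorts every input.
    TESTS = list(product((0, 1), repeat=N))

    def apply_network(network, values):
        out = list(values)
        for i, j in network:
            if out[i] > out[j]:
                out[i], out[j] = out[j], out[i]
        return out

    def random_pair():
        return tuple(sorted(random.sample(range(N), 2)))

    def fitness(network):
        # Correctness dominates; the small per-comparator penalty supplies the
        # selection pressure toward shorter networks.
        correct = sum(apply_network(network, t) == sorted(t) for t in TESTS)
        return correct / len(TESTS) - 0.001 * len(network)

    def crossover(a, b):
        # One-point recombination with independent cut points, so offspring can
        # be shorter or longer than either parent.
        return a[:random.randint(1, len(a))] + b[random.randint(0, len(b) - 1):]

    def mutate(network):
        network = list(network)
        r = random.random()
        if r < 0.2 and len(network) > 1:
            del network[random.randrange(len(network))]              # drop a comparator
        elif r < 0.4:
            network.insert(random.randrange(len(network) + 1), random_pair())  # add one
        else:
            network[random.randrange(len(network))] = random_pair()  # change one
        return network

    def evolve(pop_size=100, start_length=20, generations=300):
        population = [[random_pair() for _ in range(start_length)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 4]                     # truncation selection
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    best = evolve()
    print(len(best), all(apply_network(best, t) == sorted(t) for t in TESTS))

Letting recombination and mutation change the length of a network is what gives the per-comparator penalty something to act on.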

Hillis' simulator produced efficient sorting networks. In principle, we can test the networks to confirm that they do sort correctly (try all possible inputs), and they are efficient in the sense that each was built from a small number of compare/exchange units. However, we lack an understanding of the algorithm by which the sorting network (the implementation) achieves the sorting function (the theory). That is to say, a complete, detailed description of the network does not lead in any direct way to an understanding of how the network sorts. The shortest, simplest description of the network may be the rather inscrutable network itself.
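
Such a test is easy to write. The sketch below is again illustrative: the five-comparator network for four inputs is a standard hand-designed one, standing in for an evolved network. It checks every 0/1 input, which by the standard zero-one principle for comparator networks is enough to guarantee correct sorting of arbitrary inputs, and every ordering of four distinct keys for good measure.

    from itertools import permutations, product

    def apply_network(network, values):
        out = list(values)
        for i, j in network:
            if out[i] > out[j]:
                out[i], out[j] = out[j], out[i]
        return out

    # A standard, hand-designed five-comparator network for four inputs,
    # standing in here for an evolved network whose correctness we want to check.
    NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

    def sorts_correctly(network, n):
        zero_one = all(apply_network(network, t) == sorted(t)
                       for t in product((0, 1), repeat=n))
        distinct = all(apply_network(network, p) == sorted(p)
                       for p in permutations(range(n)))
        return zero_one and distinct

    print(sorts_correctly(NETWORK_4, 4))   # True

Passing such a test tells us that the network sorts; it tells us nothing about how, which is exactly the gap just described.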

In Hillis' words:

One of the interesting things about the sorting programs that evolved in my experiment is that I do not understand how they work. I have carefully examined their instruction sequences, but I do not understand them: I have no simpler explanation of how the programs work than the instruction sequences themselves. It may be that the programs are not understandable—that there is no way to break the operation of the program into a hierarchy of understandable parts. If this is true—if evolution can produce something as simple as a sorting program which is fundamentally incomprehensible—it does not bode well for our prospects of ever understanding the human brain. (Hillis, The Pattern On The Stone, pp. 146-147)

Generalizing from sorting to brain function is necessarily only suggestive. Sorting is a rather brittle computation; either it works or it doesn't. Much of what the brain does is less formally algorithmic. But Hillis' result does suggest that a bottom-up approach to understanding the human brain—much more complicated than any sorting network, but "built" by evolution to solve problems in food acquisition, mate selection, and interpersonal communication—may not be enlightening. Of course, a single example proves nothing, but it does illustrate the kind of difficulty such a bottom-up approach may encounter.

Author: Steven Bagley

Date: 2013-10-19 Sat