John E. Staddon's Adaptive Dynamics

The introductory chapter in John E. Staddon's Adaptive Dynamics starts with a statement about reductionism:

"I take for granted two things with which no biobehavioral scientist would quarrel: that organisms are machines; and that the physical machinery for behavior is biological – nerves, muscles, and glands. But I also argue for a third thesis, not universally accepted: that an essential step in understanding how all this machinery works is dynamic black-box modeling. I propose that a black-box model will always produce the most compact summary of behavior. And such an account is likely to be much simpler than a neurophysiological account, if indeed one can be found (we are a long way from understanding the neurophysiological underpinnings for most behavior). I also propose that an accurate model may be essential to discovering the physical processes that underlie behavior, because it tells the neuroscientist just what task the brain is carrying out."

He continues:

"Many behavioral scientists, and almost all neuroscientists, still think that psychology (the study of behavior) is in principle subordinate to neuroscience (the study of the brain). They are wrong, and I'll try to explain why. If the secrets of behavior are not to be found by expanding our understanding of neurons and brain function, how is the job to be done? Through experiment and the creation and testing of simple models, I believe."

His example, like Danny Hillis's, applies evolutionary principles to engineering design. Adrian Thompson evolved programmable gate arrays to perform nondigital tasks, in this case to distinguish a 1 kHz signal from a 10 kHz signal.

"What has Thompson done here? He has taken a relatively simple device, a 100-gate FPGA [field-programmable gate array], and through an evolutionary process caused it to learn a relatively simple task to discriminate between two input frequencies. The machinery looks simple and the behavior corresponds to a very simple model. But the physical details – how the system is actually doing the job – are deeply obscure. This is not an isolated example. Workers on complex systems, artificial as well as natural, increasingly find that these systems may behave in orderly and understandable ways even though the details of how they do it are incomprehensible. My question is Why should the brain be any different?"
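Staddon gives no code, but the evolutionary loop behind Thompson's experiment (mutate, evaluate fitness, select) can be sketched on a toy problem. Everything here is invented for illustration: instead of an FPGA, the "genome" is a bitstring, and fitness is simply how closely it matches a target behavior pattern.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

TARGET = [1, 0] * 16          # the "behavior" we want the device to exhibit
GENOME_LEN = len(TARGET)

def fitness(genome):
    # Score the genome only by its external behavior, black-box style.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=200):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]             # selection: keep the fitter half
        pop = elite + [mutate(g) for g in elite] # reproduction with mutation
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "out of", GENOME_LEN)
```

The point mirrors Staddon's: selection evaluates only input–output behavior, so the evolved internals can be opaque even when the behavior is orderly.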

Staddon then lists difficulties that the neuroscientist faces – the huge number of neurons (and other cells), and the complexity of their interactions.

"The psychologist also faces special problems posed by the fact that we are dealing with biological material. Biology implies variability …. There are good evolutionary reasons for variability, as we will see later on. But variability means that brains, even the brains of identical twins, unlike computer chips, are not identical. Even if the brains of twins were identical at some early point in their history, it is impossible to give two individuals – even individual rats, nematodes, or sea slugs, much less human beings – identical life experiences. Given different histories, even two initially identical brains will become slightly different – they will be 'storing' different memories. This difference, and what it implies – that experiments cannot always be replicated exactly – poses severe difficulties for a researcher who wants to understand how organisms are affected by different histories."

Within-subject experiments repeat manipulations on a single subject. This works well in physics, but it is harder to apply in domains where the subject changes over time – that is, where phenomena such as irreversibility, hysteresis, and learning come into play. The main alternative is the between-subjects method: repeat the manipulation across many different (but very similar) subjects, then use statistical calculations to produce 'average' results.

"Having more subjects means more work, but the main problem is that the behavior of a group average need not accurately reflect the behavior of any individual in the group…. The average performance of a group of subjects learning some task always improves smoothly, but individual subjects may go from zero to perfect in one trial."
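The averaging artifact Staddon describes is easy to demonstrate. In this hypothetical simulation, every subject learns all at once, jumping from 0 to 1 on a single randomly placed "insight" trial, yet the group average rises gradually, suggesting smooth learning that no individual actually shows.

```python
import random

random.seed(1)

N_SUBJECTS, N_TRIALS = 100, 20

# Each subject switches from never correct (0) to always correct (1)
# on one randomly placed trial.
switch = [random.randint(1, N_TRIALS) for _ in range(N_SUBJECTS)]
curves = [[1.0 if t >= s else 0.0 for t in range(1, N_TRIALS + 1)]
          for s in switch]

# The group average is a smooth ramp even though every individual
# curve is a step function.
avg = [sum(c[t] for c in curves) / N_SUBJECTS for t in range(N_TRIALS)]
print([round(a, 2) for a in avg])
```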

If there are problems with both within-subjects and between-subjects methods, then what to do? Staddon spends the rest of his introduction arguing for what he calls theoretical behaviorism.

"Fortunately, there is an alternative approach to the study of historical systems that preserves the advantages of the within-subject method. I'll call it the theoretical approach because it uses theoretical exploration to discover the causes of unrepeatability.

"One reason the between-group method is so popular is that it guarantees a result: numerous texts describe exactly how many subjects are needed and what statistical tests are to be used for every possible occasion, and computer programs provide painless calculation. If you follow the prescription, reliable (but not necessarily useful or meaningful) results are guaranteed. The theoretical method provides no such security. What it requires is simply a bright idea: knowing the problem area, knowing the properties of the material you're dealing with; it is up to you, the scientist, to come up with a hypothesis about the causes of unrepeatability. The next step is to test that hypothesis directly. If your guess is wrong, you have to guess again.

"The assumption behind the theoretical method is that a failure to get the same result when all measurable conditions are the same implies the existence of a hidden variable (or variables)."

Some other variable must be operating, and if identified, then:

"… the system has been converted from a historical to an ahistorical one. And an ahistorical system can be studied using within-subject rather than between-subject methods, which are desirable for the reasons just given."
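The conversion Staddon describes can be made concrete with a hypothetical toy system exhibiting hysteresis (none of this is from the book). The same input yields different outputs depending on the system's history; but once the hidden state variable is identified and recorded, the pair (input, state) determines the output uniquely, and within-subject methods apply again.

```python
class Unit:
    """A toy 'historical' system: its response to mid-range inputs
    depends on a hidden internal state set by past inputs."""

    def __init__(self):
        self.state = 0  # hidden variable: 0 = "off", 1 = "on"

    def respond(self, x):
        if x > 7:
            self.state = 1
        elif x < 3:
            self.state = 0
        # For 3 <= x <= 7 the state is unchanged: output depends on history.
        return self.state

u = Unit()
u.respond(10)          # drive the hidden state high
first = u.respond(5)   # mid-range input while state is high
u.respond(0)           # drive the hidden state low
second = u.respond(5)  # the same input now gives a different output

# Unrepeatable as a function of input alone; but (input, state) -> output
# is a fixed mapping, so the system is ahistorical once state is observed.
print(first, second)
```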

These ideas should be more widely known.

Author: Steven Bagley

Date: 2017-06-02 Fri