We explore the field of biology from 1950 to 2000.
Without a doubt, the most significant breakthrough in biology during the second half of the twentieth century was the epoch-making discovery of the double-helix structure of DNA, announced to the world in 1953 by Francis Crick (1916–2004) and James Watson (b. 1928).
There is a famous scene described by Watson in his 1968 memoir, The Double Helix. He and Crick burst into a Cambridge pub one afternoon in 1953 and announce they have found the “secret of life.” While one may be tempted to smile at the youthful exuberance of this declaration, the extraordinary importance of Crick’s and Watson’s discovery is in fact difficult to exaggerate.
That is because the double helix lies at the heart of, not one, but two, of the most fundamental processes upon which life in all its myriad manifestations depends: metabolism (the ensemble of chemical reactions that transform environmental nutrient molecules into the energy that drives all the innumerable chemical reactions that collectively constitute and maintain life) and reproduction (the creation by an ancestor organism of a descendant organism).
That is to say, the very same double-helix structure that makes the preservation of the life of an individual organism possible (i.e., metabolism) also undergirds the preservation of a species beyond the life span of the individuals that make it up (reproduction).
Given this extraordinary convergence of two of the most important aspects of life, it would be difficult to discuss the history of biology during the second half of the twentieth century in any detail without assuming a knowledge of the basics of how the double helix affects metabolism and reproduction. Therefore, we begin by providing a brief primer on how these two processes work.
Chemically, DNA and its closely related cousin RNA are known as “nucleic acids” (“deoxyribonucleic acid” and “ribonucleic acid,” respectively). Nucleic acids are also known as “polynucleotides,” meaning that each one consists of a long chain of monomeric units called “nucleotides.” Each nucleotide comprises a phosphate group, a sugar, and a nitrogenous “base”; the alternating sugars and phosphates link up to form the molecule’s backbone, while the bases project off to the side, not unlike the rungs of a ladder.
The famous “double helix” consists of a matching pair of nucleic acid strands running in opposite directions, whose backbones twist around each other like the twin rails of a spiral staircase. In the double helix, the bases projecting from the two backbones are both directed inward, so that each “rung” of the ladder is formed by a pair of bases meeting in the middle.
There are five different kinds of nucleotide bases. For DNA, these are adenine (A), thymine (T), cytosine (C), and guanine (G). In the case of RNA, uracil (U) substitutes for thymine (T). Each double helix as a whole is stabilized by the hydrogen bonds that the two sets of bases form with each other as they extend from their respective backbones towards the interior and meet in the middle.
However, there’s a catch: each nucleotide base binds with only one of the others (and never with itself). Namely, in the case of DNA, A binds only with T (and vice versa), and C binds only with G (and vice versa), thus forming two complementary pairs: A-T and C-G. This means that every DNA molecule contains approximately equal amounts of A and T and of C and G–an empirical fact published in 1950 by Erwin Chargaff (1905–2002) and subsequently known as “Chargaff’s rules.” In the case of RNA, the pair A-U takes the place of A-T.
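The pairing rules just described can be illustrated with a short sketch (the function names here are illustrative, not standard bioinformatics APIs):

```python
from collections import Counter

# Watson–Crick pairing rules for DNA: A-T and C-G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand: str) -> str:
    """Return the partner strand implied by base-pair complementarity."""
    return "".join(COMPLEMENT[base] for base in strand)

def chargaff_counts(strand: str) -> dict:
    """Count each base across a double helix built from `strand` and its complement."""
    duplex = strand + complement_strand(strand)
    return dict(Counter(duplex))

counts = chargaff_counts("ATGCCGTA")
# Pairing forces every duplex to obey Chargaff's rules exactly:
assert counts["A"] == counts["T"]
assert counts["C"] == counts["G"]
```

Whatever sequence we start from, the complementarity rules guarantee equal amounts of A and T and of C and G in the finished helix, which is just Chargaff’s empirical observation restated.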
Why is all this important?
Such complementarity provides the chemical foundation for the template principle. The basic function of DNA is to act as a physical template upon which new nucleic acids, and ultimately proteins, can be constructed. Thus, the double helix and nucleotide-base complementarity underlie the essential metabolic function of every nucleic acid–which is to specify the causal steps involved in the construction of a specific protein.
In short, the double helix is the chemical basis of nucleotide-base complementarity, which is the chemical foundation for templating, which lies at the heart of metabolism (as well as of reproduction, as we shall see shortly).
Now, let us look at how this all works in a bit more detail.
Virtually all metabolic work in cells is carried out by enzymes, a specialized class of proteins. From a chemical point of view, proteins comprise one or more polypeptide chains. Proteins are created as needed by the cell by means of a highly complicated process involving an organelle called the ribosome.
Fortunately, most of the astonishing complexity of protein production can be ignored here. From the point of view of the double-helix structure of DNA, there are only a few main points to keep in mind:
The manufacture of a new protein molecule is a highly complex process, but for our purposes it may be broken down into two main steps: transcription and translation.
First, transcription occurs as follows: the two strands of a double-helix DNA molecule separate from each other–a process sometimes referred to as “unzipping.” This allows an enzyme called RNA polymerase to access the strand of DNA destined for transcription. RNA polymerase creates a new RNA molecule known as “messenger RNA” (mRNA) by moving along the DNA strand and matching each DNA base with its complementary RNA base (with U replacing T).
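The base-by-base logic of transcription can be sketched as follows (a toy model; the function name is illustrative):

```python
# DNA template base → complementary mRNA base (U substitutes for T).
DNA_TO_MRNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_strand: str) -> str:
    """Build an mRNA molecule complementary to a DNA template strand,
    as RNA polymerase does, one base at a time."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

print(transcribe("TACGGT"))  # AUGCCA
```

Note that the resulting mRNA is not a copy of the template strand but its complement; it is identical (with U for T) to the template’s partner strand.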
Following transcription, the mRNA molecule leaves the cell nucleus (in eukaryotic organisms) and relocates to one of a vast swarm of organelles called ribosomes (of which there are as many as ten million in a typical mammalian cell). At the ribosome, another RNA molecule called “transfer RNA” (tRNA) is gripped in such a way that it is juxtaposed to an incoming mRNA molecule at one end and to a growing amino-acid chain at the other.
At the mRNA end, each tRNA carries a base-triplet that pairs with a complementary codon (base-triplet) along the mRNA. Since the tRNA triplet is complementary to the mRNA codon, it is known as an “anticodon.” Note that, inasmuch as an mRNA codon is itself complementary to a DNA codon, each tRNA anticodon is identical to an original DNA codon (always with U substituting for T).
The tRNA is now ready to perform its function: “translating” the anticodon into an amino acid. This occurs according to a system of correspondences between nucleic acid codon triplets and particular amino acids. The complete pattern of these correspondences is what is meant by the phrase “genetic code.” Nowadays, the entire genetic code is available in the form of handy laminated charts for students.
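The codon-to-amino-acid lookup at the heart of translation can be sketched with a small excerpt of the genetic code (the full table has 64 entries; the function name is illustrative):

```python
# A small excerpt of the genetic code: mRNA codon → amino acid.
GENETIC_CODE = {
    "AUG": "Met",   # methionine; also the "start" signal
    "UUU": "Phe",   # phenylalanine (Nirenberg's first decoded codon)
    "CCA": "Pro",   # proline
    "GGU": "Gly",   # glycine
    "UAA": "Stop",  # one of three "stop" codons
}

def translate(mrna: str) -> list:
    """Read an mRNA three bases at a time, mapping each codon to an amino acid."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = GENETIC_CODE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUCCAUAA"))  # ['Met', 'Phe', 'Pro']
```

The “laminated chart” of the genetic code is, in effect, just this lookup table written out in full.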
As can be seen, almost all the steps detailed above (which represent only the tip of the iceberg of protein production!) depend, directly or indirectly, on the double-helix structure of DNA.
If that were all, Crick’s and Watson’s discovery would be astonishing enough, and of absolutely fundamental importance. But that is far from all. As already noted, the double helix also lies at the heart of biological reproduction.
How does this work?
To reproduce themselves, all living things must first duplicate their genetic material, that is, their DNA. The latter either floats in the cytosol in the form of naked DNA molecules (as in prokaryotes, such as bacteria) or else is sequestered in the cell nucleus (as in eukaryotes, including all higher life forms). In the latter case, the DNA is tightly coiled into densely packed arrays called “chromosomes.”
In all cases, reproduction–whether binary fission, mitosis, or meiosis–begins by means of a process known as “DNA replication.” It is conceptually easy to see why this is necessary. In whatever form it may take, the reproduction of a given organism requires the production of a new collection of DNA molecules that (1) is chemically identical to the DNA of the original (ancestor) organism, and (2) is destined for inclusion in the new (descendant) organism.
It is of great interest that many of the same conceptual steps found in protein transcription and translation are also found in genome duplication (even if the chemical details are often quite different). For example, replication too begins with the “unzipping” of the double helix, after which each separated strand serves as a template upon which an enzyme (DNA polymerase) assembles a complementary partner–yielding two double helices chemically identical to the original.
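The template logic of replication can be sketched in a few lines (a conceptual toy, assuming a duplex represented as a pair of strands):

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def replicate(duplex):
    """'Unzip' a double helix and build a complementary partner for each half."""
    strand_1, strand_2 = duplex
    new_partner_1 = "".join(COMPLEMENT[b] for b in strand_1)
    new_partner_2 = "".join(COMPLEMENT[b] for b in strand_2)
    # Each daughter helix keeps one old strand: "semiconservative" replication.
    return (strand_1, new_partner_1), (strand_2, new_partner_2)

parent = ("ATGC", "TACG")
daughter_a, daughter_b = replicate(parent)
assert daughter_a == parent              # chemically identical copy
assert daughter_b == ("TACG", "ATGC")    # the same helix, strands swapped
```

Because each strand fully determines its partner, one unzipped helix yields two helices identical to the original, which is exactly what reproduction requires.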
With these elementary concepts and terms under our belt, let us now go back and examine both the historical setting and the far-reaching consequences of Crick’s and Watson’s celebrated discovery.
Already in 1944, Oswald Avery (1877–1955) had demonstrated that the genetic material must lie within the DNA fraction of the chromosome. Previously, there had been a lively debate between supporters of DNA and supporters of proteins as the bearers of heredity. After Avery’s work, this debate tilted decisively in favor of DNA.
Avery’s discovery was independently confirmed in 1952 by Alfred Hershey (1908–1997) and Martha Chase (1927–2003), two members of the so-called “Phage Group”–a far-flung community of geneticists who studied bacteria using a virus known as bacteriophage as an experimental vector.
But it is one thing to know that the genetic material consists of DNA, and it is another to understand how it is possible for the DNA molecule to serve as the means of inheritance. Once Chargaff announced his rules in 1950, the race was on to uncover the precise chemical structure of DNA.
Most biologists understood the great stakes involved in the race to “solve” DNA. It seemed to many at the time that the smart money backed the team of Linus Pauling (1901–1994) at Caltech. One of the most famous scientists in the world, Pauling was a chemist renowned for his elucidation of the quantum-mechanical nature of the chemical bond (work that would earn him the Nobel Prize in Chemistry in 1954). However, it was a British team, bilocated at King’s College London and the Cavendish Laboratory at the University of Cambridge, that ended up winning the gold.
The work of both teams relied on the investigative technique known as X-ray diffraction crystallography, originally developed by William Henry Bragg (1862–1942) and his son Lawrence Bragg (1890–1971) just before the First World War.
In London, the X-ray crystallographer Rosalind Franklin (1920–1958), working alongside the physicist Maurice Wilkins (1916–2004), produced superb images of the diffraction patterns created by exposing crystalline DNA to X-rays from different angles. Her images turned out to supply the key information required by the theoreticians in Cambridge to successfully infer the three-dimensional structure of the molecule.
But it is one thing to know that the genetic material consists of DNA, and it is another to understand how it is possible for the DNA molecule to serve as the means of inheritance.”
That 3-D structure was, of course, the double helix, and those theoreticians were none other than the English physicist-turned-biologist, Francis Crick, and a young American molecular biologist visiting the Cavendish on a postdoc, James D. Watson. Using scale models of the DNA sugar-phosphate backbone and the four nucleotide bases–made to order by the Lab’s machine shop–Crick and Watson managed to deduce the double-helix structure described above as the best fit for the diffraction patterns produced by Franklin (data which, famously, reached them without her knowledge).
The landmark paper announcing their result was published in April of 1953. Crick and Watson had won the race to solve DNA–but the real contest was only just beginning. The double helix raised a hundred new questions for each one that it answered. The result was a burst of creative energy over the ensuing couple of decades that is perhaps without parallel in the history of biological science.
Around the same time that Crick and Watson were working on DNA, Salvador Luria (1912–1991)–a member of the Phage Group who had collaborated with Max Delbrück (1906–1981) on path-breaking work in bacterial genetics as far back as 1943–discovered the phenomenon of host-controlled restriction of viruses by bacteria. A little later, other investigators found that this phenomenon is due to a class of enzymes produced by bacteria known as “restriction enzymes.” A way was soon found to manipulate restriction enzymes in the laboratory, providing molecular biologists with another important new investigative tool.
Further work on bacterial genetics in relation to evolution has continued apace, culminating in the long-running longitudinal experiment of Richard Lenski (b. 1956), begun in 1988 and continuing to the present, in which dozens of populations of E. coli have been propagated for tens of thousands of generations, with the resulting mutations carefully tracked.
Let us now turn to the metabolic ramifications of Crick’s and Watson’s historic discovery, building on the primer given above.
First, the basic structure of proteins–which were already known to consist of one or more polypeptide chains–had to be elucidated at the level of specific amino-acid sequences. Several experimentalists made notable contributions to this basic background understanding.
For example, the X-ray crystallographer Dorothy Hodgkin (1910–1994) solved the three-dimensional structures of penicillin in 1945 and of vitamin B12 in the mid-1950s, while Frederick Sanger (1918–2013) worked out the complete amino-acid sequence of the hormone insulin between 1951 and 1955.
In addition to their linear, amino-acid structure, polypeptide chains also fold into various three-dimensional structures–which were known to be important for the way proteins work. Thus, knowledge of proteins’ folded states was also needed to fully understand their functioning.
In 1957, John Kendrew (1917–1997) published the first realistic model of the small globular protein, myoglobin (which later became the “model system” for protein studies). In 1959, Max Perutz (1914–2002) solved the 3-D structure of the much-larger globular protein, hemoglobin.
Kendrew’s and Perutz’s work raised the issue of how the three-dimensional structure of a protein is specified by its linear polypeptide chain or chains. In 1969, Cyrus Levinthal (1922–1990) noted that the enormous number of degrees of freedom of an unfolded, linear polypeptide chain–together with the very short time spans actually needed for achieving the folded structure–meant that the folding process could not possibly proceed by means of a “random walk.” This difficulty became known as “Levinthal’s paradox.”
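Levinthal’s argument is easy to appreciate with a back-of-the-envelope calculation (the specific numbers here are illustrative assumptions, roughly those used in standard statements of the paradox):

```python
# Assume ~3 stable conformations per rotatable backbone bond,
# and ~2 such bonds per amino-acid residue.
residues = 100
conformations = 3 ** (2 * residues)   # possible shapes of the unfolded chain

# Even sampling one conformation per picosecond...
seconds_needed = conformations * 1e-12
age_of_universe = 4.3e17              # seconds, roughly

print(f"{conformations:.2e} possible conformations")
print(f"{seconds_needed / age_of_universe:.2e} times the age of the universe")
```

A modest 100-residue protein would need vastly longer than the age of the universe to find its folded state by random search, yet real proteins fold in milliseconds to seconds–hence the paradox.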
Levinthal’s paradox still lies at the heart of what is now known as the “protein folding problem.” Numerous non-random models (based on minimum-energy principles) have been proposed over the past five decades, but in its essentials the question remains an open one to this day.
Returning to the ramifications of the double helix, another enormous problem raised by Crick and Watson was the issue of the genetic code: linking DNA sequences to proteins. Francis Crick himself was one of the first to turn his attention to this problem. Crick collaborated closely with several prominent biologists, notably Sydney Brenner (1927–2019).
In 1961, Marshall Warren Nirenberg (1927–2010), together with J. Heinrich Matthaei, using radioactively labeled amino acids, demonstrated the first known correspondence rule: the mRNA codon triplet UUU (three uracil bases in a row) specifies the amino acid phenylalanine. Over the next decade, Nirenberg–as well as Har Gobind Khorana (1922–2011), Robert W. Holley (1922–1993), and others, all working separately–established the entire pattern of mRNA codon–amino acid correspondences that constitutes the genetic code.
This work on the genetic code, as impressive as it was, still only laid the foundation for understanding the fantastically complicated metabolic activity of cells. Following another line of research, Severo Ochoa (1905–1993), Arthur Kornberg (1918–2007), and others began filling in many more details concerning the enzymatic processes by means of which nucleic acids are built up from their nucleotide building blocks.
The next great breakthrough in our modern understanding of metabolism involved the gradual realization that some sets of codons, or “genes,” code for proteins which assert control over various aspects of transcription and translation. This feedback of certain gene products upon genetic activity itself provides a vital link between the moment-to-moment requirements of the cell and the production of needed enzymes and other proteins.
The most important contribution to our understanding of this type of genetic regulation of metabolism came in the early 1960s, in France. Two teams working down the hall from each other at the Pasteur Institute in Paris were initially focused on different problems involving bacterial genetics and metabolism. One lab was headed by André Michel Lwoff (1902–1994) and included his assistant, François Jacob (1920–2013). The other group was led by Jacques Monod (1910–1976).
The two teams eventually pooled their resources and focused on the way in which E. coli switches between two different metabolic states in response to the presence or absence of a certain nutrient (the sugar lactose). In 1961, they were able to demonstrate the existence of a control system, which came to be known as the “lac operon.” The lac operon is a set of genes and gene products (regulatory proteins) which–in the presence of environmental lactose–collectively turn on the production of the enzymes needed to metabolize that sugar.
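The on/off logic of the lac operon can be captured in a deliberately simplified Boolean sketch (a toy model only; the real system involves further refinements, such as a preference for glucose when both sugars are present):

```python
def lac_enzymes_expressed(lactose_present: bool) -> bool:
    """Toy Boolean model of the lac operon switch.

    The operon's regulatory gene constantly produces a repressor protein.
    With no lactose around, the repressor sits on the DNA and blocks
    transcription of the lactose-metabolizing genes; lactose binds the
    repressor and pulls it off, switching the genes on.
    """
    repressor_blocking_transcription = not lactose_present
    return not repressor_blocking_transcription

# The cell makes lactose-digesting enzymes only when lactose is available:
assert lac_enzymes_expressed(lactose_present=True) is True
assert lac_enzymes_expressed(lactose_present=False) is False
```

Crude as it is, this captures the conceptual point: gene products feed back on gene activity, linking enzyme production to the cell’s moment-to-moment needs.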
The discovery of the lac operon system by Jacob and Monod was important because it provided a conceptual model for understanding gene regulation and protein production in response to need throughout all living systems. We now know that gene regulation is pervasive: in complex organisms such as humans, only a small fraction of the genome codes directly for proteins, while much of the rest is involved, in one way or another, in regulation. The lac operon also explained, at least in broad outline, the abiding mystery of the functional differentiation of the many different cell types in higher organisms, even though all cells contain precisely the same set of genes.
There is a great deal more to the cell than just the synthesis of proteins, but before turning to a quick survey of other late 20th century advances in physiology, let us take the story of the double helix and the genetic code up to the end of our period.
It was not long before the events previously described aroused the interest of entrepreneurs who saw great medical, agricultural, and other practical potential for manipulating the genomes of various organisms through a budding technology known as “recombinant DNA.” It was in one such early biotechnology firm, Cetus Corporation, that in 1983 a procedure was invented that would eventually have an incalculable impact, not only on the industrial applications of genetics, but also on biological research itself.
In that year, a Cetus employee, Kary Mullis (1944–2019), invented the polymerase chain reaction (PCR), a means of speeding up the copying of targeted DNA sequences by many orders of magnitude. It is often said that biological research in the second half of the 20th century may be divided into the pre-PCR and post-PCR epochs.
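The source of PCR’s power is simple exponential doubling, as a quick idealized calculation shows (real reactions fall short of perfect doubling in later cycles):

```python
def pcr_copies(initial_molecules: int, cycles: int) -> int:
    """Idealized PCR: each thermal cycle doubles every target DNA molecule."""
    return initial_molecules * 2 ** cycles

# A single target molecule after a routine 30-cycle run:
print(pcr_copies(1, 30))  # 1073741824 — about a billion copies
```

Thirty cycles, each taking only minutes, turn one molecule into roughly a billion, which is why PCR made previously invisible traces of DNA routinely analyzable.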
The “big data” concept familiar to us in the twenty-first century is a direct consequence of Mullis’s groundbreaking invention. Perhaps the most famous big data undertaking so far has been the federally sponsored Human Genome Project (HGP), which lasted from 1990 until 2003, and whose aim was to determine the linear sequence of all nucleotide base pairs in human DNA. Among the original team, which included James D. Watson, perhaps the best-known scientists were Craig Venter (b. 1946), who eventually left the HGP to found Celera Genomics, and Francis Collins (b. 1950), now head of the National Institutes of Health (NIH).
Another important practical application of genetics has been in the field of cancer research. This story begins in the early 1950s, when Renato Dulbecco (1914–2012) demonstrated that certain cancers are linked to alterations in their cells’ DNA caused by a class of viruses dubbed “oncoviruses.”
However, the mechanism underlying such deleterious DNA alterations was only elucidated a decade later, by two of Dulbecco’s students, Howard Martin Temin (1934–1994) and David Baltimore (b. 1938). Working independently, Temin and Baltimore made the wholly unexpected discovery that under certain circumstances RNA–with the help of an enzyme later called reverse transcriptase–can serve as a template for manufacturing DNA, thus allowing an oncovirus to insert its own DNA amidst the DNA of a host organism.
During the 1970s, Harold E. Varmus (b. 1939) and J. Michael Bishop (b. 1936) supplied important further details of this process, which came to be known as “reverse transcription.” Reverse transcription is now known to be a much broader phenomenon, and oncoviruses are accounted as only one species of a larger genus of “retroviruses.”
Among many other outstanding discoveries made during the second half of the twentieth century, the chemiosmotic theory of Peter D. Mitchell (1920–1992), developed over a period of time beginning in the 1960s, must be mentioned. The chemiosmotic theory was the first convincing account of how adenosine triphosphate (ATP) synthesis occurs during photosynthesis or respiration–namely, by means of an electrochemical gradient established across the lipid membrane of an organelle (for example, a chloroplast or a mitochondrion).
Given the centrality of ATP to all the energetic interactions in cells, the significance of Mitchell’s theory can scarcely be exaggerated. The chemiosmotic theory was hugely controversial when first introduced. However, it was confirmed during the 1980s by the elucidation of the operation of the membrane-bound enzyme, ATP synthase, in mitochondria by Paul D. Boyer (1918–2018) and John E. Walker (b. 1941), who shared the 1997 Nobel Prize in Chemistry with Jens Christian Skou (1918–2018), the discoverer of the related ion pump Na+/K+-ATPase.
Another important advance was our gradual understanding of the immensely complex working of the immune system in humans and other mammals. Among the many contributors to this outstanding achievement, we may mention Frank Macfarlane Burnet (1899–1985), Niels Kaj Jerne (1911–1994), Gerald Edelman (1929–2014), and Susumu Tonegawa (b. 1939).
Physical chemists also contributed greatly to our understanding of the functioning of biological macromolecules during this time period, notably Linus Pauling and his student, Martin Karplus (b. 1930), as well as Jeffries Wyman (1901–1995), Roald Hoffmann (b. 1937), and others. Since the 1950s, Karplus had been a pioneer in the development of the new technology of nuclear magnetic resonance (NMR) spectroscopy. Beginning in the 1970s, he began to use NMR techniques extensively to study protein dynamics, including the protein-folding problem.
Yet another area of tremendous progress that we cannot fail to mention is neuroscience. The basic elements of neuroanatomy had already been discovered during the nineteenth century, but the first cell-level details of brain functioning were not revealed until the 1890s, particularly through the studies of Charles Scott Sherrington (1857–1952) on the “reflex arc” between the brain, the spinal cord, and the peripheral nervous system. Sherrington was also one of the first to recognize the importance of the synapses for communication between neurons.
While knowledge of neuroanatomy continued to steadily advance, a workable theory of how the activity of neurons is generated and controlled at the cellular and molecular levels was still lacking. In 1949, a major conceptual breakthrough was made by Donald O. Hebb (1904–1985) in his landmark book, The Organization of Behavior. In it, he articulated what came to be known as “Hebb’s law”: the idea that when one neuron repeatedly influences the timing of the firing of another neuron, a physical alteration occurs at the molecular level that makes more probable the future joint firing of both neurons. Hebb’s law is sometimes summarized by the phrase: “cells that fire together wire together.”
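Hebb’s law is naturally expressed as a simple update rule, and is often sketched in essentially this form (the function name and numerical values here are illustrative):

```python
def hebbian_update(weight: float, pre_firing: float, post_firing: float,
                   learning_rate: float = 0.1) -> float:
    """One step of a simple Hebbian rule: the connection strengthens in
    proportion to how strongly the two neurons fire together."""
    return weight + learning_rate * pre_firing * post_firing

# Repeated joint firing strengthens the synapse:
# "cells that fire together wire together."
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre_firing=1.0, post_firing=1.0)
print(round(w, 6))  # 1.0
```

If either neuron stays silent (`pre_firing` or `post_firing` equal to 0), the weight is unchanged: correlation of activity, not activity alone, is what drives the strengthening.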
At the time that Hebb stated the law that came to be named for him, it was largely conjectural. However, empirical confirmation of its validity was obtained in the 1970s by Eric Kandel (b. 1929) during his studies of the primitive nervous system of the model organism Aplysia californica, a species of sea slug.
There is space here to mention only a few of the most important advances in our knowledge of brain function during the second half of the twentieth century. Beginning in the late 1940s, Alan Hodgkin (1914–1998) and Andrew Huxley (1917–2012) studied electrical conduction in the giant axon of the squid, developing a detailed theory of the action potential in neurons. In the 1950s, the Hodgkin-Huxley model was refined in various important respects by John Eccles (1903–1997).
Working at the level of the whole brain–and building on earlier work by Karl Lashley (1890–1958) on the way learning is embodied in the cerebral cortex–Karl H. Pribram (1919–2015) did pioneering work, beginning in the 1950s, on the relationship between the frontal cortex and the limbic system, identifying sensory-specific “association” centers in the parietal and temporal lobes.
A little later, beginning in the 1970s, Walter Jackson Freeman III (1927–2016) and his associates demonstrated that cognition in higher animals involves long-range correlations in cerebral cortex among co-firing groups of neurons called “nerve cell assemblies.”
Turning to more foundational issues, one problem that came into prominence during the second half of the twentieth century hearkened back to the old materialist-reductionist vs. vitalism debate from decades earlier (see “Biology: 1900–1950”)–namely, the problem of understanding how life began.
In 1924, the Russian chemist Alexander Oparin had published The Origin of Life, the first sustained scientific reflection upon the problem, in which he proposed four basic principles that ought to guide disciplined speculation about the origin of life: (1) there is no fundamental physical difference between living and non-living matter; (2) the simple organic building blocks of life can arise by purely abiotic chemical means; (3) ever more complex organic systems can accumulate slowly, likewise by purely abiotic means; and (4) a form of prebiotic natural selection can act upon such systems.
In 1952, two chemists at the University of Chicago–a graduate student, Stanley Miller (1930–2007), working under his advisor, Harold Urey (1893–1981)–set out to demonstrate the validity of Oparin’s second principle. Miller and Urey constructed an experimental apparatus comprising a sealed flask containing only hydrogen gas, ammonia, methane, and water. They then applied continuous electrical sparks to the contents of the flask to simulate the lightning discharges they imagined occurring on the primitive earth. After about a week of continuous operation, the contents of the flask were analyzed and found to contain several amino acids. This startling result was hailed around the world as a final victory for materialist reductionists in their perennial quarrel with vitalists (see A Brief History of Biology: Before 1900).
Unfortunately, the Miller-Urey experiment was never able to demonstrate Oparin’s third principle: the slow accumulation of more complex organic systems by purely abiotic means. All progress towards this goal seemed dependent upon manipulation of the contents of the experimental apparatus specifically targeted to obtain the desired result. Debate raged as to whether the fourth principle–prebiotic natural selection–could have substituted on the early earth for the intentionality of the experimenter. In any case, even with all the ingenuity that chemists could summon, not much further progress was made along the lines of the Miller-Urey experiment.
Moreover, after the double helix and all the discoveries pertaining thereto, it had become obvious that origin of life research was beset by a serious conceptual problem of the chicken-and-egg variety: namely, in all living systems nucleic acids are required to produce proteins, while proteins are required to produce nucleic acids.
For some time, origin of life researchers had concluded that DNA must have evolved from the slightly less chemically complex RNA. The pre-DNA stage of life was given the name “RNA World.” Then, in 1982, Thomas Cech (b. 1947) dropped a bombshell by announcing the discovery of the first “ribozymes”–RNA molecules that are capable of acting like normal (proteinaceous) enzymes under some limited circumstances. The solution to the chicken-and-egg problem seemed at hand.
Unfortunately, there remained the problem of accounting for the origin of RNA itself. In the nearly 40 years since Cech’s discovery, it is safe to say that we have come no nearer to understanding how the RNA World came into existence (assuming it did). The origin of life field still flourishes, replete with interesting and detailed speculation. However, we are no nearer today to a genuine scientific understanding of the origin of life than we were in Oparin’s day, or, indeed, in the days of Darwin’s speculations about a “warm little pond.” Could the reason be that it is Oparin’s first principle that is false–that life is not, after all, physically identical to non-life?
Towards the end of the twentieth century, several proposals were made to rethink the fundamental nature of life–or the “living state (or phase) of matter,” as it came to be called–from first principles.
However, none of these proposals has so far made a lasting impact on mainstream biological thinking.
One problem that plagued biology from its inception through the first half of the twentieth century, but which was pushed out of the spotlight during the period after 1950, is the problem of teleology (see our articles A Brief History of Biology: Before 1900 and A Brief History of Biology: 1900–1950).
One of the main reasons for this was the development of the digital computer and especially its application to the engineering problem of controlling manmade systems. Gradually, a whole new area of engineering known as “systems science” came into existence, whose task was to develop increasingly sophisticated control systems for machines, utilizing what came to be known as “negative feedback.”
Negative feedback occurs whenever a manmade system uses real-time information about its own current status to alter its own activity in order to better attain the goal for which it has been designed. An example would be a guided missile that uses radar to keep track of its current position in order to make any necessary corrections to its trajectory to ensure that it hits its target.
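The essence of negative feedback is a correction that opposes the current error between actual state and goal state. A minimal proportional-control sketch (illustrative names and numbers, with a thermostat standing in for the missile):

```python
def feedback_step(state: float, setpoint: float, gain: float = 0.5) -> float:
    """One step of proportional negative feedback: the applied correction
    opposes the error between the system's current state and its goal."""
    error = state - setpoint
    return state - gain * error

# A "thermostat" starting far from its goal converges on the setpoint:
temperature = 30.0
for _ in range(10):
    temperature = feedback_step(temperature, setpoint=20.0)
print(round(temperature, 3))  # approaches 20.0
```

Each cycle shrinks the remaining error by a fixed fraction, so the system homes in on its target–exactly the behavior that struck the cyberneticists as “goal-directed.”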
The phenomenon of negative-feedback control had been around at least since 1788, when James Watt (1736–1818) invented the centrifugal governor for steam engines. The term itself was introduced into the scientific literature by the electrical engineer Harold Stephen Black (1898–1983) in 1927. Beginning in the 1940s, the Austrian-born biologist and philosopher Ludwig von Bertalanffy (1901–1972) pioneered a system-theoretic framework for understanding life itself.
However, the main theoretical application of negative feedback to the problem of teleology in biology dates to 1943, when an important paper entitled “Behavior, Purpose, and Teleology” was published by Arturo Rosenblueth (1900–1970), Norbert Wiener (1894–1964), and Julian Bigelow (1913–2003). The paper maintained that systems controlled through negative feedback are at once goal-directed and purely mechanical, and that they therefore constitute an existence proof of the reducibility of teleology to mechanistic interactions.
Henceforth, systems controlled by negative feedback would be known as “cybernetic” systems (a term coined by Wiener from the Greek word for “steersman”). And from the perspective of the new systems science, all biological organisms, including human beings, could be seen to be cybernetic systems. This obviated the need for any special principle to account for the goal-directedness, or purpose, that appears to characterize all living things. Since living beings are cybernetic systems, they need not be considered “teleological” in any scientifically objectionable sense.
This way of thinking swept through, not just biology, and not just all the natural sciences, but the whole of the Western intelligentsia, and even beyond. It could be found, for example, in best-selling books aimed at a broad popular audience, such as Mechanical Man, published in 1968 by Dean Wooldridge (1913–2006), and Chance and Necessity, originally published in French in 1970 by Jacques Monod (1910–1976).
It must be pointed out that the analogy cyberneticists draw between manmade and biological systems breaks down in at least one crucial respect: the purpose, or goal, of a manmade machine is imposed on the system externally by a human intentional agent. Machines are indifferent to whichever of their internal states their human designers or users designate as “goals.”
Biological organisms, on the other hand, possess their very own purposes or goals, which are internal to the organisms themselves–namely, those states which support the organisms’ continued existence. Living things are decidedly not indifferent to whether they go on living.
Supporters of materialist reductionism typically respond to this critique by invoking natural selection, the idea being that an organism’s goal states are its phenotypic traits that have been “selected for” by the putatively mechanistic process of natural selection.
To help evaluate this claim, let us now turn to some of the many new developments in evolutionary theory which occurred during the period after 1950.
The Modern Synthesis (see A Brief History of Biology: 1900–1950) remained ascendant throughout the first half of our period, at least until the late 1970s. Perhaps the apogee of the influence of neo-Darwinism is represented by two books published a year apart in the middle of that decade.
In 1975, the myrmecologist Edward O. Wilson (b. 1929) published a large textbook entitled Sociobiology: The New Synthesis, which attempted to bring the behavior of highly cooperative (“eusocial”) species–such as ants, termites, bees, and wasps–firmly inside the Modern Synthesis.
The theory underlying Sociobiology was principally the notion of “kin selection,” originally derived from the mathematics of population biology by Ronald Fisher (1890–1962) and J. B. S. Haldane (1892–1964) in the 1930s, and summarized by the latter’s famous quip: “I would willingly die for two brothers or eight cousins.”
During the 1960s, the basic idea of kin selection was further elaborated by the population biologists George R. Price (1922–1975), W. D. Hamilton (1936–2000), and John Maynard Smith (1920–2004). In addition, in the early 1970s, the young evolutionary theorist Robert Trivers (b. 1943) came up with the concept of “reciprocal altruism,” which added yet another arrow to the quiver of the budding “sociobiology” movement.
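The mathematical core of kin selection is Hamilton’s rule: an altruistic act is favored when r·b > c, where r is the actor’s genetic relatedness to the beneficiaries, b the reproductive benefit conferred on them, and c the cost to the actor. The sketch below (the function name is my own, for illustration) shows why Haldane’s quip marks exactly the break-even point: siblings share half their genes (r = 1/2) and first cousins one eighth (r = 1/8).

```python
def kin_selection_favored(r, b, c):
    """Hamilton's rule: altruism is favored by kin selection when the
    relatedness-weighted benefit to kin (r*b) exceeds the cost (c)."""
    return r * b > c

# Dying (cost = 1 life) for exactly two brothers (r = 1/2) or eight
# cousins (r = 1/8) yields r*b == c: a genetic wash, not a gain.
assert not kin_selection_favored(1/2, 2, 1)  # two brothers: break-even
assert not kin_selection_favored(1/8, 8, 1)  # eight cousins: break-even
assert kin_selection_favored(1/2, 3, 1)      # three brothers: favored
assert kin_selection_favored(1/8, 9, 1)      # nine cousins: favored
```

Thus Haldane’s willingness to die for “two brothers or eight cousins” is precisely the threshold at which the sacrifice begins to pay off in gene-copy terms.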
Wilson’s mammoth textbook would have been an important event in the history of biology no matter what. However, what made Sociobiology a landmark in the broader intellectual history of its time was the final chapter, in which the author extended his theoretical musings from ants to human beings.
This gambit transformed a staid, academic tome into a best-selling succès de scandale, while Wilson found himself an overnight media celebrity. At one point, feelings ran so high that the Harvard professor was physically attacked: as he stood at a podium giving a public lecture, someone poured a pitcher of ice water on his head!
Many of the ideas summarized in the previous paragraphs were repackaged for a broad, general audience by the brilliantly written book, The Selfish Gene, published in 1976 by the British zoologist and ethologist, Richard Dawkins (b. 1941). Eventually, “sociobiology” became rebranded as “evolutionary psychology,” which has by now become firmly established as a normal academic field of study.
In retrospect, the celebrated books of Wilson and Dawkins can be seen to represent the high-water mark of neo-Darwinism. The last quarter of the twentieth century was distinguished by a series of discoveries which collectively watered down the Modern Synthesis to a considerable extent.
While the latter is far from dead—it remains the received theoretical framework of nearly all mainstream biologists—it is often discussed at present in terms of a so-called “extended synthesis,” a label that nods to the importance of a host of new discoveries that sit awkwardly with the Modern Synthesis.
First and foremost among these new discoveries was the “fluid genome.” Originally reported by Barbara McClintock (1902–1992) in 1948 in connection with her work on inheritance in the maize plant, the existence of mobile genetic units capable of relocating to different positions within the genome was highly unexpected, to say the least.
McClintock’s claims regarding “jumping genes”–as such mobile elements were called at first (they are now known as “transposable elements,” or “transposons”)–were initially met with extreme skepticism, if not outrage, by the neo-Darwinian community. It is difficult to believe that her gender did not play a role in the cold reception accorded her work, though the challenge it posed to evolutionary orthodoxy surely played a role, as well.
Be that as it may, McClintock was undeterred, content to work quietly, out of the limelight, for decades. Her long labors were rewarded in 1983, when she was awarded the Nobel Prize in Physiology or Medicine.
The deeper significance of McClintock’s epoch-making discoveries was that they made it impossible to argue any longer that genetic mutation is wholly random, for the more that molecular biologists learned about the details of DNA and RNA function, the clearer it became that unusual phenomena like transposable elements do not occur haphazardly, but rather are under some form of functional regulation.
This moral was also brought home through a variety of other discoveries. One was the fact that bacteria exchange DNA fragments (“plasmids”) in response to need, via a process known as “horizontal gene transfer.” Another is that targeted, rapid, genetic mutation can be initiated in response to environmental need (“stress-induced mutagenesis”).
Once again, these quite unexpected results were widely discounted when first announced (mostly during the early 1970s). However, through the meticulous work of many researchers–including John Cairns (1922–2018), Miroslav Radman (b. 1944), Barry G. Hall (b. 1942), James A. Shapiro (b. 1943), and others–all of the aforementioned, as well as several other similar phenomena, were empirically well established by the end of the twentieth century.
Perhaps most surprising of all from the mainstream neo-Darwinian perspective was the discovery in 1998 of RNA interference–essentially a form of control of DNA expression by specialized RNA molecules (RNAi)–by Andrew Fire (b. 1959) and Craig Mello (b. 1960). This breakthrough led to the burgeoning new field of “epigenetics” (for details, see Brief History of Biology: 2000–2020).
Several other theoretical developments after 1950 likewise required adjustments to our understanding of evolution.
While the Modern Synthesis persists faute de mieux, there is little doubt that the discoveries mentioned above represent a body blow to the theory. The idea that natural selection is based on purely random variation–and thus explains away all appearance of teleology in living systems–becomes less and less credible with the revelation of each new layer of the mind-boggling complexity of organisms.
This fact was stressed at the end of the century by two controversial books: Darwin’s Black Box, published in 1996 by the biochemist Michael Behe (b. 1952), and The Design Inference, published in 1998 by the mathematician and philosopher, William A. Dembski (b. 1960). Both books concluded that the principle underlying life must be some form of intelligence.
For this reason, Behe, Dembski, and the many followers who soon joined them became known as the “intelligent design” (ID) movement. The burgeoning and highly contentious literature surrounding ID–both pro and con, both academic and popular–quickly became an inescapable feature of the intellectual landscape of the first decades of the 21st century (for further discussion, see Brief History of Biology: 2000-2020).
In summary, by the year 2000, many signs pointed to the reawakening of the venerable problem of teleology in biology from its long, post–World War II dormancy.
Or, continue exploring the fascinating history of the biology discipline with a look at a Brief History of Biology: 2000-2020.