Defending the cavewoman : and other tales of evolutionary neurology
Personal Author: Klawans, Harold L.
Edition: First edition.
Publication Information: New York : W.W. Norton, [2000]
Physical Description: 256 pages ; 22 cm
Call Number: RC359 .K578 2000
Material Type: Adult Non-Fiction
Home Location: Central Closed Stacks




During the neurologist Harold Klawans's lifetime, patients came to him from all over America, exhibiting a huge array of troubles, all of which boiled down to one complaint: something was wrong with their brains. As a sympathetic brain detective, Klawans deduced a great deal from his patients, not only about the immediate causes of their ailments but also about the evolutionary underpinnings of their behavior.

Author Notes

Harold Klawans, M.D., practiced neurology in Chicago, Illinois, until his death last year.

Reviews (3)

Booklist Review

Klawans, recently deceased, was an inspired neurologist and a prolific author of fiction and nonfiction based on his unique vision of the workings of the brain. He infused this cycle of case study-based essays with suspense and wonder. In the title essay, for instance, he turns a moving account of an abused six-year-old girl who could not speak into a gratifying debunking of the old theory that male hunting skills stimulated the evolution of our higher brain functions. Klawans suggests instead that complex language gave our species the edge, and that women were responsible for its development. In a vivid tale about a young woman suffering from epileptic seizures, Klawans explains how handedness, like language, is a key to unlocking the spectacularly complex workings of the brain. He also illuminates the neural mechanics of reading, and educates readers about various forms of Parkinson's and mad cow disease. Klawans' eloquence, compassion, and ability to make neurology come alive with his confident leaps from the particular to the universal will be missed. --Donna Seaman

Publisher's Weekly Review

Much in the manner of Oliver Sacks, neurologist Klawans (Why Michael Couldn't Hit, etc.) uses stories from his clinical practice as jumping-off points for discussion of how the brain works, and of how and why it evolved as it did. Klawans explains how doctors find out which half of your brain controls your speech, and why they might need to know; how a professor's stroke cost him his ability to read, and how he regained it. Later chapters lay out "how literacy changes the brain" (among other things, it teaches us to use abstract categories) and how mad cow disease alters it (by means of contagious proteins called prions). Bringing in modern European history, Klawans connects an obscure nerve disease to conditions in Nazi-occupied Norway. Straying into evolutionary genetics, he describes Cheddar Man, a specimen of early Homo sapiens found in England; his DNA matches that of a modern-day history teacher still living in Cheddar. The difference between the two Cheddar men shows how much human life has been controlled by cultural, rather than biological, evolution. Klawans strikes an admirable balance between breezy narrative and serious exposition, between clinician's anecdote and broad biological overview. His decision to build each chapter around a single patient gives some of his work the feel of short stories, each with a single scientific punch line. Readers familiar with similar science writers will zip through Klawans's work with pleasure; those new to the genre will learn lots of neuroscience, nontechnically and without pain. (Jan.) (c) Copyright PWxyz, LLC. All rights reserved

Library Journal Review

Neurologist Klawans (Why Michael Couldn't Hit) contends that the unique qualities that make us human evolved because of our extended childhood under women's care--which allowed for continuing brain development, language, and learning--rather than from men's hunting and tool use. He uses fascinating clinical anecdotes to lead into explanations of how our brains work and how they got that way. His methodology resembles that of Oliver Sacks, but Klawans concentrates more on the process of learning how our brains function, while Sacks is also interested in the philosophical and literary implications of neurology. Klawans has a wonderfully clear, entertaining style that makes him a pleasure to read while giving the reader important insights into how our brains work. Highly recommended for all types of libraries.--Marit MacArthur, Auraria Lib., Denver (c) Copyright 2010. Library Journals LLC, a wholly owned subsidiary of Media Source, Inc. No redistribution permitted.



Chapter One

DEFENDING THE CAVEWOMAN

* * *

The Window of Opportunity for Learning

I didn't know the child's name or if she even had been given one. She was about six years old when I was asked to see her in consultation. She had been admitted to the hospital after being discovered locked in a closet in a dilapidated apartment building that was about to be demolished. The inspector who found her had tried to talk to her, but she had said nothing to him, nor made any sounds or other attempts to communicate with him.     By the time I saw her, we knew a little more. We knew that she was about six years old because the radiologists were able to estimate her age by assessing the apparent age of her bones. Estimates of bone age are based upon the pattern of age-related development of the various growth centers of the bones as well as the age-related closures of certain lines of fusion between bones. The bones of the skull, for instance, are separate at birth. There are even two large openings, or fontanels, between the bones that cover the brain. This arrangement allows the skull to expand as the brain grows. But these openings quickly begin to close. The smaller fontanel is gone within a month or two, and the larger fontanel is fused shut at around a year and a half, almost always by two years. These are estimates and depend upon many factors, including nutritional status and the presence of diseases.     The girl's nutritional status was not very good. Her weight was in the fifth percentile for six-year-old girls, meaning that 95 percent of all six-year-olds weighed more than she did. Her height was in the tenth percentile. Taken together, these observations suggested that she might have been underfed over most of the six years of her life (though she may also have been the child of two short, thin parents). Although undernourished when admitted to the hospital, she had not been starved. And she was clean. 
True, she had been naked when found, but she was neither soiled nor dirty. Someone--we never learned who--had been doing more than just locking her into a closet.     I was asked to see her in order to answer a very complicated question: why couldn't she speak? The possibilities boiled down to two alternatives. It was the classic dichotomy, nature versus nurture, the oldest question of them all. For Young Girl Doe, the name that had been put on her hospital wristband, this was the issue that had to be addressed first. Was it nature? Did she have a neurological disability that prevented her from speaking? Or was it nurture? Had she just never been exposed to language? With exposure to language, the normal brain and even most abnormal brains will acquire language. Thus what I was looking for was not some subtle neurological anomaly but a significant degree of neurological abnormality that would be sufficient to account for a total inability to say even a single word. Or was her brain relatively normal--that is, within the range of function where exposure to language at the right age (Young Girl Doe was well within this range) would result in the automatic flowering of language?     I suspected the answer as soon as I walked into the room and said, "Hi."     For she looked up at me from her bed and answered, "Hi."     Her brain had acquired at least one word in less than two days. Actually it had acquired several, including "milk," "TV," and "Lacey," the name of the nurse's aide who had all but adopted her. She also made other sounds, one of which clearly meant "Big Bird," although it was hard for most of the adults to distinguish this from her name for Bert of Bert and Ernie fame. Thank God for Sesame Street. It saved us from a long, extended neurological workup. Her basic examination was normal other than her language deficit, and she was acquiring language even as I examined her. 
She loved the little pocket flashlight I used, and laughed each time I shined it into her eyes. By the time I sat down next to her bed to write my consult note, she could say "light," and did so each time she pressed the flashlight's button.     Clearly she could acquire language. That meant that it was far more likely that her inability to speak was the result of nurture, not nature. I was certain of this. As I was that she would quickly overcome her deficits, even though I had never personally observed a child deprived of language to this degree. My certainty came from understanding something about how the brain worked and more specifically about its window of opportunity for the acquisition of language. And by the time Young Girl Doe (who by now was called Lacey) left the hospital two weeks later, I had given so much thought to these issues that I even understood why the entire concept of "man the hunter" was little more than a myth and that the triumph of Man, of Homo sapiens, was due entirely to the females of our species. The males did little more than supply the seeds which the females nurtured into modern man. The males' behavior was hardly more than the answer to a basic biological urge. Left to their own devices, the males of the species would still be living in caves and scraping the same crude flint blades that they had been scraping for hundreds of thousands of years. It was the females' behavior that made man unique because it led to the development of language.     Lacey immediately brought to my mind the French film The Wild Child (or L'enfant sauvage). It was not a movie I would have chosen to see on my own since I hate films that have to be read. And this one was a black-and-white film in French with English subtitles. I had been forced to watch it because one of my best friends, a neuropsychologist named David Garron, had shown it during a party at his home. I was too polite to leave. 
And then it was too fascinating not to watch--not as a movie but as a scientific document, a veritable roman à clef of the formative years of clinical neurology and neuropsychology. It was directed by the legendary French director François Truffaut, who also starred in the role of Dr. Jean Marc Gaspard Itard, the man who first described what is now known as Tourette's syndrome. The movie was based on a true story, one that Itard had experienced and then reported to the scientific community.     The subject of this clinical tale was an adolescent boy who, like Lacey, was without either name or language. He had been known as the Wild Boy of Aveyron because he was found living alone in the woods near Aveyron, France, toward the end of the eighteenth century. Given the name Victor by Itard, he was thought to be about twelve years old when captured. There were no radiologists to estimate the age of his bones, so the estimate was based on the overall maturity of his body, an estimate that is fraught with even more errors than one based on X-rays. He was certainly at least ten but could have been fourteen or fifteen. At the time he was discovered, Victor could neither speak nor understand language. In fact, he appeared not even to possess the slightest notion of using words for the purpose of communication.     This is where Professor Itard entered into Victor's life. Itard was a physician interested in the study of human behavior. He could be considered either a neurologist long before neurologists were invented, or a neuropsychologist even longer before that field emerged, or a cross between the two. (One of the great advantages of living before the field in which you labor has been defined is that your interests and pursuits are not constrained by any artificial borders.) Itard had already published the first case of what later became known as Tourette's syndrome. He was both well known and well respected in his field, whatever that was. 
He took complete charge of Victor. For over five years, Itard tried to teach Victor to speak, to get him to incorporate even the rudiments of language. But simple words proved to be beyond him. Yet Lacey absorbed words like a sponge. In the short time it took for me to examine her, she learned "light" and used it correctly. She both learned words and understood that they were to be used in communication. After several years of effort, Victor was able to understand only a small number of words and phrases; a few utterances like lait (milk) and oh Dieu (oh God) were all he ever said, and these he often said incorrectly. By the time I stopped by to see Lacey a second time, she had clearly differentiated between Big Bird and Bert, and could say their names well enough that even casual passersby could tell which Sesame Street character she meant. By the end of the five years that Itard and Victor spent together, the Wild Boy, though tamed, never came close to acquiring the use of language.     Why was Lacey able to learn words as if that was precisely what her brain was designed to do while for Victor it was a process as far beyond the capabilities of his brain as Windows 95? It was the same old question: nature versus nurture. True, we do not have all of the details of Victor's case history. While it is possible that he may have been retarded or had some other type of neurological disorder, that is unlikely. Itard was a trained observer of neurological function and would have noted such an abnormality. Furthermore, the young boy had learned to survive on his own in the forest of Aveyron, not exactly rocket science, but something that even some rocket scientists might find very difficult.     Most likely the cause of Victor's absence of language was nurture. Or the lack of it. He had been living on his own in that forest for years, and during that time most likely had not been exposed to language--neither to words nor the concept of words as a form of communication. 
In Victor this lack of exposure to language before puberty translated into a life during which he would never be able to acquire language despite the best efforts of one of the best minds of his era. True, he was without television. He had no Sesame Street to watch. But neither did any of his contemporaries, all of whom learned to speak French fluently and even to pronounce it correctly.     Yet Lacey was learning English with a speed that was astounding everyone. Not quite everyone. From the moment I first heard her say "light" and watched her smile as she flicked on the flashlight, I knew she would do very well. Language would be the least of her problems. For speech is a skill that the developing brain is built to acquire. This statement is, of course, all backward. The brain wasn't designed to acquire language: that's not how evolution works. The process of our evolution resulted in a brain that automatically acquires language if that brain is exposed to it. Lacey was exposed to words immediately after her discovery. So had Victor been. She acquired language; Victor never did. What was the difference between them? The ages at which speech was first introduced: Lacey was six, Victor about twice her age. The initial acquisition of language can occur only while the brain is still developing, reaching its full potential. Once that process is complete, the window of opportunity for learning a language as a form of symbolic communication is lost.     To realize why such windows of opportunity exist within the human brain, it is necessary to understand how our brain got to be the way it is and how it actually learns and acquires skills. The human brain did not just appear fully developed within our skulls. It evolved as part of the process of classical Darwinian evolution.     
The most surprising element of this "ascent of man" is that the absolute increase in both the size and the complexity that characterizes the human brain has been achieved with remarkably little genetic change. There is an embarrassingly close similarity between our genetic makeup and that of the gorilla or the chimpanzee. More than that, the total amount of genetic information coded in the double helixes of DNA has remained fairly constant throughout all of mammalian evolution, whether shrews or kangaroos or dolphins or humans. It is thought that there are about 1 million separate genes. They are divided up into a different number of chromosomes in different species, but the total number of genes is relatively stable, pretty much the same in mouse as in human. In all humans the number is, of course, identical, though the number of active genes is far less than 1 million. It is closer to half that, since 40 percent or more of all chromosomal DNA appears to be redundant and plays no active role in development at all. That is, half of our genes have not evolved through evolution.     The best estimates suggest that about ten thousand genes, which is 1 percent of the total gene pool (or approximately 2 percent of the active gene pool), play an active part in the design and construction of the brain and the rest of the nervous system. This is true for humans and chimpanzees and walruses, even our pet gerbils.     This number seems more than adequate for the gerbil or the ordinary house cat, or maybe even a chimpanzee. But for humans? Our brains are made up of about 10 billion cells, and the size and complexity does not end there. There are 10¹⁴ synapses, or active connections between nerve cells, where messages can be relayed or interrupted. That is one hundred trillion. How can a mere ten thousand genes manage to control so many synapses? How can these relatively few genes do so much more for us than they do for other species?     
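The mismatch the author points to is easy to verify with a back-of-the-envelope calculation. The figures below are the chapter's own circa-2000 estimates, not current genomic numbers:

```python
# Quick check of the chapter's arithmetic, using its own estimates.
total_genes = 1_000_000   # "about 1 million separate genes" (the book's figure)
active_genes = 500_000    # "closer to half" of them are said to be active
brain_genes = 10_000      # genes said to build the brain and nervous system
synapses = 10**14         # one hundred trillion synaptic connections

share_of_total = brain_genes / total_genes    # 0.01 -> 1 percent of the total pool
share_of_active = brain_genes / active_genes  # 0.02 -> 2 percent of the active pool
synapses_per_gene = synapses // brain_genes   # ten billion synapses per brain gene

print(share_of_total, share_of_active, synapses_per_gene)
```

The last ratio is the whole point of the passage: whatever the exact gene count, the genome is orders of magnitude too small to specify each synapse individually.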
The fact is that most of what these genes do for us and our brains is not that different from what they do in other species. Any survey of comparative anatomy of the nervous systems of mammals resoundingly supports that conclusion. The major structures are all the same whether the brain belongs to a sheep or a human. The visual cortex is always in the back of the brain. Sheep have the same thalamus buried within the two hemispheres, the same hypothalamus integrating the brain and the endocrine system. Most of the major pathways are the same. The motor cortex is at the back end of the frontal lobe in all vertebrate species. The same pathway of neuron fibers descends through the spinal cord to control the muscles on the opposite half of the body. That crossing is as universal as the dependence of muscles on the brain. In essence, the hard wiring laid down by the genes is pretty much the same no matter what the species, so the general structure is far more similar than dissimilar. Yet it is only in humans that the brain keeps developing and growing after birth.     If the hard wiring and the basic structure of the human brain are so similar to those of other species, why do our brains function so differently? Because our evolution and how we function have not entirely been the product of biological evolution, our genetic heritage; that is, the "hard wiring" in our brains is not the end of the story. Adding to the complexity of our brain are social, cultural, and environmental changes. Our genetic coding allows the brain to grow and develop while interacting with the environment. What separates us is that window (remember those spaces in a newborn's skull?), which gives the brain the opportunity to grow and learn--for example, to acquire language.     Human infants are underdeveloped and helpless at birth and remain far more dependent for far longer than the offspring of any other species. 
We are born with an immature, almost embryonic brain, which continues to grow and evolve in relation to its environment to a degree and for a duration of time not found in any other species.     The brains of most other species are fully formed by birth: even the brains of the other primates continue to grow only for a brief, early postnatal period. The brains of humans continue to grow at rapid fetal growth rates long after birth. That is why final closure of our skulls occurs far later than in any other animal. This process of brain development, some of which takes place after the skull closes, extends for many years. The duration is different in different systems of the brain and in some even continues into what we consider adult life. This post-birth prolonged development of the human brain is often referred to as the juvenilization of the brain. The chimpanzee has a gestation period of seven and a half months, which is a month and a half shorter than that of the average human. The chimp, along with its brain, reaches adulthood at nine months of age, far more rapidly than even the most precocious of humans. The newborn chimp can hold its head up within two weeks of birth, while the average human baby takes ten times that long (twenty weeks) to accomplish this same feat. The chimp, having leapt ahead in development, increases the gap, walking by the end of the fourth week of life. Our biped offspring do not accomplish this until the chimp is already cavorting.     At birth, the human brain is only about one quarter of its eventual adult size and weight. The neonatal (newborn) chimp brain is approximately 350 cubic centimeters. As an adult, the size will reach 450 cubic centimeters, an increase of 100 cubic centimeters, a bit over three ounces, and a total expansion of some 28 percent. The newborn baby, in contrast, has a brain capacity similar to that of the newborn chimp--about 350 cubic centimeters. That is where the similarity ends. 
While the chimp is out being a chimp, the human brain just keeps on growing, reaching a volume of approximately 1,400 cubic centimeters, or four times the average size at birth. This represents an increase of 300 percent, well over ten times the increase managed by our close relatives the chimpanzees. In other words, most of the human brain develops after birth. All of that growth occurs while the brain is functioning at some level, and the vast majority of these brain functions directly relate to the environment. This means that environmental influences can help shape all postnatal development. It is during this prolonged period of dependency, of growth and development of the brain, that the brain is most plastic and thereby most susceptible to environmental influence. It is not just the ten thousand genes that figure out how all those synapses are to interact. The environment helps write the software. It is also during this period that most environmentally dependent skills are acquired by the brain. In essence, then, the brain of the chimp and all other newborns of all other species develops almost entirely within the womb, while the human brain develops primarily outside the womb. (The dolphin actually comes closest to man in this regard, its brain doubling in size after birth.) Man's brain is enriched by environmental input, while the chimp's brain, in contrast, could be said to be environmentally deprived. And no intentionally added postnatal enrichment can ever really close this gap.     Man, during his ascent, must have been selected for such postnatal development; in other words, those few genetic differences which make us Homo sapiens must relate primarily to prolonged or extended postnatal expansion and development of the brain. The limiting factor for the size of the head at birth is the size of the adult female pelvis. Dolphins solved this by making their pelvis vestigial, so there is no pelvic limitation on head size at birth. 
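The growth figures above reduce to a one-line calculation. This sketch simply replays the chapter's round numbers (all volumes in cubic centimeters):

```python
# Percent increase in brain volume from birth to adulthood,
# using the chapter's round figures (cubic centimeters).
def percent_increase(birth, adult):
    return (adult - birth) / birth * 100

chimp = percent_increase(350, 450)    # roughly 28.6 percent
human = percent_increase(350, 1400)   # 300 percent: a fourfold volume
print(chimp, human, human / chimp)    # human increase is ~10.5x the chimp's
```

The last number is what the text calls "well over ten times the increase managed by our close relatives."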
Humans solved the problem by transferring most of the development of the brain to the postnatal period. The sequence of human evolution has followed this order: First came upright posture and bipedal gait without any change in the size of the pelvis and thereby no change in the volume of the newborn brain. (That still hasn't changed: the human brain is about the same size at birth as that of the chimp.) Then came the postnatal development of the brain. But what allowed this to be selected for over the last several million years?     The period of dependency in humans is not brief. It is one of the drawbacks of juvenilization. Neither the infant nor the young child can survive on its own. This lasts many years. Until early adolescence, the human left to his or her own resources would find life all but impossible. This dependency is both a result of the lack of learned functions by the brain, and a sign that the process of acquiring such adaptive skills is still proceeding. The human brain is distinguished from the brains of other species by its postnatal capacity for learning, its apparent plasticity, but there are limits. There are critical periods or windows of opportunity for learning. Certain skills can only be acquired at specific times. If a skill is not acquired during that critical period, then the acquisition of that skill in later life will be harder, if not impossible. Nothing typifies this more than the acquisition of language.     Man is not unique in having such windows of opportunity that are open only for specific parts of the life cycle. Birds learn their specific songs by imitating the songs of others of their kind. To do this, almost every species of birds must hear these songs in the first couple months of their lives. If the songs are not heard during this critical period, they are never learned. Birds deprived of this input remain songless. The one exception to this rule is canaries. It appears that each season canaries can learn new songs. 
It is almost as if they can recapture their youth. This annual rebirth of a critical period for learning is accompanied by an annual crop of new neurons which makes the acquisition possible.     Human infants acquire a bewildering number of different skills as their brains mature. They learn to sit up, to stand, to crawl, to walk. None of these skills require any teaching. They do not even require any external input. These abilities are not grounded in teaching or even based on mimicry. The parents walk in one morning and the baby is standing up in the crib. A blind infant masters these skills, as does a deaf or neglected infant. It is as if the acquisition of these skills is built into the nervous system. The brain at birth cannot direct these activities, but as it matures it always acquires these functions, and when such abilities are acquired, they automatically are performed. It's called maturation.     Other potential skills are there but depend on environmental input. The ability to acquire songs is ingrained genetically within the brains of birds. This ability is thus waiting to be activated by the environment. The specific song a specific brain will acquire depends on the environmental input. The bird's brain can only acquire a song that it hears. This is analogous to the human acquisition of language. The ability to learn language is genetically encoded in our brains. Given an environmental exposure to language, a normal brain will always acquire language. If a brain gets exposure and does not develop language, then that brain is abnormal.     What specific language a particular brain will learn depends on which language or languages the brain is exposed to during its critical period. So Victor should have learned French, while Lacey should have learned English, had they been merely given the exposure. Children acquire language with very little assistance from anyone else. 
It is primarily self-taught, almost like learning to walk, the way Lacey learned to say "light" and "Big Bird" and "Bert." Our brains appear to be innately equipped with systems that are able to acquire language. But this innate capability is both governed and limited by the maturation of the brain.     As the human brain matures and acquires specific self-taught "hard-wired" skills, it passes through different stages in its openness to acquiring language. When the child is one year old and capable of standing up alone, that child is also able to duplicate a few syllables and can also understand some words. Six months later he or she is creeping backward and downstairs and can walk forward. The child now has a repertoire of anywhere from a few to fifty or more words. These are not just sounds that are echoed but words that are used and understood. At this stage they are only single words, not phrases. By two years of age, the child is running, falling fairly frequently, but nonetheless running. He or she now uses a greater number of words and uses them as part of short phrases. Babbling, which had begun at about six months when the child began sitting up, disappears. All of these changes, hopefully, are occurring while the child is in a protective environment which supplies language that the growing and developing brain responds to and integrates into its pattern of growth and development. And so it goes until age four, by which time language has become well established.     The process is the same for all children assuming lack of disease; it is the language that differs from culture to culture. Those differences are found not merely in vocabulary. There is great variety among languages in how phrases and sentences, and even the simplest thoughts beyond simple nouns and verbs, are organized. All of these become ingrained. 
A brain that acquires language and then uses it for internal thought processes may give up the use of spoken language because of deprivation, but the language of internal thought is never lost.     And no matter what culture the human infant is raised in, no matter what language he is exposed to, acquisition of language can only occur during a critical period of development if it is ever to be acquired at all. For language this critical period, or window of opportunity, appears to end around early adolescence.     Victor was the original basis for our understanding that such a critical period exists for the acquisition of language. Since then, other far better documented instances of children deprived of environmental language input have been studied and reported on, all of which seem to demonstrate this same point. A girl dubbed "Genie" is one of the best known. From the age of twenty months she had been kept in an isolated room and deprived of virtually all human contact. By that time she was well into her window of opportunity and should have been able to say a number of words and understand many more. She should have been able to repeat words, know how to use them and may even have begun to string them together into short, but meaningful phrases. Her enforced isolation and linguistic deprivation was continued until she reached the age of thirteen and had already passed puberty. Her brain had reached adult size without the benefit of environmental language. No Big Bird was talking to her. Nor was anyone else. Her obviously schizophrenic father treated her like an animal. He even barked at her instead of talking to her.     When Genie was finally rescued, she was totally without language. Like Victor and Lacey, she could neither speak nor understand. Whatever she may have learned early in her life had been lost. She could not say anything. It was at this point that language exposure and instruction were initiated. Genie did far better than Victor. 
She did learn to comprehend language reasonably well, but her speech lagged far behind and she never mastered even the rudiments of grammar. According to her mother, she had learned single words prior to her incarceration. So during her "critical period" she had already started to learn language. She had an advantage over Victor in that some key elements of her language learning after puberty actually represented relearning. As a result Genie was able to reattain at least a fair measure of comprehension.     If Victor is the original model of what happens when children are environmentally deprived during their language window, then Genie teaches us that the acquisition process isn't all-or-nothing. Genie was able to learn, though her achievement level was poor. She filled the gap between Victor and Lacey in our understanding of the neurophysiological basis of acquiring language. She shed light on the parameters for how the brain learns language. She showed us that the window of opportunity is not an all-or-nothing proposition. But no matter what the "mechanics" are, the process is bound by a critical period, a window of opportunity.     One can argue that Victor's and Genie's stories are nothing more than peculiar aberrations. Such children were deprived of far more than just speech; both were socially and emotionally deprived as well. Are they the only evidence we have for a critical period during which speech and language must be acquired?     Of course not. Victor was merely the starting place for our understanding. The best support for the concept of a critical period comes from what neurologists have learned by studying patients who have lost the ability to use language, a condition neurologists call aphasia. The sudden onset of aphasia is most commonly caused by stroke. 
While we commonly think of strokes as a disorder of the older population, the fact is that strokes can occur at any age (though there are different causes at different ages), and they cause the same problems in three-year-olds as they do in those three-year-olds' grandparents. However, though the nature of the neurological deficit is pretty much the same, what happens after the stroke is not. Our observations on the nature of recovery from aphasia have contributed to our notion of a critical period for speech acquisition, which in turn remains pivotal to our understanding that critical periods are part and parcel of normal brain function and development.

The basic neuroanatomy of loss of acquired speech can be summarized as some general rules which I have derived from generations of neurological observation:

Rule One: The sudden loss of speech implies that something has gone amiss in one hemisphere. We learned about this hemisphere by studying aphasia; therefore we learned "backward" that one hemisphere dominates our ability to speak.

Rule Two: If a patient is aphasic, that patient has a lesion in his or her dominant hemisphere for speech.

Rule Three: In right-handers the dominant hemisphere is invariably the left hemisphere. So if a right-hander is aphasic, he has a problem in his left hemisphere. In left-handers the situation is not as clear, even though in most left-handers the left hemisphere is dominant for speech.

Rule Four: Not all aphasias are identical. If a patient has more trouble speaking than understanding, the lesion is toward the front of the dominant hemisphere. If the patient has more trouble understanding than speaking, the lesion is toward the back of the hemisphere.

Rule Five: All other theories concerning the localization of speech within the brain are just talk.

What does aphasia have to do with how our brain has evolved to learn language?
When a patient suffers a sudden loss of speech, we know there is an injury to specific areas of the left hemisphere. Speech is lost. Is that the end of it? That depends on the age of the patient. Hence, aphasia gives us a view into the window of opportunity for learning or relearning speech. Every neurologist knows this from treating brain-injured patients. In childhood, such strokes are often the result of inflammation of the carotid artery that supplies blood to one of the cerebral hemispheres. This results in a significant injury of that hemisphere and a hemiplegia, paralysis of the limbs on the opposite side of the body (also called "infantile hemiplegia"). The child very well might not recover from the paralysis, since the hemisphere's control of movement is hard-wired from birth--or even before. If the dominant hemisphere for speech is affected by this loss of blood supply, the child also develops aphasia and can no longer speak.

If the stroke occurs at age three or four, speech becomes severely impaired, but after a brief period of time, normal speech is rapidly, and almost invariably fully, reacquired. But it is not reacquired by the injured dominant hemisphere. It now becomes housed in what had been the nondominant hemisphere for speech. During normal development, the left hemisphere is selected to become dominant. (At the risk of repeating myself, in left-handers either hemisphere can be selected, but more often the left is chosen.) In a very young child, however, this selection is not yet rigidly set, so that if the left hemisphere is injured, switching dominance is carried out almost without skipping a beat. In fact, once recovery begins, these brain-injured children pass their language milestones at a remarkably accelerated rate until they catch up to exactly where they should be. They then move on as if nothing had ever happened to them. The other side of the brain not only does the job; it does it every bit as well as the first side ever did.
So both sides must originally have had equal structural and functional potential as far as the control of speech is concerned.

But the ability to switch hemispheres for speech dominance does not go on forever. If it did, adults with strokes would all recover their speech, which we know too well is not what happens. Most children recover speech fully if they are struck before the age of nine. Puberty is the turning point. That is what we learned from Genie. Children who become aphasic between nine and their midteens fall in between. They rarely fully reacquire speech, but they recover more than adults do. By the age of fifteen or sixteen, the prognosis for recovery from an acute aphasia is the same as for adults. Recovery from aphasia is not impossible in adults. But the recovery that occurs is quite different. Most of it occurs rapidly and is due neither to adaptation by the brain in finding a new area in the opposite hemisphere to assume the function of the injured speech region nor to a process of relearning. Rather, it represents healing and "shrinkage" of the initial stroke. Such rapid recovery suggests that the initial "loss" was due to loss of function in areas of the brain that were partially injured but not permanently destroyed. The symptoms remaining after a few weeks tend to be permanent.

So much for the neurologic observations on the windows of learning. But we also have important observations from other fields of study. In fact, some elements of support for the notion of a critical period for language acquisition are well within the experience of almost every American, whether or not that person has ever heard of a child with infantile hemiplegia, or run across a Genie locked in the basement or a Victor wandering the back woods, subsisting on his own in the wild. We have all been able to observe the struggles of normal brains to acquire new language skills.
We have seen this in parents or grandparents of our own or of our friends, and in wave after wave, generation after generation, of immigrants. We ourselves may have struggled to acquire a new "first" language. The spoken language of immigrants may change, but the brain still thinks in the original language.

All of us are aware of the problems faced in learning a second language. What adult tourist hasn't returned from France amazed that four-year-old French children have mastered the skill of speaking French, complete with correct accents, a skill that has eluded the adult? Second languages are far easier to acquire during childhood than during adolescence or adulthood, and no matter how much we study the process of learning language, the results are the same.

Currently, researchers are observing Chinese and Korean immigrants' progress in acquiring English. And guess what? Children learn English quickly and correctly, with no accent at all, right up to the age of puberty. The major element of teaching is immersion in an environment in which English is spoken.

People can and do learn a second language after puberty and even during adult life. But it takes considerable effort, far more than it does prior to puberty. Furthermore, a second language learned after puberty is always just that--a second language grafted onto the first rather than a natural language fully and easily acquired.

There is nothing new in this. My grandparents had accents. My father learned a second language quite easily as a child because his parents spoke to each other in that language. It was in his environment, and so his brain acquired it. No formal teaching was required, no second-language classes, merely exposure.
What bothers me is that despite the neurological studies, clinical observations, and personal experiences that all teach us that acquiring language is tied to a window of opportunity in a child's development, the teaching of a second language in our schools is not seriously begun until high school, precisely the wrong time to start.

Language is not the only system within the brain that is influenced by the environment and for which there is a critical period during which this environmental influence can be expressed. Such is the rule for brain functions, not the exception. That is the way the juvenilized human brain evolves into the adult human brain. It develops in conjunction with environmental influences (nurturing), and without such influences the human brain does not develop normally. This general rule derives its best support from the work of two neuroscientists, David H. Hubel and Torsten N. Wiesel, who shared the 1981 Nobel Prize for physiology and medicine. Their work involved not speech but vision. The visual system is truly hard-wired. It is set at birth. If it is injured, no other part of the brain can take over. And if the brain is not injured, it works automatically. So we thought.

The visual cortex of cats and monkeys, and it is assumed of humans as well, contains neurons that respond selectively to specific features of the environment such as color or spatial relationships. This selectivity is always there in the normal brain, but it is not as fully automatic as it seems. It depends entirely on the environment. This is what Hubel and Wiesel demonstrated. When a kitten was reared in an environment made up entirely of vertical stripes, the neurons that responded to visual and spatial orientation "learned" to respond primarily to vertical lines. When horizontal lines were introduced into the cat's environment late in life, these neurons did not respond. In a sense, their brains were not able to see these lines.
The "bias" of brain cells to respond preferentially to certain lines and not to others could only be acquired during a critical period of growth in newborn kittens, and it is an example of a selective rather than an instructive process.

I should step back here and explain that there are two opposing views concerning how the brain learns acquired skills; one is referred to as instructive or constructive, the other as selective. The instructive model is the older view: networks of cells are "instructed" by experiences to form certain synapses (pathways) which, once formed and reinforced, become permanent. In this view, the window of learning closes when the nerve cells, either through age or through injury, can no longer reinforce the pathways.

The selective process works just the opposite way. All the kitten's pathways for seeing color and observing spatial relationships exist at birth, waiting to be used. As the juvenilized brain matures, those pathways that do not get used (selected) are eliminated: they atrophy and disappear forever. Most scientists agree that the selective process is most probably the way our brain learns. This process is attractive. It means that the more exposure the brain receives, especially at critical times, the more it learns: it can add on languages or colors or sounds, even if there is no formal instruction. But if the brain is not exposed to a nurturing environment, the pathways close down--end of ballgame. In other words, the kitten's system was initially capable of responding to both vertical and horizontal lines, but the artificial limitation of exposure to only vertical lines resulted in a vertical bias. This limited the neurophysiological function of the brain. The cats learned the "language" of only vertical lines.

So, like the cats that were never exposed to horizontal lines in early life, Victor had never had language in his environment. The cats never learned to see horizontal lines, and Victor never learned language.
But Lacey did. Her window of opportunity had not passed her by. Her brain could still acquire language, and it did. Not with a vengeance, but with the normal skills of a normal brain interacting with its environment. No teaching was required. Just input. Just an environment with more than vertical lines. And Genie seems to have been rescued just in time to get her partway there.

How did humans get this way? How did language develop? In order to understand, we have to shed a few myths ingrained in our culture. We have to turn our attention to the cavewoman.

God created man in his image. At least that is the tradition according to Western religious beliefs. That tradition developed and thrived within male-dominated cultures. Our concept of the biological ascent of man arose in that very same cultural milieu. In the century and a half since the publication of Darwin's Origin of Species, white male anthropologists and paleontologists designed a blueprint of human evolution in which the success of hunting was viewed as the primary factor in the expansion and dominance of humankind.

According to this conceptualization, "man the hunter" developed better and better tools for hunting prey and as a direct result gained supremacy over the beasts. It was these tools--designed by males, used by males, in the male occupation of hunting--that gave man his great biological advantage in the struggle to survive. Caveman and the woolly mammoth had both struggled to survive the Ice Age. Man had made it. So had the woolly mammoths. But then man devised a secret weapon and bingo, no more woolly mammoths. By making better tools, man changed the playing field. He then pulled woman along behind him, perhaps not by her hair, but certainly not as an equal partner in the ascent. This model was created by men, about men, for men.
It proclaims that the success men achieved with their newly created tools resulted in the cultural advances that allowed the Egyptians to build the pyramids and Homer to compose the Iliad. Once that was accomplished, the flowering of a literate culture was just around the corner. We are asked to believe that all of this came about because males figured out how to make better flint blades and spears.

The problem is that this "flow diagram" just doesn't stand up to rigorous scrutiny. The fact is that hunting prey for survival is fairly easy. Lions do it quite well. They also teach their cubs how to do it equally successfully. And, of course, lions still can't talk and have not yet developed a literate society. Wolves hunt in coordinated packs and do it quite successfully, generation after generation. Last time I checked, they too were without language.

While improved hunting implements could assure a better supply of food, and therefore a decrease in infant mortality (the key to true Darwinian biological superiority), it is difficult to ascribe to such a technological advance any changes other than a mere increase in numbers. Certainly not man's cultural explosion, nor the development of language.

So if it wasn't "man the hunter" who was responsible for the explosive biological advantage of modern humans, what was?

Our advantages over other species are most probably due to the development of a complex language. And women are far more likely to have played the more significant role in this than men. Women were the ones who did the tough job: raising the juvenilized children, in caves or any other environment, and teaching those children what they needed to know to survive in the world while they were still dependent, weak, and slow. Teaching survival to the juvenilized infant depended on language. Language gave the human the distinct advantage for survival.
And over a million years or two, the result was the evolution of brains selected for the acquisition of language and other skills during the period of prolonged juvenilization.

How long did that take?

Copyright © 2000 Estate of Harold Klawans, M.D. All rights reserved.

Table of Contents

Preface, p. 9
Part 1 The Ascent of Cognitive Function
1. Defending the Cavewoman: The Window of Opportunity for Learning, p. 15
2. A Lucy of My Very Own: Locating Handedness and Speech, p. 37
3. The Gift of Speech: Frank Morrell and the Treatment of Acquired Epileptic Aphasia, p. 56
4. Manganese Miners: Hard Wiring for Movement, p. 73
5. I Never Read a Movie I Liked: The Architecture of Reading, p. 88
6. One of These Things Is Not Like the Others: How Literacy Changes the Brain, p. 110
7. The Music Goes Round and Round: But It Comes in Where?, p. 121
Part 2 The Brain's Soft Spots: Programmed Cell Death, Prions, and Pain
8. My Lunch with Oliver: Why That Morning Was Different from All the Other Mornings, p. 143
9. Two Sets of Brains: Something Old, Something "New", p. 154
10. Anticipation: Unto the Third Generation and Beyond, p. 165
11. The Hermit of Thief River Falls: On First Meeting an Eponym, p. 192
12. Mad Cows and Mad Markets: Ice-Nine and the Non-Darwinian Evolution of Man and Disease, p. 212
13. Whatever Happened to Baby Neanderthal? An Afterthought, p. 237
Index, p. 245