The Story Maker

The Biology of Memory

The science of learning is, at bottom, a study of the mental muscle doing the work—the living brain—and how it manages the streaming sights, sounds, and scents of daily life. That it does so at all is miracle enough. That it does so routinely is beyond extraordinary.
Think of the waves of information rushing in every waking moment: the hiss of the kettle, the flicker of movement in the hall, the twinge of back pain, the tang of smoke. Then add the demands of a typical layer of multitasking—say, preparing a meal while monitoring a preschooler, periodically returning work emails, and picking up the phone to catch up with a friend.
Insane.
The machine that can do all that at once is more than merely complex. It’s a cauldron of activity. It’s churning like a kicked beehive.
Consider several numbers. The average human brain contains 100 billion neurons, the cells that make up its gray matter. Most of these cells link to thousands of other neurons, forming a universe of intertwining networks that communicate in a ceaseless, silent electrical storm with a storage capacity, in digital terms, of a million gigabytes. That’s enough to hold three million TV shows. This biological machine hums along even when it’s “at rest,” staring blankly at the bird feeder or some island daydream, using about 90 percent of the energy it burns while doing a crossword puzzle. Parts of the brain are highly active during sleep, too.
The brain is a dark, mostly featureless planet, and it helps to have a map. A simple one will do, to start. The sketch below shows several areas that are central to learning: the entorhinal cortex, which acts as a kind of filter for incoming information; the hippocampus, where memory formation begins; and the neocortex, where conscious memories are stored once they’re flagged as keepers.

This diagram is more than a snapshot. It hints at how the brain operates. The brain has modules, specialized components that divide the labor. The entorhinal cortex does one thing, and the hippocampus does another. The right hemisphere performs different functions from the left one. There are dedicated sensory areas, too, processing what you see, hear, and feel. Each does its own job and together they generate a coherent whole, a continually updating record of past, present, and possible future.
In a way, the brain’s modules are like specialists in a movie production crew. The cinematographer is framing shots, zooming in tight, dropping back, stockpiling footage. The sound engineer is recording, fiddling with volume, filtering background noise. There are editors and writers, a graphics person, a prop stylist, a composer working to supply tone, feeling—the emotional content—as well as someone keeping the books, tracking invoices, the facts and figures. And there’s a director, deciding which pieces go where, braiding all these elements together to tell a story that holds up. Not just any story, of course, but the one that best explains the “material” pouring through the senses. The brain interprets scenes in the instants after they happen, inserting judgments, meaning, and context on the fly. It also reconstructs them later on—what exactly did the boss mean by that comment?—scrutinizing the original footage to see how and where it fits into the larger movie.
It’s a story of a life—our own private documentary—and the film “crew” serves as an animating metaphor for what’s happening behind the scenes. How a memory forms. How it’s retrieved. Why it seems to fade, change, or grow more lucid over time. And how we might manipulate each step, to make the details richer, more vivid, clearer.
Remember, the director of this documentary is not some film school graduate, or a Hollywood prince with an entourage. It’s you.
• • •

Before wading into brain biology, I want to say a word about metaphors. They are imprecise, practically by definition. They obscure as much as they reveal. And they’re often self-serving,* crafted to serve some pet purpose—in the way that the “chemical imbalance” theory of depression supports the use of antidepressant medication. (No one knows what causes depression or why the drugs have the effects they do.)
Fair enough, all around. Our film crew metaphor is a loose one, to be sure—but then so is scientists’ understanding of the biology of memory, to put it mildly. The best we can do is dramatize what matters most to learning, and the film crew does that just fine.
To see how, let’s track down a specific memory in our own brain.
Let’s make it an interesting one, too, not the capital of Ohio or a friend’s phone number or the name of the actor who played Frodo. No, let’s make it the first day of high school. Those tentative steps into the main hallway, the leering presence of the older kids, the gunmetal thump of slamming lockers. Everyone over age fourteen remembers some detail from that day, and usually an entire video clip.
That memory exists in the brain as a network of linked cells. Those cells activate—or “fire”—together, like a net of lights in a department store Christmas display. When the blue lights blink on, the image of a sleigh appears; when the reds come on, it’s a snowflake. In much the same way, our neural networks produce patterns that the brain reads as images, thoughts, and feelings.
The cells that link to form these networks are called neurons. A neuron is essentially a biological switch. It receives signals from one side and—when it “flips” or fires—sends a signal out the other, to the neurons to which it’s linked.
The neuron network that forms a specific memory is not a random collection. It includes many of the same cells that flared when the memory was first formed—when we first heard that gunmetal thump of lockers. It’s as if these cells are bound in collective witness of that experience. The connections between the cells, called synapses, thicken with repeated use, facilitating faster transmission of signals.

Intuitively, this makes some sense; many remembered experiences feel like mental reenactments. But not until 2008 did scientists capture memory formation and retrieval directly, in individual human brain cells. In an experiment, doctors at the University of California, Los Angeles, threaded filament-like electrodes deep into the brains of thirteen people with epilepsy who were awaiting surgery.
This is routine practice. Epilepsy is not well understood; the tiny hurricanes of electrical activity that cause seizures seem to come out of the blue. These squalls often originate in the same neighborhood of the brain for any one individual, yet the location varies from person to person. Surgeons can remove these small epicenters of activity but first they have to find them, by witnessing and recording a seizure. That’s what the electrodes are for, pinpointing location. And it takes time. Patients may lie in the hospital with electrode implants for days on end before a seizure strikes. The UCLA team took advantage of this waiting period to answer a fundamental question.
Each patient watched a series of five- to ten-second video clips of well-known shows like Seinfeld and The Simpsons, celebrities like Elvis, or familiar landmarks. After a short break, the researchers asked each person to freely recall as many of the videos as possible, calling them out as they came to mind. During the initial viewing of the videos, a computer had recorded the firing of about one hundred neurons. The firing pattern was different for each clip; some neurons fired furiously and others were quiet. When a patient later recalled one of the clips, say of Homer Simpson, the brain showed exactly the same pattern as it had originally, as if replaying the experience.
“It’s astounding to see this in a single trial; the phenomenon is strong, and we knew we were listening in the right place,” the senior author of the study, Itzhak Fried, a professor of neurosurgery at UCLA and Tel Aviv University, told me.
There the experiment ended, and it’s not clear what happened to the memory of those brief clips over time. If a person had seen hundreds of Simpsons episodes, then this five-second clip of Homer might not stand out for long. But it could. If some element of participating in the experiment was especially striking—for example, the sight of a man in a white coat fiddling with wires coming out of your exposed brain as Homer belly-laughed—then that memory could leap to mind easily, for life.
My first day of high school was in September 1974. I can still see the face of the teacher I approached in the hallway when the bell rang for the first class. I was lost, the hallway was swarmed, my head racing with the idea that I might be late, might miss something. I can still see streams of dusty morning light in that hallway, the ugly teal walls, an older kid at his locker, stashing a pack of Winstons. I swerved beside the teacher and said, “Excuse me” in a voice that was louder than I wanted. He stopped, looked down at my schedule: a kind face, wire-rimmed glasses, wispy red hair.
“You can follow me,” he said, with a half smile. “You’re in my class.”
Saved.
I have not thought about that for more than thirty-five years, and yet there it is. Not only does it come back but it does so in rich detail, and it keeps filling itself out the longer I inhabit the moment: here’s the sensation of my backpack slipping off my shoulder as I held out my schedule; now the hesitation in my step, not wanting to walk with a teacher. I trailed a few steps behind.
This kind of time travel is what scientists call episodic, or autobiographical, memory, for obvious reasons. It has some of the same sensual texture as the original experience, the same narrative structure. Not so with the capital of Ohio, or a friend’s phone number: We don’t remember exactly when or where we learned those things. Those are what researchers call semantic memories, embedded not in narrative scenes but in a web of associations. The capital of Ohio, Columbus, may bring to mind images from a visit there, the face of a friend who moved to Ohio, or the grade school riddle, “What’s round on both sides and high in the middle?” This network is factual, not scenic. Yet it, too, “fills in” as the brain retrieves “Columbus” from memory.
In a universe full of wonders, this has to be on the short list: Some molecular bookmark keeps those neuron networks available for life and gives us nothing less than our history, our identity.
Scientists do not yet know how such a bookmark could work. It’s nothing like a digital link on a computer screen. Neural networks are continually in flux, and the one that formed back in 1974 is far different from the one I have now. I’ve lost some detail and color, and I have undoubtedly done a little editing in retrospect, maybe a lot.
It’s like writing about a terrifying summer camp adventure in eighth grade, the morning after it happened, and then writing about it again, six years later, in college. The second essay is much different. You have changed, so has your brain, and the biology of this change is shrouded in mystery and colored by personal experience. Still, the scene itself—the plot—is fundamentally intact, and researchers do have an idea of where that memory must live and why. It’s strangely reassuring, too. If that first day of high school feels like it’s right there on the top of your head, it’s a nice coincidence of language. Because, in a sense, that’s exactly where it is.
• • •

For much of the twentieth century scientists believed that memories were diffuse, distributed through the areas of the brain that support thinking, like pulp in an orange. Any two neurons look more or less the same, for one thing; and they either fire or they don’t. No single brain area looked essential for memory formation.
Scientists had known since the nineteenth century that some skills, like language, are concentrated in specific brain regions. Yet those seemed to be exceptions. In the 1940s, the neuroscientist Karl Lashley showed that rats that learned to navigate a maze were largely unfazed when given surgical injuries in a variety of brain areas. If there was some single memory center, then at least one of those incisions should have caused severe deficits. Lashley concluded that virtually any area of the thinking brain was capable of supporting memory; if one area was injured, another could pick up the slack.
In the 1950s, however, this theory began to fall apart. Brain scientists began to discover, first, that developing nerve cells—baby neurons, so to speak—are coded to congregate in specific locations in the brain, as if preassigned a job. “You’re a visual cell, go to the back of the brain.” “You, over there, you’re a motor neuron, go straight to the motor area.” This discovery undermined the “interchangeable parts” hypothesis.
The knockout punch fell when an English psychologist named Brenda Milner met a Hartford, Connecticut, man named Henry Molaison. Molaison was a tinkerer and machine repairman who had trouble keeping a job because he suffered devastating seizures, as many as two or three a day, which came with little warning and often knocked him down, out cold. Life had become impossible to manage, a daily minefield. In 1953, at the age of twenty-seven, he arrived at the office of William Beecher Scoville, a neurosurgeon at Hartford Hospital, hoping for relief.
Molaison probably had a form of epilepsy, but he did not do well on antiseizure drugs, the only standard treatment available at the time. Scoville, a well-known and highly skilled surgeon, suspected that whatever their cause the seizures originated in the medial temporal lobes. Each of these lobes—there’s one in each hemisphere, mirroring one another, like the core of a split apple—contains a structure called the hippocampus, which was implicated in many seizure disorders.
Scoville decided that the best option was to surgically remove from Molaison’s brain two finger-shaped slivers of tissue, each including the hippocampus. It was a gamble; it was also an era when many doctors, Scoville prominent among them, considered brain surgery a promising treatment for a wide variety of mental disorders, including schizophrenia and severe depression. And sure enough, postop, Molaison had far fewer seizures.
He also lost his ability to form new memories.
Every time he had breakfast, every time he met a friend, every time he walked the dog in the park, it was as if he was doing so for the first time. He still had some memories from before the surgery, of his parents, his childhood home, of hikes in the woods as a kid. He had excellent short-term memory, the ability to keep a phone number or name in mind for thirty seconds or so by rehearsing it, and he could make small talk. He was as alert and sensitive as any other young man, despite his loss. Yet he could not hold a job and lived, more so than any mystic, in the moment.
In 1953, Scoville described his patient’s struggles to a pair of doctors in Montreal, Wilder Penfield and Brenda Milner, a young researcher who worked with him. Milner soon began taking the night train down to Hartford every few months to spend time with Molaison and explore his memory. It was the start of a most unusual, decade-long partnership, with Milner continually introducing Molaison to novel experiments and him cooperating, nodding his head and fully understanding their purpose—for as long as his short-term memory could hold on. In those fleeting moments they were collaborators, Milner said, and that collaboration would quickly and forever alter the understanding of learning and memory.
In her first experiment, conducted in Scoville’s office, Milner had Molaison try to remember the numbers 5, 8, and 4. She then left the office to have coffee and returned twenty minutes later, asking “What were the numbers?” He’d remembered them by mentally rehearsing while she was gone.
“Well, that’s very good,” Milner said. “And do you remember my name?”
“No, I’m sorry,” he said. “My trouble is my memory.”
“I’m Dr. Milner, and I come from Montreal.”
“Oh, Montreal, Canada—I was in Canada once, I went to Toronto.”
“Oh. Do you still remember the number?”
“Number?” Molaison said. “Was there a number?”
“He was a very gracious man, very patient, always willing to try the tasks I would give him,” Milner, now a professor of cognitive neuroscience at the Montreal Neurological Institute and McGill University, told me. “And yet every time I walked in the room, it was like we’d never met.”
In 1962, Milner presented a landmark study in which she and Molaison—now known as H.M. to protect his privacy—demonstrated that a part of his memory was fully intact. In a series of trials, she had him draw a five-point star on a piece of paper while he watched his drawing hand in a mirror. This is awkward, and Milner made it more so. She had him practice tracing the star between borders, as if working his way through a star-shaped maze. Every time H.M. tried this, it struck him as an entirely new experience. He had no memory of doing it before. Yet with practice he became proficient. “At one point after many of these trials, he said to me, ‘Huh, this was easier than I thought it would be,’ ” Milner said.
The implications of Milner’s research took some time to sink in. Molaison could not remember new names, faces, facts, or experiences. His brain could register the new information but, without a hippocampus, could not hold on to it. This structure and others nearby—which had been removed in the surgery—are clearly necessary to form such memories.
He could develop new physical skills, however, like tracing the star and later, in his old age, using a walker. This ability, called motor learning, is not dependent on the hippocampus. Milner’s work showed that there were at least two systems in the brain to handle memory, one conscious and the other subconscious. We can track and write down what we learned today in history class, or in geometry, but not in soccer practice or gymnastics, not in anything like the same way. Those kinds of physical skills accumulate without our having to think much about them. We may be able to name the day of the week when we first rode a bike at age six, but we cannot point to the exact physical abilities that led up to that accomplishment. Those skills—the balance, the steering, the pedal motion—refined themselves and came together suddenly, without our having to track or “study” them.
The theory that memory was uniformly distributed, then, was wrong. The brain had specific areas that handled different types of memory formation.
Henry Molaison’s story didn’t end there. One of Milner’s students, Suzanne Corkin, later carried on the work with him at the Massachusetts Institute of Technology. In the course of hundreds of studies spanning more than forty years, she showed that he had many presurgery memories, of the war, of FDR, of the layout of his childhood house. “Gist memories, we call them,” Dr. Corkin told me. “He had the memories, but he couldn’t place them in time exactly; he couldn’t give you a narrative.”
Studies done in others with injuries in the same areas of the brain showed a similar before/after pattern. Without a functioning hippocampus, people cannot form new, conscious memories. Virtually all of the names, facts, faces, and experiences they do remember predate their injury. Those memories, once formed, must therefore reside elsewhere, outside the hippocampus.
The only viable candidate, scientists knew, was the brain’s thin outer layer, the neocortex. The neocortex is the seat of human consciousness, an intricate quilt of tissue in which each patch has a specialized purpose. Visual patches are in the back. Motor control areas are on the side, near the ears. One patch on the left side helps interpret language; another nearby handles spoken language, as well as written.

This layer—the “top” of the brain, as it were—is the only area with the tools capable of re-creating the rich sensory texture of an autobiographical memory, or the assortment of factual associations for the word “Ohio” or the number 12. The first-day-of-high-school network (or networks; there likely are many) must be contained there, largely if not entirely. My first-day memory is predominantly visual (the red hair, the glasses, the teal walls) and auditory (the hallway noise, the slamming lockers, the teacher’s voice)—so the network has plenty of neurons in the visual and auditory cortex. Yours may include the smell of the cafeteria, the deadweight feel of your backpack, with plenty of cells in those cortical patches.
To the extent that it’s possible to locate a memory in the brain, that’s where it resides: in neighborhoods along the neocortex primarily, not at any single address.
That the brain can find this thing and bring it to life so fast—instantaneously, for most of us, complete with emotion, and layers of detail—defies easy explanation. No one knows how that happens. And it’s this instant access that creates what to me is the brain’s grandest illusion: that memories are “filed away” like video scenes that can be opened with a neural click, and snapped closed again.
The truth is stranger—and far more useful.
• • •

The risk of peering too closely inside the brain is that you can lose track of what’s on the outside—i.e., the person. Not some generic human, either, but a real one. Someone who drinks milk straight from the carton, forgets friends’ birthdays, and who can’t find the house keys, never mind calculate the surface area of a pyramid.
Let’s take a moment to review. The close-up of the brain has provided a glimpse of what cells do to form a memory. They fire together during an experience. Then they stabilize as a network through the hippocampus. Finally, they consolidate along the neocortex in a shifting array that preserves the basic plot points. Nonetheless, to grasp what people do to retrieve a memory—to remember—requires stepping back for a wide shot. We’ve zoomed in, à la Google Maps, to see cells at street level; it’s time to zoom out and have a look at the larger organism: at people whose perceptions reveal the secrets of memory retrieval.
The people in question are, again, epilepsy patients (to whom brain science owes debts without end).
In some epilepsy cases, the flares of brain activity spread like a chemical fire, sweeping across wide stretches of the brain and causing the kind of full-body, blackout seizures that struck H.M. as a young man. Those seizures are so hard to live with, and often so resistant to drug treatment, that people consider brain surgery. No one has the same procedure H.M. underwent, of course, but there are other options. One of those is called split brain surgery. The surgeon severs the connections between the left and right hemispheres of the brain, so the storms of activity are confined to one side.
This quiets the seizures, all right. But at what cost? If the brain’s left and right halves cannot “talk” to each other at all, then split brain surgery should cause serious damage, drastically altering someone’s personality, or at least their perceptions. Yet it doesn’t. The changes are so subtle, in fact, that the first studies of these so-called split brain patients in the 1950s found no differences in thinking or perception at all. No slip in IQ; no deficits in analytical thinking.
The changes had to be there—the brain was effectively cut in half—but it would take some very clever experiments to reveal them.
In the early 1960s, a trio of scientists at the California Institute of Technology finally did so, by devising a way to flash pictures to one hemisphere at a time. Bingo. When split brain patients saw a picture of a fork with only their right hemisphere, they couldn’t say what it was. They couldn’t name it. Due to the severed connection, their left hemisphere, where language is centered, received no information from the right side. And the right hemisphere—which “saw” the fork—had no language to name it.
And here was the kicker: The right hemisphere could direct the hand it controls to draw the fork.
The Caltech trio didn’t stop there. In a series of experiments with these patients, the group showed that the right hemisphere could also identify objects by touch, correctly selecting a mug or a pair of scissors by feel after seeing the image of one.
The implications were clear. The left hemisphere was the intellectual, the wordsmith, and it could be severed from the right without any significant loss of IQ. The right side was the artist, the visual-spatial expert. The two worked together, like copilots.
This work percolated into the common language and fast, as shorthand for types of skills and types of people: “He’s a right brain guy, she’s more left brain.” It felt right, too: Our aesthetic sensibility, open and sensual, must come from a different place than cool logic.
What does any of this have to do with memory?
It took another quarter century to find out. And it wouldn’t happen until scientists posed a more fundamental question: Why don’t we feel two-brained, if we have these two copilots?
“That was the question, ultimately,” said Michael Gazzaniga, who coauthored the Caltech studies with Roger Sperry and Joseph Bogen in the 1960s. “Why, if we have these separate systems, is it that the brain has a sense of unity?”
That question hung over the field, unanswered, for decades. The deeper that scientists probed, the more confounding the mystery seemed to be. The left brain/right brain differences revealed a clear, and fascinating, division of labor. Yet scientists kept finding other, more intricate, divisions. The brain has thousands, perhaps millions, of specialized modules, each performing a special skill—one calculates a change in light, for instance, another parses a voice tone, a third detects changes in facial expression. The more experiments that scientists did, the more specialization they found, and all of these mini-programs run at the same time, often across both hemispheres. That is, the brain sustains a sense of unity not only in the presence of its left and right copilots. It does so amid a cacophony of competing voices coming from all quarters, the neural equivalent of open outcry at the Chicago Board of Trade.
How?
The split brain surgery would again provide an answer.
In the early 1980s, Dr. Gazzaniga performed more of his signature experiments with split brain patients—this time with an added twist. In one, for example, he flashed a patient two pictures: The man’s left hemisphere saw a chicken foot, and his right saw a snow scene. (Remember, the left is where language skills are centered, and the right is holistic, sensual; it has no words for what it sees.) Dr. Gazzaniga then had the man choose related images for each picture from an array visible to both hemispheres, say, a fork, a shovel, a chicken, and a toothbrush. The man chose a chicken to go with the foot, and a shovel to go with the snow. So far, so good.
Then Dr. Gazzaniga asked him why he chose those items—and got a surprise. The man had a ready answer for one choice: The chicken goes with the foot. His left hemisphere had seen the foot. It had words to describe it and a good rationale for connecting it to the chicken.
Yet his left brain had not seen the picture of the snow, only the shovel. He had chosen the shovel on instinct but had no conscious explanation for doing so. Now, asked to explain the connection, he searched his left brain for the symbolic representation of the snow and found nothing. Looking down at the picture of the shovel, the man said, “And you need a shovel to clean out the chicken shed.”
The left hemisphere was just throwing out an explanation based on what it could see: the shovel. “It was just making up any old BS,” Gazzaniga told me, laughing at the memory of the experiment. “Making up a story.”
In subsequent studies he and others showed that the pattern was consistent. The left hemisphere takes whatever information it gets and tells a tale to conscious awareness. It does this continually in daily life, and we’ve all caught it in the act—overhearing our name being whispered, for example, and filling in the blanks with assumptions about what people are gossiping about.
The brain’s cacophony of voices feels coherent because some module or network is providing a running narration. “It only took me twenty-five years to ask the right question to figure it out,” Gazzaniga said, “which was why? Why did you pick the shovel?”
All we know about this module is it resides somewhere in the left hemisphere. No one has any idea how it works, or how it strings together so much information so fast. It does have a name. Gazzaniga decided to call our left brain narrating system “the interpreter.”
This is our director, in the film crew metaphor. The one who makes sense of each scene, seeking patterns and inserting judgments based on the material; the one who fits loose facts into a larger whole to understand a subject. Not only makes sense but makes up a story, as Gazzaniga put it—creating meaning, narrative, cause and effect.
It’s more than an interpreter. It’s a story maker.
This module is vital to forming a memory in the first place. It’s busy answering the question “What just happened?” in the moment, and those judgments are encoded through the hippocampus. That’s only part of the job, however. It also answers the questions “What happened yesterday?” “What did I make for dinner last night?” And, for global religions class, “What were the four founding truths of Buddhism, again?”
Here, too, it gathers the available evidence, only this time it gets the sensory or factual cues from inside the brain, not from outside. Think. To recall the Buddha’s truths, start with just one, or a fragment of one. Anguish. The Buddha talked about anguish. He said anguish was … to be understood. That’s right, that’s truth number one. The second truth had to do with meditation, with not acting, with letting go. Let go of anguish? That’s it; or close. Another truth brings to mind a nature trail, a monk padding along in robes—the path. Walking the path? Follow the path?
So it goes. Each time we run the tape back, a new detail seems to emerge: the smell of smoke in the kitchen; the phone call from a telemarketer. The feeling of calmness when reading “let go of anguish”—no, it was let go of the sources of anguish. Not walk the path, but cultivate the path. These details seem “new” in part because the brain absorbs a lot more information in the moment than we’re consciously aware of, and those perceptions can surface during remembering. That is to say: The brain does not store facts, ideas, and experiences like a computer does, as a file that is clicked open, always displaying the identical image. It embeds them in networks of perceptions, facts, and thoughts, slightly different combinations of which bubble up each time. And the just-retrieved memory does not overwrite the previous one but intertwines and overlaps with it. Nothing is completely lost, but the memory trace is altered, and for good.
As scientists put it, using our memories changes our memories.
After all the discussion of neurons and cell networks; after Lashley’s rats and H.M.; after the hippocampus, split brain patients, and the story maker, this seems elementary, even mundane.
It’s not.
