Sushi and the science of synapses
What is the molecular basis of learning? Here LMU biochemist Michael Kiebler shares his insights into how associative learning is encoded in the brain.
A visit to a Japanese restaurant in the company of Michael Kiebler, Professor of Cell Biology at LMU’s Center for Biomedical Research, can be very enlightening. Kiebler, a biochemist, is interested in elementary operations, and he often mentions “belt sushi” – referring not to the food itself, but to the logistics of self-service. Sushi lovers are tempted by a whole range of makis and sashimis sedately gliding past on a conveyor belt, and each guest makes his own selection. The sushi belt, Kiebler says, provides an apt metaphor for the problem he works on: He wants to know how the connections between nerve cells are modified during the process of learning. And one aspect of how such modifications are targeted to the correct locations in neural networks does indeed recall how sushi restaurants deliver delicacies to their guests.
The analogy may sound far-fetched, but it may nevertheless help make sense of a highly complex process, for it is still not understood in detail how the human brain really works and what happens when we learn something new. One of the central problems in modern neuroscience is how the associations of ideas that underlie associative learning are actually represented in the brain. That new impressions and experiences are linked to long-lived changes in the functional architecture of the brain is now an established fact. “In the course of a conversation, for example,” Kiebler remarks, “the brain is remodeled. It takes on a different form from before.”
One trillion connections
The brain is always being restructured: New patterns of neural connectivity, mediated by structures called synapses, are set up between its constituent nerve cells, existing networks are extended or otherwise modified, links are forged to already stored information, obsolete contacts are eliminated. Repeated stimulation of sets of neurons is associated with enhanced responsiveness of the synaptic contacts between them – the phenomenon of synaptic plasticity. In basic – and admittedly reductionist – neurobiological terms, learning involves no more than the storage of novel patterns of synaptic connections between different parts of the brain for later use. How is this remodeling done? What kinds of command-and-control systems underlie this process? What is the molecular basis of learning?
To say that the brain is highly complex is a gross understatement: Its 100 billion or so nerve cells and the myriad connections between them constitute an impenetrable maze. Each neuron is linked to as many as 10,000 others, giving a grand total of some 10¹³ connections. A generic nerve cell consists of a cell body that contains the nucleus, a long primary fiber called an axon that acts as a transmission cable, and a forest of shorter processes called dendrites that detect incoming signals. Each dendrite itself bears a multiplicity of synapses, many of them in the form of short protrusions called dendritic spines. The nerves are the brain’s interface with its environment. When something “out there” touches an arm, for instance, “ion channels” in sensory nerves in the skin at that point are activated. This causes a change in the electrical potential across their cell membranes (“depolarization”) that propagates as an “action potential” along their axons. When it reaches the synapse, the excitatory impulse is passed to the neighboring cell. In fact, the physical gap between adjacent neurons at the synapse is bridged by the release of a “neurotransmitter”, a chemical messenger which depolarizes the post-synaptic cell, sending the signal on to the somatic sensory cortex in the brain.
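The all-or-nothing character of the action potential described above can be sketched in a few lines of code. The following is a minimal, purely illustrative leaky integrate-and-fire toy (not a model used by Kiebler's group); the function name, parameter values, and units are assumptions chosen for readability:

```python
# Toy leaky integrate-and-fire neuron (illustrative sketch only).
# Input current depolarizes the membrane; when the potential crosses
# a threshold, the cell fires an "action potential" and resets.

def simulate(input_current, v_rest=-70.0, v_thresh=-55.0, tau=10.0, dt=1.0):
    """Return spike times (arbitrary ms) for a constant input current."""
    v = v_rest
    spikes = []
    for t in range(200):
        # Leak pulls v back toward rest; input depolarizes the membrane.
        v += dt / tau * ((v_rest - v) + input_current)
        if v >= v_thresh:      # threshold crossed: fire
            spikes.append(t)
            v = v_rest         # reset after the spike
    return spikes

print(len(simulate(20.0)) > 0)   # strong input drives repeated firing
print(len(simulate(5.0)) == 0)   # weak input never reaches threshold
```

The point of the sketch is the threshold: a subthreshold depolarization produces no output at all, while a sufficient one yields a full, stereotyped spike.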
For whom the bell tolls
Transmission of information between nerve cells during the processes of learning and recall is accomplished by this same electrophysiological principle of alternating electrical and chemical signals, but the circuitry involved follows its own specific logic. What exactly happens when we, for example, learn to associate a certain face with a telephone number, or an English word with the meaning of its German counterpart? The same sort of learning is at work in classical conditioning. Pavlov’s dogs not only salivated when confronted with a bowl of food, they learned to associate its appearance with a prior signal and began to salivate when a bell rang. In other words, associative learning involves the linkage, storage and recall of at least two different snippets of information. Initially, the ringing bell has no semantic association with feeding. Only when it is associatively coupled to the expectation of food does it acquire such a meaning.
So how are new neuronal connections formed during the process of learning, and how are they stabilized? Neurobiologists believe that the phenomenon of long-term potentiation (LTP) plays a central role. Normally, the receiving (‘postsynaptic’) cell requires a relatively strong stimulus from the presynaptic cell to initiate an action potential and fire in its turn. However, if a stimulus is repeated in quick succession, the evoked potential can subsequently – even hours later – be triggered by a much weaker impulse: Repeated stimulation makes synaptic transmission more efficient, and if these neurons also fire in phase with each other, synaptic efficiency is further enhanced.
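The logic of LTP sketched above – repeated strong stimulation makes a synapse so efficient that a previously subthreshold input suffices – can be caricatured in a toy calculation. This is a deliberately simplified sketch; the threshold, weight, and potentiation factor are arbitrary assumed values, not measured quantities:

```python
# Toy illustration of long-term potentiation (LTP).
# A synaptic "weight" scales the stimulus; the postsynaptic cell
# fires only if the weighted input crosses a fixed threshold.

THRESHOLD = 1.0

def responds(weight, stimulus):
    """True if the postsynaptic cell would fire for this input."""
    return weight * stimulus >= THRESHOLD

weight = 0.5
weak_stimulus = 1.2

print(responds(weight, weak_stimulus))   # False: weak input fails at baseline

# Repeated, rapid ("tetanic") stimulation potentiates the synapse.
for _ in range(10):
    weight *= 1.1                        # each pairing strengthens the contact

print(responds(weight, weak_stimulus))   # True: the same weak input now fires the cell
```

After the loop the weight has roughly doubled, so the identical weak stimulus that failed before now crosses threshold – the toy analogue of the hours-long enhancement described above.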
An “extremely cool” mechanism
Much of what we know about LTP at the physiological level comes from the work done by Nobel Laureate Eric Kandel at Columbia University in New York (Kiebler was a postdoc in his group). The excitatory neurotransmitter (in this case glutamate) is secreted in a succession of discrete packets. It turns out that LTP occurs if the presynaptic cell fires at a time when the postsynaptic cell is already strongly depolarized. Then – and only then – does a special glutamate receptor in the membrane of the postsynaptic cell – the NMDA receptor – come into play. In its resting state, the pore of this receptor, through which charged ions would otherwise flow into the cell to initiate an action potential, is blocked by a magnesium ion. However, prior depolarization of the cell, mediated by a glutamate-gated ion channel called the AMPA receptor, simultaneously “unplugs” the NMDA channel, allowing calcium ions to flow into the postsynaptic cell. This in turn induces the insertion of further AMPA receptors into the postsynaptic membrane, making the postsynaptic cell more sensitive to excitation – an “extremely cool” mechanism, as Kiebler points out.
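The NMDA receptor thus acts as a coincidence detector: calcium flows only when glutamate is bound and the membrane is already depolarized enough to expel the blocking magnesium ion. A minimal sketch of that two-condition logic (the voltage value and function names are illustrative assumptions, not physiological constants):

```python
# Toy coincidence detector modeled on the NMDA receptor.
# Ca2+ influx requires BOTH bound glutamate AND enough depolarization
# to relieve the Mg2+ block of the channel pore.

MG_BLOCK_RELIEVED_AT = -40.0   # assumed illustrative depolarization level (mV)

def nmda_calcium_influx(glutamate_bound, membrane_potential):
    """True only when both conditions for Ca2+ entry are met."""
    mg_unplugged = membrane_potential >= MG_BLOCK_RELIEVED_AT
    return glutamate_bound and mg_unplugged

# Glutamate alone, cell at rest: Mg2+ still plugs the pore.
print(nmda_calcium_influx(True, -70.0))   # False
# Glutamate plus prior AMPA-mediated depolarization: channel opens.
print(nmda_calcium_influx(True, -30.0))   # True
# Depolarization alone, no glutamate: nothing happens either.
print(nmda_calcium_influx(False, -30.0))  # False
```

The logical AND is the whole story here: only the coincidence of presynaptic release and postsynaptic depolarization triggers the calcium signal that ultimately strengthens the synapse.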
One further step is required to fix this change: Continuing stimulation sets a cascade of chemical reactions in train, and a second intracellular messenger triggers the synthesis of specific proteins that “permanently” enhance the synapse’s responsiveness: The potentiated synapse has now “learned” to react to very weak (and uncoupled) stimuli.