Two papers today because they're relatively quick reads, were published at the same time, are very similar, and are equally awesome (I'm also 5 behind on the whole 366 thing). First, a word about Dox. In 1992, Gossen and Bujard published a paper outlining how to drive or repress the transcription of certain genes with what's called a tetracycline-controlled transactivator (tTA). In the Tet-Off version of this system, the presence of doxycycline (Dox) inhibits specific genes from being transcribed; without Dox, those same genes are free to be transcribed. Both of these papers used this system in combination with the c-fos promoter to manipulate memory traces in a very interesting way. Without Dox, neurons that were actively transcribing c-fos, that is, highly active cells, would synthesize a certain protein of interest.
Garner et al. (2012) took advantage of this system to express hM3Dq, a synthetic receptor activated only by the synthetic ligand CNO (clozapine N-oxide). They did this by including Dox in the animals' diets until experiments began. Once Dox was removed, animals were placed into a novel context, where c-fos expression in neurons encoding this new memory also drove expression of hM3Dq. By injecting CNO, the authors could now activate that memory trace specifically.
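The tagging logic here boils down to a simple conditional: a neuron gets labeled only when Dox is absent AND the cell is active enough to drive the c-fos promoter. Here's a toy sketch of that logic (my own illustration, not anything from the papers; the threshold value and names are arbitrary stand-ins):

```python
# Toy model of Tet-Off, activity-dependent tagging. The numeric threshold
# is an illustrative stand-in for c-fos induction, not a real quantity.
ACTIVITY_THRESHOLD = 0.8

def is_tagged(dox_present: bool, activity: float) -> bool:
    """A cell expresses the transgene (e.g. hM3Dq) only if Dox is absent
    and its activity crosses the c-fos induction threshold."""
    return (not dox_present) and activity >= ACTIVITY_THRESHOLD

# On Dox: no labeling, no matter how active the cell is.
assert not is_tagged(dox_present=True, activity=0.95)
# Off Dox: only highly active cells (the putative engram) are labeled.
assert is_tagged(dox_present=False, activity=0.95)
assert not is_tagged(dox_present=False, activity=0.3)
```

The key design feature is the conjunction: Dox gives you temporal control (when labeling is possible), and c-fos gives you activity control (which cells get labeled).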
After labeling the memory trace for context A, the authors returned the mice to Dox to suppress any further labeling and, 24 hours later, introduced them to a fearful context B, where they would receive a foot shock. What I find really cool about this study is that CNO was injected during the fear learning in context B, so that the memory trace for the fearful context B consisted of both the population naturally driven by sensation and the neurons of context A's trace. This was confirmed by re-exposing animals to context B either with or without CNO. BOOM. Animals re-exposed to context B without CNO, i.e. with half of their memory trace for B inactive, demonstrated an impaired fear memory for this context. Injecting CNO during re-exposure resulted in fear memory that was comparable to controls.
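One way to think about this hybrid-trace result: recall strength tracks the fraction of the encoding-time ensemble that is reactivated at test. A toy sketch of that interpretation (my own assumption about how trace reactivation maps onto recall, not the authors' model):

```python
# Toy model: recall scales with the fraction of the cells that were
# active at encoding that are active again at test.
def recall_strength(encoding_ensemble: set, test_active: set) -> float:
    """Fraction of cells active at encoding that are active again at test."""
    return len(encoding_ensemble & test_active) / len(encoding_ensemble)

natural_B = {"b1", "b2", "b3"}     # cells driven by context B itself
artificial_A = {"a1", "a2", "a3"}  # context-A trace activated via CNO
hybrid = natural_B | artificial_A  # ensemble active during fear learning in B

# Test without CNO: only the natural B cells come back online -> half the trace.
assert recall_strength(hybrid, natural_B) == 0.5
# Test with CNO: the full hybrid ensemble is reactivated.
assert recall_strength(hybrid, natural_B | artificial_A) == 1.0
```

This framing makes the behavioral result intuitive: testing without CNO only reinstates half of the encoding ensemble, so fear expression is impaired; reinjecting CNO restores the full ensemble and full recall.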
Second, the authors performed a highly similar experiment, except rather than activating the trace for A while learning context B, the authors allowed for B’s trace to form naturally. Now, with two exclusive traces for A and B, activation of memory trace A in the learned context B resulted in an impaired memory, as the two traces competed to drive the animal’s behavior.
Third, the authors repeated the first experiment, except this time they also labeled a new trace for a third context, C, which was not incorporated into context B. When CNO was injected during re-exposure to B, all three memory traces (the naturally occurring trace for B, the trace for A that was incorporated into B, and the trace for C, which wasn't incorporated into B) were activated. This resulted in an impaired contextual memory as, again, multiple memory traces competed.
What I really like about this paper, as well as the next one, is the controls. The authors' last experiment involved labeling the trace for context B without fear conditioning. The next day, they fear conditioned in B with a CNO injection. Thus, the fear memory trace for B consisted of the labeled trace for B without shock and the naturally occurring trace for B with shock. Then, on testing, animals were either injected with CNO and exposed to B (full trace activated), or were not injected, but re-exposed to B (only the naturally occurring fear memory trace activated). Unlike in the first experiment, the CNO and non-CNO groups demonstrated a similar contextual memory. Thus, the authors conclude that the representations for two different contexts demonstrate little, if any, spatial overlap among the neural populations encoding them. Further, memory traces consist of those populations that are most active at the time of encoding.
One limitation of this study is that CNO activates any cell sufficiently active during encoding, no matter where in the brain it sits. So the sensory circuits activated during encoding will also be activated while CNO is circulating. Though the authors do show a clear correlation between hippocampal activation and memory performance, sensory contributions to behavior do not seem to be ruled out. Another limitation is that we can't visualize the traces labeled in A.
Liu, Ramirez et al. (2012) used the same concept of c-fos promoter labeling to drive the expression of ChR2 AND eYFP. Thus, not only can they activate the cells specifically, as in Garner et al., but they can also image the tagged engrams.
First, the authors habituated an animal to context A while on Dox. Then, they removed the Dox and fear conditioned the mouse to context B, thus tagging the trace for B with ChR2-eYFP. This resulted in a (beautiful) sparse label that could be activated by light stimulation. When re-exposed to the nonfearful context A, optical stimulation of the memory trace cells in the dentate gyrus for context B resulted in increased fear expression, demonstrating a memory for that context.
Confirming the results of Garner et al., the authors found that cells tagged with eYFP to a novel context C overlapped very little with cells expressing c-fos for context B. That is, the memory traces for contexts B and C overlap very little. These memory traces are composed of largely separate neural populations.
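To see why "overlap very little" is a meaningful claim, it helps to compare the observed overlap against the overlap expected by chance if the two populations were tagged independently. A toy calculation (my own made-up numbers, not the paper's cell counts):

```python
# Toy comparison of observed overlap vs. chance-level overlap between two
# labeled populations. Under independence, expected overlap is
# |tagged_C| * |active_B| / N.
def overlap_vs_chance(tagged_C: set, active_B: set, total_cells: int):
    """Observed overlap between two labeled populations, and the overlap
    expected by chance if labeling were independent."""
    observed = len(tagged_C & active_B)
    expected = len(tagged_C) * len(active_B) / total_cells
    return observed, expected

# Hypothetical numbers: 1000 granule cells, ~3% labeled in each context.
eyfp_C = set(range(0, 30))    # cells tagged (eYFP) during context C
cfos_B = set(range(28, 58))   # cells c-fos-positive in context B
obs, exp = overlap_vs_chance(eyfp_C, cfos_B, total_cells=1000)
# obs is 2 overlapping cells; chance expectation is 30 * 30 / 1000 = 0.9
```

The interesting comparison is whether the observed overlap sits at or near that chance level, which is what "largely separate populations" amounts to statistically.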
To ensure that the elevated fear response they observed in context A after optical stimulation was specific to the fear memory for context B, a group of animals was habituated to context A, then taken off of Dox and exposed to context C, and then fear conditioned to B (back on Dox, so B's trace was not labeled). When re-exposed to A, optical stimulation (i.e., activation of memory trace C) did not result in fear expression.
These two papers demonstrate that combining IEG promoters with a Tet-Off system allows for the specific labeling of a memory engram, which can then be manipulated. Hippocampal memory traces for different contexts consist of largely non-overlapping cells. Particularly interesting to me is that Liu, Ramirez et al. note that though stimulation of DG cells is sufficient for fear memory recall, it is not likely necessary. This illustrates a problem I've had with many ablation/loss-of-function studies, which is that the hippocampus is able to compensate for loss of function. So damaging it before anything has happened and then doing behavioral tests isn't likely to give you an amazing result. These two studies demonstrate how intact circuits work and will likely be better than other methods for demonstrating necessity. That is, once the memory is encoded, how does inhibiting the circuit without damaging it impact memory? The obvious limit now is deciding what is necessary for learning.