In recent years, I have been working to understand not only neurons but also astrocytes and their role in the brain. Research on the mode of action of astrocytes is dominated by a diversity of candidate molecules and pathways, and by an almost equal diversity of opinions about which pathway is the most important one. It is, however, clear that astrocytes sense many input molecules; there is a consensus that calcium might be a key player for intracellular signaling in astrocytes; and there are quite opposing views about the most relevant output pathways of astrocytes. In the following, I will discuss four recent papers on how astrocytes interact with neurons (and with blood vessels).
Norepinephrine Signals Through Astrocytes To Modulate Synapses
Do neuromodulators like noradrenaline act directly upon neurons, or are these effects mediated by, for example, astrocytes? In reality, it is not black or white, but an increasing number of scientists have acknowledged the potentially large role played by astrocytes as intermediates (see e.g. Murphy-Royal et al. (2023)). In this study, Lefton et al. (2024) from the Papouin lab use slice physiology to carefully dissect such a signaling pathway from neuromodulators via astrocytes to neurons.
It is rare to see such consistent and convincing evidence for a complex neuromodulation signaling pathway as presented in this paper. To drive home the main messages, the authors apply many controls and redundant approaches from pharmacology and optogenetics. They use three different tools for astrocyte silencing (iβARK, CalEx and thapsigargin), conditional and region-specific knockouts, and two-photon imaging to confirm their ideas. I think the paper is definitely worth the read. The main conclusion is that noradrenaline release in hippocampus silences presynapses of the CA3 -> CA1 pathway (the so-called Schaffer collaterals). This presynaptic effect is convincingly shown with several lines of evidence. The demonstrated mode of action of this pathway is the following: noradrenaline binds to alpha1-receptors of hippocampal astrocytes. These astrocytes release ATP, which is metabolized to adenosine. Adenosine in turn binds to the adenosine A1-receptor, which has been shown to be located at the CA3 -> CA1 presynapses, finally resulting in silencing of these synapses. Together, this cascade results in long-lasting synaptic depression on the timescale of minutes. Quite impressive work!
There are a few caveats to consider when interpreting the study. First, most of the work was done with a noradrenaline concentration of 20 µM in the bath. This is relatively high, especially given previous work that showed somewhat opposite effects for sub-µM concentrations (Bacon et al., 2020). One can speculate that the physiological effect of the pathway found by Lefton et al. may therefore be weaker and, instead of fully silencing the presynapses, rather tone down their relative importance compared to other inputs. The observed effect and signaling cascade are, however, interesting in themselves.
Second, Lefton et al. convincingly show that the presynapses are depressed after noradrenaline release. This finding is also accurately reflected in the title. However, in some places, the finding is reframed as an “update of weights” in a non-Hebbian fashion and a “reshaping of connectivity”. This description is not wrong, but a bit misleading, because these terms suggest an important role in memory and long-term potentiation, which is not how I would interpret the results. But this is just a minor detail.
Thinking about these results, I wonder how specific the effect is to the investigated CA3 -> CA1 synapses. It is an appealing idea to think that, e.g., synapses from entorhinal cortex (EC) onto CA1 might be less affected by this signaling pathway. This way, noradrenaline could be used to specifically reduce inputs from CA3 vs. inputs from EC. An obvious next step for a follow-up study would be to investigate the distribution of A1 receptors on different synapses, and the effect of noradrenaline via astrocytes on other projections to CA1.
Altogether, despite the caveats, this is really a nice paper, and it clearly shows the raw power of slice work when it is performed systematically and thoroughly. This work is particularly interesting as a companion paper describes a very similar pathway with noradrenaline, astrocytes and adenosine to silence not only neurons but also behavior (Chen et al., 2024).
A spatial threshold for calcium surge
Our own work has recently shown that astrocytic somata conditionally integrate calcium signals from their distal processes, and we have shown that the noradrenergic system is sufficient to trigger such somatic integration (Rupprecht et al., 2024). In this conceptually related paper, Lines et al. (2023) from the Araque lab similarly describe conditional somatic activation of astrocytes, which they term somatic “calcium surges”. However, they use distal calcium signals rather than noradrenaline levels to explain whether these somatic calcium surges occur or not.
Their main finding is a “spatial threshold”, i.e., a minimum fraction of distal astrocytic processes that need to be activated in order to trigger a somatic calcium surge. This is an interesting finding, which they validate both in vivo and in slices in somatosensory cortex. The authors quantify that activation of >23% of the arborization results in a somatic calcium surge. Although I like the attempt to be quantitative, which makes the results easier to compare across conditions, I believe that the precise value of this threshold is a bit over-emphasized in the paper. This specific value could change quite a bit with different imaging conditions, with different analysis tools, or when assessing the calcium signals volumetrically in 3D instead of in a 2D imaging plane. However, I still like the overall approach, and I think it is quite complementary to our approach focusing on noradrenaline as the key factor that controls somatic integration. In the end, these two processes – noradrenaline signaling and activation of processes – are not mutually exclusive; they are not only correlated with each other but also very likely affect each other causally.
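To make the logic of such a spatial threshold concrete, here is what this readout boils down to in code (a minimal Python sketch; the ROI structure and the way Lines et al. actually quantify the active fraction of the arborization are more involved):

```python
import numpy as np

def predicts_somatic_surge(process_active, threshold=0.23):
    """Toy version of the spatial-threshold idea: a somatic calcium
    surge is expected if more than ~23% of the astrocyte's arborization
    (here simplified to a boolean vector of process ROIs) is active."""
    return np.mean(process_active) > threshold

# Example: 9 of 30 process ROIs active -> 30% > 23% -> surge expected
rois = np.zeros(30, dtype=bool)
rois[:9] = True
print(predicts_somatic_surge(rois))  # True
```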
Figure 6 of the paper goes one step further by establishing a connection between somatic calcium surges, gliotransmission and subsequent slow inward currents in neurons. This connection is of potentially great interest; however, I don’t think that the authors do themselves a favor by addressing this question in a single short figure at the end of an otherwise solid paper. But other readers might have a different perspective on that. In any case, I can only recommend checking out this interesting study!
How GABA and glutamate activate astrocytes
It is well-known that activation of neuronal glutamatergic or GABAergic synapses also activates astrocytes. Cahill et al. (2024) from the Poskanzer lab investigated this relationship systematically in slices using localized uncaging of glutamate and GABA. In particular, the application or uncaging of glutamate led to quite strong activation of astrocytic processes and somata. Very interesting experiments! The authors find that events locally evoked by GABA or glutamate release propagate within – and across – astrocytes. This finding is, at least for me, quite unexpected, and I hope that it will be confirmed in future studies.
In addition, I believe that these experiments and results would be really useful to better understand somatic activation of astrocytes. Does simple stimulation with glutamate also result in somatic activation (in the spirit of “centripetal propagation” or “somatic calcium surges”), as one would expect from the analysis of Lines et al. (2023); or would it require additional input from noradrenaline, as our results (Rupprecht et al., 2024) seem to suggest? An – in my opinion – interesting question that could be addressed with this dataset.
Astrocytic calcium and blood vessel dilations
It is well-known that astrocytes, and in particular their endfeet, interact with blood vessels. However, there has been a longstanding debate about the nature of these interactions. A big confound is that the observables (blood vessel dilations and astrocytic endfeet activation) might be connected via correlative rather than causal processes. For example, both events might take place upon noradrenaline release but could be triggered independently by two separate signaling pathways without directly interacting.
In this fascinating paper, Lind and Volterra (2024) try to disentangle these processes by looking specifically at moments when the observed animals do not move. In this “rest” state, all these processes are less correlated with each other, enabling a better understanding of the natural sequence of events. In brief, the authors find that calcium signals in astrocytic endfeet seem to control whether a vessel dilation spreads across compartments or not. These analyses were enabled by imaging blood vessel dilation and astrocytic endfeet calcium in a 3D volume using two-photon microscopy in behaving mice. Great work!
Bacon, T.J., Pickering, A.E., Mellor, J.R., 2020. Noradrenaline Release from Locus Coeruleus Terminals in the Hippocampus Enhances Excitation-Spike Coupling in CA1 Pyramidal Neurons Via β-Adrenoceptors. Cereb. Cortex 30, 6135–6151. https://doi.org/10.1093/cercor/bhaa159
Cahill, M.K., Collard, M., Tse, V., Reitman, M.E., Etchenique, R., Kirst, C., Poskanzer, K.E., 2024. Network-level encoding of local neurotransmitters in cortical astrocytes. Nature 629, 146–153. https://doi.org/10.1038/s41586-024-07311-5
Chen, A.B., Duque, M., Wang, V.M., Dhanasekar, M., Mi, X., Rymbek, A., Tocquer, L., Narayan, S., Prober, D., Yu, G., Wyart, C., Engert, F., Ahrens, M.B., 2024. Norepinephrine changes behavioral state via astroglial purinergic signaling. https://doi.org/10.1101/2024.05.23.595576
Lefton, K.B., Wu, Y., Yen, A., Okuda, T., Zhang, Y., Dai, Y., Walsh, S., Manno, R., Dougherty, J.D., Samineni, V.K., Simpson, P.C., Papouin, T., 2024. Norepinephrine Signals Through Astrocytes To Modulate Synapses. https://doi.org/10.1101/2024.05.21.595135
Lind, B.L., Volterra, A., 2024. Fast 3D imaging in the auditory cortex of awake mice reveals that astrocytes control neurovascular coupling responses locally at arteriole-capillary junctions. https://doi.org/10.1101/2024.06.28.601145
Lines, J., Baraibar, A., Nanclares, C., Martín, E.D., Aguilar, J., Kofuji, P., Navarrete, M., Araque, A., 2023. A spatial threshold for astrocyte calcium surge. https://doi.org/10.1101/2023.07.18.549563
Rupprecht, P., Duss, S.N., Becker, D., Lewis, C.M., Bohacek, J., Helmchen, F., 2024. Centripetal integration of past events in hippocampal astrocytes regulated by locus coeruleus. Nat. Neurosci. 27, 927–939. https://doi.org/10.1038/s41593-024-01612-8
There is no recipe for discoveries, and there is no cookbook on how to publish a paper. But at least there are typical events and routes that are often encountered. Here, I’d like to share the trajectory of a study that we recently published in Nature Neuroscience (Rupprecht et al., 2024), with the hope that my account will be useful for those who have a similar path before them, and especially for those who may encounter these obstacles for the first time.
Conceiving a research project
When I joined the lab of Fritjof Helmchen at the University of Zurich in Summer of 2019, I was primarily interested in the role of pyramidal dendrites, and I was hoping to work on dendritic calcium imaging for my postdoc. However, at very short notice, Fritjof was looking for somebody to shoulder a project focused on calcium signals in hippocampal astrocytes, and he managed to convince me to give it a shot. At this point, we had a clear hypothesis (derived from the slice experiments of a PhD student), and I thought this could be a mini-project to get me started working with mice: doing my first surgeries, building a 2P microscope, and building my first behavioral rig.
The first technical problems
The initial plan was to perform calcium imaging of pyramidal neurons and astrocytes in the hippocampus of mice on a treadmill. I copied the treadmill design from the then-junior research group of Anna-Sophia Wahl and learned from her and other researchers how to implant a chronic window that enables imaging of the hippocampus of living mice. However, I soon ran into the first major problems.
First, in an attempt to perform dual-color imaging of astrocytes and neurons, I injected two viruses: one to express the red calcium indicator R-CaMP1.07 in neurons, the other to express the green calcium indicator GCaMP6s in astrocytes. To be sure, I replicated the procedures from a neighboring lab that had used this very same approach in cortex (Stobart et al., 2018). However, my attempts were not successful. I could express either R-CaMP in neurons or GCaMP in astrocytes, but not both at the same time. It seemed like a mutual exclusion pattern, due to phase separation or some sort of competition among the viruses. I learned that this has happened to others as well, but nobody seems to fully understand under which conditions it occurs. In any case, I gave up on dual-color imaging and simply performed calcium imaging of astrocytes to get started.
A second, more severe problem was my struggle with the interpretation of the observed calcium signals. The calcium signals were extremely weak and dim, and the astrocytes became only vaguely brighter during activity. I therefore focused on the only astrocytes that I could see, the very superficial ones. This turned out to be a mistake. After my first surgeries – and I waited only briefly before performing imaging experiments – there was a thin layer of reactive astrocytes at the surface between hippocampus or corpus callosum and the cover slip. These astrocytes were not only a bit larger than normal astrocytes, but also brighter, and responsive to even slightly increased laser power (Figure 1).
Figure 1. A reactive astrocyte with many long protrusions is activated by laser light. Different from typical astrocytic activation (see below), calcium does not propagate from distal to central compartments.
After several months of confusion and iterations, I suspected and confirmed that these astrocytes were activated not by behavioral circumstances but by the infrared imaging laser. I then improved my surgeries and focused the imaging on the deeper and much dimmer normal hippocampal astrocytes. But I remained suspicious about reactive astrocytes.
Lockdown / Covid-19
In March 2020, I had my first cohort of mice with nicely expressing astrocytes (in particular, non-reactive astrocytes!). I had recently improved my microscope in terms of collection optics, resolution and pulse dispersion. First tests under anesthesia were promising, and I was starting to habituate the animals to running on the treadmill. I was about to generate my first useful dataset! Then, Covid-19 hit. The Brain Research Institute, like all of the University of Zurich, was locked down, and I had to euthanize my mice and terminate the experiments. I went into home office and, not having acquired any useful data yet, instead worked on the analysis of existing data for other, independent projects that I expanded (Rupprecht et al., 2021).
In Autumn 2020, I finally prepared another cohort of animals, verified proper expression in astrocytes, and recorded my first dataset of mice running on a treadmill, while also recording body movement and running speed. At this point, it was already quite clear that my data did not contain any evidence to support the initial hypothesis that I had used as a starting point. So the project switched from hypothesis-driven to exploratory.
Looking at the data
My first decent recordings of astrocytes with calcium imaging were incomprehensible to me at first glance, and drowning in shot noise. The activity did not obviously correlate with behavior, at least from what I could tell when watching it live. I was a bit lost. One of the main problems I struggled with was the efficient inspection of raw data. Finally, I spent two days and wrote a Python-based script to browse through the raw data (not much in hindsight, but very useful to advance the project). To this end, I synchronized calcium data, behavioral videos of the mouse, and behavioral events such as sugar water rewards, spatial position or auditory cues. Then I carefully browsed through the data, something like 20-30 imaging sessions, each roughly 15 minutes long and of very variable recording quality. It took me roughly two weeks of focused work (Figure 2). I noticed that the random spontaneous activity of individual astrocytes did not correlate with anything. From the single trials where I found a correlation, I tried to build different hypotheses, but none of them held up to a critical test with the rest of the data. The only thing that was more or less consistent was an almost simultaneous activation of most astrocytes throughout the field of view.
Figure 2. Annotations of recordings after visual inspection of calcium recordings together with behavioral movies. In total, I took around six pages of such notes.
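In case you wonder what such a browsing script looks like: at its core, it is not much more than synchronized playback of the calcium movie, the behavioral video, and a cursor running over the aligned traces. A minimal sketch (all variable names hypothetical; my actual script contained more bells and whistles):

```python
import numpy as np
import matplotlib.pyplot as plt

def browse(calcium_movie, behavior_video, event_times, fps=10.0):
    """Play calcium frames and behavioral video side by side, with a
    cursor running over the mean calcium trace and event markers
    (e.g., sugar water rewards)."""
    n_frames = calcium_movie.shape[0]
    t_axis = np.arange(n_frames) / fps
    mean_trace = calcium_movie.mean(axis=(1, 2))  # global calcium signal
    fig, (ax_ca, ax_beh, ax_tr) = plt.subplots(1, 3, figsize=(12, 4))
    for t in range(n_frames):
        ax_ca.cla(); ax_ca.imshow(calcium_movie[t], cmap='gray')
        ax_ca.set_title('calcium imaging')
        ax_beh.cla(); ax_beh.imshow(behavior_video[t], cmap='gray')
        ax_beh.set_title('behavioral video')
        ax_tr.cla(); ax_tr.plot(t_axis, mean_trace, 'k')
        ax_tr.axvline(t / fps, color='r')      # current frame
        for ev in event_times:                 # behavioral events (in s)
            ax_tr.axvline(ev, color='g', ls='--')
        plt.pause(1.0 / fps)
```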
Given my bad experience with laser-induced activation of reactive superficial astrocytes, I was worried rather than happy. Was there an effect of switching on the laser, which led to a global activation of astrocytes due to accumulation of heat? I spent a few months investigating this potential artifact. I reasoned that these activations might be due to laser-induced heating, as described for slices (Schmidt and Oheim, 2020). So I warmed up the objective with a custom-designed objective heater (Figure 3). However, I did not observe astrocytic activation through heating. Together with more experiments, this made me start to believe that what I was seeing was real.
Figure 3. Custom-built device to heat up the objective, described in more detail in this blog post.
Another thing I noticed was that the animal, whether it was moving or not, often seemed to be quite aroused approximately 10 seconds before these activation patterns. This was difficult to judge and only based on my visual impression of the mouse. From these rather subjective impressions, I concluded that I should definitely monitor pupil diameter as a readout of arousal for my next batch of animals – which turned out to be essential for the further course of this project.
In hindsight, these observations seem pretty obvious. While I struggled with the conceptualization of the data, similar results and very clear interpretations were already in the literature, and not too well-hidden (Ding et al., 2013; Nimmerjahn et al., 2009; Paukert et al., 2014). The only problem: I did not know about them. I was definitely reading a lot of papers on astrocytes – but still driven by my initial hypothesis, which was focused on a slightly different subfield of astrocyte science that was somehow not connected at all to this other subfield. Only several months later, when I had confirmed my own results, did I notice that some of these results were already established, in particular the connection of astrocyte activation with arousal and neuromodulation.
First results
A key decision for the progress of this project was to drop all single-cell analyses for the moment. For a long time, I had been trying to find behavioral correlates for single astrocytes that were distinct from the global activity patterns, but I was unable to find anything robust. The main problem is that astrocytic activity is very slow. As a consequence, a single astrocyte will sample only a very small fraction of its activity space during a typical recording of 15 to 30 min. This makes it challenging to find any robust relationship with a fast-varying behavioral variable.
Therefore, I started analyzing the mean activity across all astrocytes in a field of view. This part of my analyses is now reflected in Figures 2-4 of the paper in its current form (Rupprecht et al., 2024).
After more in-depth analysis, validation and literature research, I still found these systematic analyses of the relationship between astrocytic activity and behavior or neuronal activity quite interesting and relevant. At the same time, I also realized that many of these findings had been made before: often in cortical astrocytes (Paukert et al., 2014), but partially also in Bergmann glia in cerebellum (Nimmerjahn et al., 2009), although not in hippocampus. Nowhere, however, did the description seem as systematic and complete as in my case. So I thought this could make a good case for a small study of somewhat limited novelty but with solid and beautiful descriptive work. I also felt that recently published work on hippocampal astrocytes had drawn misleading interpretations about the role of hippocampal astrocytes (Doron et al., 2022), an error that was easy to identify with my systematic analyses. So I started to make first drafts of figures.
A bold hypothesis
In Summer 2021, I had an interesting video call with Manuel Schottdorf, then located in Princeton and working in the labs of David Tank and Carlos Brody. Among other things, we discussed the role and purpose of the hippocampus. Specifically, we discussed the hippocampus as a sequence generator. I can trace this discussion topic back to the work on “time cells” by Howard Eichenbaum (Eichenbaum, 2014), but also to work from David Tank’s lab (Aronov et al., 2017). The potential connections of such sequences to theta cycles, theta phase shifting, replay events and reversed replay sequences seemed complicated and still opaque, but also highly interesting. I left the discussion with new enthusiasm about studying the function of the hippocampus.
A few days later, I went back to the analysis of astrocytic calcium imaging data from hippocampus, and to the analysis of single-cell activity. Out of curiosity, I checked for sequential activity patterns by sorting the traces according to their peaks. Indeed, I found a clear sequential activation pattern across astrocytes (Figure 4).
Figure 4. Apparent sequential activation of hippocampal astrocytes. This finding was later explained by subcellular sequences (centripetal) instead of population sequences. See also Fig. 5a of the main paper.
I expected this effect to be an artifact that can occur when sorting random, slowly varying signals, and therefore performed cross-validation (sorting on the first half of the recording, visualization on the second half), but the sequential pattern remained. I was a bit puzzled (why should astrocytes tile time in a sequence?), but also a bit excited. I went on to analyze recordings across multiple days and observed that the same astrocytes seemed to be active in the same sequences across days. Intriguing! This was a very unexpected finding. And, like most findings that are unexpected and surprising, it was wrong. But I was still excited and set up a meeting with my postdoc supervisor Fritjof to discuss the data and analyses.
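For those who want to check their own data for such spurious sequences: the cross-validation itself takes only a few lines (a minimal sketch; `traces` is a hypothetical 2D array of cells × timepoints):

```python
import numpy as np

def cross_validated_peak_sorting(traces):
    """Sort cells by their peak time in the first half of the recording,
    then return the second half in that order. A sequence that survives
    this split is not a trivial artifact of sorting slow, noisy traces."""
    n_time = traces.shape[1]
    first, second = traces[:, :n_time // 2], traces[:, n_time // 2:]
    order = np.argsort(np.argmax(first, axis=1))  # peak times, first half
    return second[order]  # visualize e.g. with plt.imshow(..., aspect='auto')
```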
Death of a hypothesis, birth of a new hypothesis
The evening before the planned meeting, I was questioning the results and performed further control analyses. For example, I specifically looked at astrocytes that were activated early in the sequences and those that were activated later. Was there any difference? I could see none.
I checked whether there was any spatial clustering of astrocytes that were temporally close in a sequence, but this did not seem to be the case. Finally, late in the evening, I wondered whether the sequences could be subcellular instead of across cells, for example always going from one branch of an astrocyte to another branch. To test this alternative hypothesis systematically, I came up with the idea to test the sequence timing on a single-pixel basis. Single-pixel traces were quite noisy, but it was quite clear to me that correlation functions would solve this problem (only a few years earlier, I had even written a blog post on the amazing power of correlation functions!). So, I used correlation functions to determine for each pixel in the FOV whether it was early or late in the sequence, using the average across the FOV as a reference. It took me an hour to write the code, and I let it run overnight on a few datasets. In the morning (as the next paragraph will show, I’m definitely not a morning person), I looked at the results, and at first glance I could not really see a pattern (Figure 5). In some way, I was relieved, because this was only a control analysis. I quickly put together a short set of PowerPoint slides and went to work.
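Stripped of all details, the pixel-wise analysis amounted to something like the following (a minimal, slow sketch to convey the idea; my actual implementation differed in details such as smoothing and normalization):

```python
import numpy as np

def pixelwise_delay_map(movie, max_lag=50):
    """For each pixel, cross-correlate its noisy trace with the average
    trace across the FOV and take the lag of the correlation peak as
    that pixel's delay. Negative delays indicate pixels that are active
    earlier than the FOV average."""
    n_t, n_x, n_y = movie.shape
    ref = movie.mean(axis=(1, 2))
    ref = (ref - ref.mean()) / ref.std()
    lags = np.arange(-max_lag, max_lag + 1)
    delay_map = np.zeros((n_x, n_y))
    for i in range(n_x):
        for j in range(n_y):
            px = movie[:, i, j]
            px = (px - px.mean()) / (px.std() + 1e-9)
            # full cross-correlation, restricted to lags around zero
            xcorr = np.correlate(px, ref, mode='full')
            xcorr = xcorr[n_t - 1 - max_lag : n_t + max_lag]
            delay_map[i, j] = lags[np.argmax(xcorr)]
    return delay_map  # plot as an image to obtain a delay map (Figure 5)
```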
Fritjof was quick as usual to understand all the details of my analyses and controls. He was intrigued, but with a good amount of scepticism as well (and I was still very sceptical myself). However, when I showed the results of the pixel-wise correlation function analysis, telling him that I did not see a pattern, he looked carefully and was a bit confused.
Figure 5. Pixel-based analysis of delays, showing that pixels distant from somata were activated earlier than somatic pixels. Of course, the rings were added only much later for visual guidance. Apart from that, this is exactly the picture which I showed to Fritjof. For context, check out our final version of the analysis in Fig. 5 of the published study.
He clearly saw the pattern: somatic regions of astrocytes were activated later, while distal regions were activated earlier. It took me several seconds to acknowledge that this was indeed true. Why had I not seen it myself? Maybe because the colormap that I used was not a great fit for my colorblind eyes; or because I had not slept a lot before looking at the data; or because I had introduced a small coding error that shifted these delay maps compared to the anatomy maps. A bit confused, I promised to look very carefully into this observation. And every analysis I did afterwards confirmed it. That’s how we made the central observation of our study – centripetal propagation. At this point, I was not yet fully convinced that this would be an important finding; interesting, for sure, but not necessarily the key to understanding astrocytes. I changed my mind, but only gradually.
Writing up a first paper draft
At the end of 2021, I decided to write everything up for a paper: a solid, descriptive piece of work. No fancy optogenetics, no messy pharmacology, no advanced behavior or learning, just a solid paper.
Oftentimes, my writing and analysis is an entangled process that can take months if it requires complex analyses and a lot of thinking. In this process, I found two additional interesting aspects about centripetal propagation that I had missed before. I noticed that sometimes propagation towards the soma seemed to start in the periphery of the astrocyte, only to fade out before reaching the soma. Very quickly, I realized that these fading events occurred primarily when arousal, as measured by pupil diameter, was low. It took me a bit more time to understand the relevance of this finding: arousal seemed to control whether centripetal propagation occurred or not.
The more I thought about it, the more I found it interesting. I had always been fond of dendritic integration mechanisms and how apical input to pyramidal cells gated certain events like bursts (Larkum, 2013). Here, I saw a similar effect at work, with somatic integration in astrocytes being gated by arousal.
Submitting the manuscript
After a few iterations with my postdoc supervisor Fritjof, we finished a first version of the manuscript. I presented the data at FENS in Paris in Summer 2022 and received positive feedback. Shortly after that, we submitted the manuscript to Cell. I’m rather hesitant to submit to the CNS triad (the journals Cell/Nature/Science). However, in this case I thought that the story of conditional somatic integration in astrocytes was so interesting that it did not need to be stretched and massaged in order to be fascinating for a broader audience. We selected Cell because they accept papers with a larger number of figures. After a reasonable amount of time, we got an editorial rejection; I was a bit disappointed but positively surprised that the editor provided a helpful explanation of why they did not consider the paper. We transferred the manuscript to Neuron, which would in my opinion have been a perfect fit for the paper, given manuscripts on related topics in the same journal in the past, but it got rejected with a standard reply. Quite disappointing! I decided to give it another try, with Nature Neuroscience. But first I rewrote the Abstract and the Introduction entirely, because I thought that they had been the weak parts of the initial submission. Luckily, the paper went into review. As is our lab policy, we uploaded the preprint of the manuscript to bioRxiv once it went into review (Rupprecht et al., 2022). This was in September 2022.
The reviewers’ requests
The reviewer reports came back ~80 days after submission. Four reviewers! The general tone was positive, appreciative and constructive. The editor had done a good job selecting the reviewers. Besides some additional analyses and easy experiments, the reviewers also asked for more “mechanistic insights” and additional (perturbation) experiments to dissect the “molecular” events underlying our observations. The reviewers asked for both pharmacological and optogenetic perturbation experiments. I had anticipated the requests for pharmacology and had started to write, in Summer 2022, an animal experiment license to cover those experiments. Fortunately, the license was approved by the beginning of 2023. I started pharmacology experiments with classical drugs affecting the noradrenergic system, e.g., prazosin, DSP-4, etc. Not all of these experiments worked, and all drugs exhibited strong side effects on behavior that confounded the effects we wanted to observe. This way, I learned (again) how messy pharmacology can be when it affects a complex system.
To reduce side effects, I planned to use a micropipette to inject, e.g., prazosin locally into the imaging FOV under two-photon guidance, through a hole drilled into the cover slip. This was an extremely difficult experiment that I did together with Denise Becker in early 2023. It worked only a single time, and since it showed only a small effect, interpretation was difficult. We decided to stop here and use only the previous pharmacology experiments for the paper, with an open discussion of the confounds on animal behavior (now described in the newly added Fig. 8e-f). Altogether, a lot of work with mixed results that were difficult to interpret. Underwhelming!
New collaborations, new experiments
Luckily, through the mediation of my colleague Xiaomin Zhang, I learned about Sian Duss, a PhD student in the lab of Johannes Bohacek who was working on the locus coeruleus. This seemed interesting because the locus coeruleus is one of the key players of arousal, and it would be nice to manipulate this brain region and see what happens to hippocampal astrocytes. It turned out that Sian had done this very same experiment already! She had used implanted fibers to optogenetically stimulate the locus coeruleus and to record from hippocampal astrocytes. And she had observed exactly what we would have predicted from our observations. We could have stopped here, and probably we would have gotten the paper through the revision at Nature Neuroscience.
However, I realized that we could try one more experiment, a bit more challenging, but much more interesting: to optogenetically stimulate the locus coeruleus and perform subcellular calcium imaging of astrocytes in hippocampus. And that’s what we did.
First, I wrote an amendment to our animal license to cover these experiments, and after a lot of tedious but efficient Swiss bureaucratic processes, we got it approved in Spring 2023. Sian and I immediately started the experiments, rather complicated surgeries with two virus injections, one angled fiber implant and one hippocampal window in transgenic mice. To cut it short, the experiments with Sian were very successful and their outcomes very interesting (check the paper for all the details!).
Final steps towards publication
At this point it was clear to me that the paper would very likely be accepted at Nature Neuroscience. All results of our additional experiments supported the initial findings fully and very clearly. It took me a few more months to analyze all the data, draft a careful rebuttal letter (I did not want to go into a second round of reviews) and re-submit to the journal in August 2023. After two months, we received a message from the journal with “accepted in principle”. Nice!
In the same email, the editors promised to send us a list of additionally required modifications from the editorial side. We received those almost two months later, with requests that concerned the title, some important wordings and the length of the manuscript (“please reduce the word count by 45%”). We followed up on that until January and returned the revised manuscript. Then, in March, we received the proofs, treated by a slightly over-motivated copy-editor, and it took me two evenings to fix these changes. In April 2024, exactly 617 days after our submission to Nature Neuroscience, the paper was published online.
Overlap with work of others
Over the duration of the project, I became only gradually aware of similar work, both ongoing and completed. For example, only during the project did I discover work from Christian Henneberger’s lab (King et al., 2020), which inspired the analysis of history-dependent effects of calcium signalling (Fig. 6f). And in Summer of 2023, I talked to his lab members during a conference in Bonn, which helped me refine the Discussion for the revised manuscript.
Specifically related to centripetal propagation, I noticed that such phenomena had already been observed in slices, but rather anecdotally, and hidden in a small supplementary figure (Bindocci et al., 2017). However, in Summer of 2023, a study appeared that showed, in somatosensory cortex, some of the same effects that we had reported in our preprint in 2022. I was only later informed that these findings had been obtained independently of our results (Fedotova et al., 2023).
There were also two relevant papers that I had missed entirely before acceptance of our manuscript. First, a study came out in Summer of 2023 (very shortly before we resubmitted our revised manuscript): this very interesting preprint from the Araque lab described somatic integration in cortical astrocytes (Lines et al., 2023). Second, after publication of our manuscript, my co-author Chris Lewis spotted a paper from 2014 that had actually described some of the observations that we thought we had made for the first time, in a small paper with analyses that seemed a bit anecdotal but solid (Kanemaru et al., 2014). I put these two papers on my list “I should have cited them and will definitely do so at the next opportunity!”
Future directions and follow-ups
One of the greatest parts of this project was the experiments done in 2023 with Sian Duss, an extremely skilled experimenter and great scientist. It turned out that she was eager to continue the collaboration to better understand the effects of the locus coeruleus on the hippocampus (and so was I). While doing experiments with her, so many interesting observations popped up that I find it hard to restrain my scientific curiosity and not dive into all of them, each probably worth a few years of intense scrutiny!
I’m very much looking forward to seeing what the future will bring; but I’m sure that there will always be at least a small (or large) part of my scientific work focusing on astrocytes.
References
Aronov, D., Nevers, R., Tank, D.W., 2017. Mapping of a non-spatial dimension by the hippocampal/entorhinal circuit. Nature 543, 719–722. https://doi.org/10.1038/nature21692
Bindocci, E., Savtchouk, I., Liaudet, N., Becker, D., Carriero, G., Volterra, A., 2017. Three-dimensional Ca2+ imaging advances understanding of astrocyte biology. Science 356, eaai8185. https://doi.org/10.1126/science.aai8185
Ding, F., O’Donnell, J., Thrane, A.S., Zeppenfeld, D., Kang, H., Xie, L., Wang, F., Nedergaard, M., 2013. α1-Adrenergic receptors mediate coordinated Ca2+ signaling of cortical astrocytes in awake, behaving mice. Cell Calcium 54, 387–394. https://doi.org/10.1016/j.ceca.2013.09.001
Doron, A., Rubin, A., Benmelech-Chovav, A., Benaim, N., Carmi, T., Refaeli, R., Novick, N., Kreisel, T., Ziv, Y., Goshen, I., 2022. Hippocampal astrocytes encode reward location. Nature 609, 772–778. https://doi.org/10.1038/s41586-022-05146-6
Eichenbaum, H., 2014. Time cells in the hippocampus: a new dimension for mapping memories. Nat. Rev. Neurosci. 15, 732–744. https://doi.org/10.1038/nrn3827
Fedotova, A., Brazhe, A., Doronin, M., Toptunov, D., Pryazhnikov, E., Khiroug, L., Verkhratsky, A., Semyanov, A., 2023. Dissociation Between Neuronal and Astrocytic Calcium Activity in Response to Locomotion in Mice. Function 4, zqad019. https://doi.org/10.1093/function/zqad019
Kanemaru, K., Sekiya, H., Xu, M., Satoh, K., Kitajima, N., Yoshida, K., Okubo, Y., Sasaki, T., Moritoh, S., Hasuwa, H., Mimura, M., Horikawa, K., Matsui, K., Nagai, T., Iino, M., Tanaka, K.F., 2014. In Vivo Visualization of Subtle, Transient, and Local Activity of Astrocytes Using an Ultrasensitive Ca2+ Indicator. Cell Rep. 8, 311–318. https://doi.org/10.1016/j.celrep.2014.05.056
King, C.M., Bohmbach, K., Minge, D., Delekate, A., Zheng, K., Reynolds, J., Rakers, C., Zeug, A., Petzold, G.C., Rusakov, D.A., Henneberger, C., 2020. Local Resting Ca2+ Controls the Scale of Astroglial Ca2+ Signals. Cell Rep. 30, 3466-3477.e4. https://doi.org/10.1016/j.celrep.2020.02.043
Larkum, M., 2013. A cellular mechanism for cortical associations: an organizing principle for the cerebral cortex. Trends Neurosci. 36, 141–151. https://doi.org/10.1016/j.tins.2012.11.006
Lines, J., Baraibar, A., Nanclares, C., Martín, E.D., Aguilar, J., Kofuji, P., Navarrete, M., Araque, A., 2023. A spatial threshold for astrocyte calcium surge. https://doi.org/10.1101/2023.07.18.549563
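Nimmerjahn, A., Mukamel, E.A., Schnitzer, M.J., 2009. Motor Behavior Activates Bergmann Glial Networks. Neuron 62, 400–412. https://doi.org/10.1016/j.neuron.2009.03.019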
Paukert, M., Agarwal, A., Cha, J., Doze, V.A., Kang, J.U., Bergles, D.E., 2014. Norepinephrine controls astroglial responsiveness to local circuit activity. Neuron 82, 1263–1270. https://doi.org/10.1016/j.neuron.2014.04.038
Rupprecht, P., Carta, S., Hoffmann, A., Echizen, M., Blot, A., Kwan, A.C., Dan, Y., Hofer, S.B., Kitamura, K., Helmchen, F., Friedrich, R.W., 2021. A database and deep learning toolbox for noise-optimized, generalized spike inference from calcium imaging. Nat. Neurosci. 24, 1324–1337. https://doi.org/10.1038/s41593-021-00895-5
Rupprecht, P., Duss, S.N., Becker, D., Lewis, C.M., Bohacek, J., Helmchen, F., 2024. Centripetal integration of past events in hippocampal astrocytes regulated by locus coeruleus. Nat. Neurosci. 27, 927–939. https://doi.org/10.1038/s41593-024-01612-8
Schmidt, E., Oheim, M., 2020. Infrared Excitation Induces Heating and Calcium Microdomain Hyperactivity in Cortical Astrocytes. Biophys. J. 119, 2153–2165. https://doi.org/10.1016/j.bpj.2020.10.027
Stobart, J.L., Ferrari, K.D., Barrett, M.J.P., Glück, C., Stobart, M.J., Zuend, M., Weber, B., 2018. Cortical Circuit Activity Evokes Rapid Astrocyte Calcium Signals on a Similar Timescale to Neurons. Neuron 98, 726-735.e4. https://doi.org/10.1016/j.neuron.2018.03.050
This is a blog post dedicated to those who start with calcium imaging and wonder why their live images seem to drown in shot noise. The short answer to this unspoken question: that’s normal.
Introduction
Two-photon calcium imaging is a cool method to record from neurons (or other cell types) while directly looking at the cells. However, almost everyone starting with their first recording is disappointed by the first light they see – because the images looked better, with more detail, crisper and brighter, in Figure 1 of the latest paper. What these papers typically show, however, is not a snapshot of a single frame, but a carefully motion-corrected and, above all, averaged recording.
In reality, it is often not even necessary to see every structure in single frames. One can still make efficient use of data that seemingly drown in noise, and you do not necessarily have to resort to deep learning-based denoising to make sense of them. Moreover, if you can see your cells very clearly in a single frame, it is in many cases even likely that either the concentration of the calcium indicator or the applied laser power is too high (both extremes can induce damage and perturb the neurons).
To demonstrate the contrast between a typical single frame and the beautiful averaged image used for presentations, here’s a gallery of recordings I made. On the left, a single imaging frame (often seemingly devoid of any visible structure). On the right, the average across many frames of the same recording. (And, yes, please read this on a proper computer screen for the details, not on your smartphone.)
Hippocampal astrocytes in mice with GCaMP6s
Here, I imaged hippocampal astrocytes close to the pyramidal layer of hippocampal CA1. Laser power: 40 mW, FOV size: 600 µm, volumetric imaging rate: 10 Hz (3 planes), 10x Olympus objective. From our recent study on hippocampal astrocytes, averaged across >4000 frames:
Pyramidal cells in hippocampal CA1 in mice with GCaMP8m
Here, together with Sian Duss, we imaged hippocampal pyramidal cells. Laser power: 35 mW, FOV size: 600 µm, frame rate: 30 Hz, 10x Olympus objective. Unpublished data, averaged across >4000 frames:
A single interneuron in zebrafish olfactory bulb with GCaMP6f
An interneuron recorded in the olfactory bulb of adult zebrafish with transgenically expressed GCaMP6f. Laser power <20 mW, 20x Zeiss objective, galvo-galvo-scanning. (Not shown: simultaneously performed cell-attached recording.) This is from the datasets that I recorded as ground truth for spike inference with deep learning (CASCADE). Zoomed in to a single isolated interneuron, averaged across 1000 frames:
A single neuron in zebrafish telencephalic region aDp with GCaMP6f
A neuron recorded in the telencephalic region “aDp” in adult zebrafish with transgenically expressed GCaMP6f. Laser power <20 mW, 20x Zeiss objective, galvo-galvo-scanning. (Not shown: simultaneously performed cell-attached recording.) This is from the datasets that I recorded as ground truth for spike inference with deep learning (CASCADE). Zoomed in to a single neuron, averaged across 1000 frames:
Population imaging in zebrafish telencephalic region aDp with GCaMP6f
Neurons recorded in the telencephalic region “aDp” in adult zebrafish with transgenically expressed GCaMP6f. Laser power <30 mW, 20x Zeiss objective, frame rate 30 Hz. Unpublished data, averaged across >1500 frames:
Sparsely labeled neurons in the zebrafish olfactory bulb with GCaMP5
Still in love with this brain region, the olfactory bulb. Here with sparse labeling of mostly mitral cells with GCaMP5 in adult zebrafish. This is one out of 8 simultaneously imaged planes, each imaged at 3.75 Hz, with this multi-plane scanning microscope. From our study where we showed stability of olfactory bulb representations of odorants (as opposed to drifting representations in the olfactory cortex homolog), averaged across 200 frames:
Population imaging in zebrafish telencephalic region pDp with OGB-1
Using an organic dye indicator (OGB-1), injected into and imaged from the olfactory cortex homolog in adult zebrafish. This is one out of 8 simultaneously imaged planes, imaged at 7.5 Hz each with this multi-plane scanning microscope. OGB-1, different from GECIs like GCaMP, comes with a relatively high baseline and a low ΔF/F response. The small neurons at the top not only look tiny, they are indeed very small (diameter typically 5-6 µm). Unpublished data, averaged across 200 frames:
Pyramidal cells in hippocampal CA1 in mice with R-CaMP1.07
These calcium recordings from pyramidal neurons in hippocampal CA1 exhibited non-physiological activity. Laser power: 40 mW, FOV size: 300 µm, 16x Nikon objective, frame rate 30 Hz. From our recent study on pathological micro-waves in hippocampus upon virus injection, averaged across >1500 frames:
Conclusion
I hope you liked the example images! Also, I hope that this comparison across recordings and brain regions will help to normalize expectations about what to get from a single frame from functional calcium imaging. If you are into calcium imaging, you have to learn to love the shot noise!
And you have to learn to appreciate the power of averaging to be able to judge your image quality. Only averaging can truly reveal the quality of the recorded images. If the image remains blurry after averaging thousands of frames, then the microscope indeed cannot resolve the structures. However, if the structures come out very clearly after averaging, the microscope’s resolution (and the optical access) are most likely good, and only the low number of photons is stopping you from seeing signals clearly in single frames (which is often, as this gallery demonstrates, not even necessary).
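If you want to convince yourself with your own data, the comparison shown throughout this gallery takes only a few lines (a minimal sketch; the file name and array layout are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

movie = np.load('recording.npy')  # hypothetical file, shape (frames, x, y)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(movie[0], cmap='gray')            # single frame: mostly shot noise
ax1.set_title('single frame')
# Averaging N frames improves the signal-to-noise ratio by a factor of
# sqrt(N), because photon shot noise is independent across frames.
ax2.imshow(movie.mean(axis=0), cmap='gray')  # average: anatomy emerges
ax2.set_title(f'average of {movie.shape[0]} frames')
plt.show()
```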
Here are a few recent papers from the field of computational and theoretical neuroscience that I think are worth the time it takes to read them. All of them are close to what I have been working on or what I am planning to work on in the future, but there is no tight connection among them.
The Neuron as a Direct Data-Driven Controller
In their preprint, Moore et al. (2024) provide an interesting perspective on how to think about neurons: rather than as input-output devices, neurons are described as control units. In their framework, these neuronal control units receive input as feedback about their own output in a feedback loop, which may involve the environment. In turn, the neurons try to control this feedback loop by adapting their output according to a neuron-specific objective function. To use the authors’ words, this scheme is “enabling neurons to evaluate the effectiveness of their control via synaptic feedback”.
These ideas have fascinated me for quite some time. For example, I have described similar ideas about the single-neuron perspective and the objective function of single neurons in a previous blog post. The work of Moore et al. (2024) is an interesting new perspective, not only because it clearly states the main ideas of the approach, but also because these ideas are shaped by the mathematical perspective of linear control theory.
To probe the framework, the paper shows how several disconnected observations in neurophysiology emerge in such a framework, like STDP (spike timing-dependent plasticity). STDP is a learning rule that has been found in slice work and has had a huge impact on theoretical ideas about neuronal plasticity. STDP can be dissected into a “causal” part (postsynaptic activity comes after presynaptic activity) and an “a-causal” part (presynaptic after postsynaptic). The a-causal part of STDP makes a lot of sense in the framework of Moore et al. (2024), since the presynaptic activity can in this case be interpreted as a meaningful feedback signal for the neuron. These conceptual ideas, which do not require a lot of math to understand, are – in my opinion – the main strength of the paper.
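For readers unfamiliar with the shape of this learning rule, the canonical STDP window can be written down in a few lines (a standard textbook parametrization, not the specific curve discussed by Moore et al.):

```python
import numpy as np

def stdp_window(dt_ms, a_plus=1.0, a_minus=0.5, tau_ms=20.0):
    """Weight change as a function of dt = t_post - t_pre.
    dt > 0: 'causal' part (post after pre) -> potentiation.
    dt < 0: 'a-causal' part (pre after post) -> depression."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / tau_ms),
                    -a_minus * np.exp(dt_ms / tau_ms))
```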
The proposed theoretical framework, however, also comes with limitations. It is based on a linear system; and I feel that the paper is too focused on mathematics and linear algebra, while the interesting aspects are rather conveyed in the non-mathematical part of the study. I found Figure 3, with a dissection of feedback and feedforward contributions in experimental data, quite unclear and confusing. And the mathematical or algorithmic procedure by which a neuron computes the ideal control signal given its objective function did not sound very biologically plausible to me (it involved quite a lot of complex linear-algebra transformations).
Overall, I think it is a very interesting and inspiring paper. I highly recommend reading the Discussion, which includes a nice sentence that summarizes this framework and distinguishes it from other frameworks like predictive coding: “[In this framework,] the controller neuron does not just predict the future input but aims to influence it through its output”. Check it out!
A Learning Algorithm beyond Backpropagation
This study by Song et al. (2024) includes several bold claims in the title and abstract. The promise is to provide a learning algorithm that is “more efficient and effective” than backpropagation. Backpropagation is the foundation of almost all “AI” systems, so this would be no small feat.
The main idea of the algorithm is to clamp the activity of input and output neurons to the teaching signals and wait until the activity of all layers in the middle converges (in a “relaxation” process), and then fix this configuration by weight changes. This is conceptually quite different from backpropagation, where the activity of output neurons is not clamped but compared to target activities, and the differences are mathematically propagated back to middle-layer neurons. Song et al. (2024) describe this relaxation process in their algorithm, which they term “prospective configuration” learning, as akin to the relaxation of masses connected via springs. They also highlight a conceptual and mathematical relation to “energy-based networks” such as Hopfield networks (Hopfield, 1982). This is an aspect that I found surprising, because such networks are well-known and less efficient than standard deep learning; so why is the proposed method better than traditional energy-based methods? I did not find a satisfying answer to this question.
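To make the “clamp, relax, then update” scheme concrete, here is a minimal sketch of one training step in an energy-based network in the spirit of predictive coding – explicitly not the authors’ exact algorithm, with all shapes and parameters chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear network x0 -> x1 -> x2 with energy
# E = 0.5*||x1 - W1 @ x0||^2 + 0.5*||x2 - W2 @ x1||^2
W1 = rng.normal(scale=0.1, size=(4, 3))
W2 = rng.normal(scale=0.1, size=(2, 4))

def train_step(x0, target, n_relax=100, lr_x=0.1, lr_w=0.01):
    global W1, W2
    x1 = W1 @ x0   # initialize hidden layer at its feedforward value
    x2 = target    # clamp the output to the teaching signal
    # 1) Relaxation: the hidden activity settles into a low-energy
    #    ("prospective") configuration while input and output stay clamped.
    for _ in range(n_relax):
        e1 = x1 - W1 @ x0              # prediction error, hidden layer
        e2 = x2 - W2 @ x1              # prediction error, output layer
        x1 -= lr_x * (e1 - W2.T @ e2)  # gradient descent on E w.r.t. x1
    # 2) Weight update: all weights change together to consolidate the
    #    relaxed configuration, instead of propagating errors backwards.
    e1, e2 = x1 - W1 @ x0, x2 - W2 @ x1
    W1 += lr_w * np.outer(e1, x0)
    W2 += lr_w * np.outer(e2, x1)

train_step(np.array([1.0, 0.0, -1.0]), np.array([0.5, -0.5]))
```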
One aspect that I found particularly compelling about prospective configuration was that the weights are updated not independently from each other but all together simultaneously, as opposed to backpropagation. Intuitively, this sounds like a very compelling idea. Come to think of it, it is surprising that backpropagation works so well although errors for each neuron are computed independently from each other. As a consequence, learning rates need to be incremental to prevent a scenario where weight changes in input layers make the simultaneously applied weight changes in deeper layers meaningless – a limitation of backpropagation. It seems that prospective configuration does not have this limitation.
Is this algorithm biologically plausible? The authors seem to suggest so between the lines, but I found it hard to judge, since they do not match the bits and pieces of their algorithm to biological entities. Given the physical analogies (“relaxation”, “springs”), I would expect that weights in these energy-based networks are symmetric (which is not biologically realistic). The energy function (Equation 6) seems to be almost symmetric, and I find it hard to imagine this algorithm working properly without symmetric weights. The authors discuss this issue briefly in the Discussion, but I would have loved to hear the opinion of experts on this topic. One big disadvantage of the journal Nature Neuroscience is that it does not provide open reviews. Apparently, the paper was reviewed by Friedemann Zenke, Karl Friston, Walter Senn and Joel Zylberberg, all of whom are highly reputed theoreticians. It would have added a lot to read the opinions of these reviewers from relatively diverse backgrounds.
Putting these considerations aside, do these prospective configuration networks really deliver what they promise? It’s hard to say. In every single figure of this extensive paper, prospective configuration seems to outcompete standard deep learning in basically all aspects – less catastrophic forgetting, faster target alignment, et cetera. In the end, however, the algorithm seems to be computationally too demanding to be an efficient competitor for backpropagation as of now (see the last part of the Discussion). The potential solutions to circumvent this difficulty do not sound too convincing at this stage. I would have been really glad to read a second opinion on these points, which are rather difficult to judge just from reading the paper. Again, it would have been very helpful to have open reviews.
Overall, I found the paper interesting and worth the read. Without second opinions, however, I found it difficult to properly judge its novelty (in comparison to related algorithms such as “target propagation”, briefly mentioned in the paper; Bengio, 2014) and its potential impact relative to standard deep learning (possibility to speed up the algorithm; ability to generalize). Let me know if you have an opinion on this paper!
Continuous vs. Discrete Representations in a Recurrent Network
In this study, Meissner-Bernard et al. (2024) investigate a specific biological circuit that has been thought of as a good model for attractor networks: the zebrafish homologue of the olfactory cortex. The concept of discrete attractors mediated by recurrent connections has been highly influential for more than 40 years (Hopfield, 1982), and circuits like the olfactory cortex, which exhibit strong recurrent connections, were early on considered good substrates for such attractor dynamics (Hasselmo and Barkai, 1995). Here, Meissner-Bernard et al. (2024) investigate how such a recurrent network model is affected by the implementation of precise synaptic balance. What is precise balance?
Individual neurons receive both excitatory and inhibitory synaptic inputs. In a precisely balanced network, these inputs of opposite influence are balanced for each neuron, and precisely so in time. Somewhat surprisingly, Meissner-Bernard et al. (2024) find that a recurrent network that implements such a precise balance does not exhibit discrete attractor dynamics, but locally constrained dynamics that result in continuous rather than discrete sensory representations. The authors include a nice control by showing that the same network without this precise balance, with globally tuned inhibition instead, does indeed exhibit discrete attractor dynamics.
One interesting feature of this study is that the model is constrained by a lot of detailed results from neurophysiological experiments. For example, the experimental results of my PhD work on precise synaptic balance (Rupprecht and Friedrich, 2018) were one of the main starting points for this modeling approach. Not only this, but also other experimental evidence used to constrain the model had been acquired in the same lab where the theoretical study by Meissner-Bernard et al. (2024) was conducted. Moreover, the authors suggest in the outlook section of the Discussion to use EM-based connectomics to dissect the neuronal ensembles in this balanced recurrent circuit. The lab of Rainer Friedrich has been working on EM-connectomics with synaptic resolution for more than a decade (Wanner and Friedrich, 2020). It is interesting to see a line of research that not only spans decades of work with various techniques such as calcium imaging (Frank et al., 2019), whole-cell patch clamp (Blumhagen et al., 2011; Rupprecht and Friedrich, 2018) and EM-based connectomics, but that also attempts to connect all perspectives using modeling approaches.
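Bengio, Y., 2014. How auto-encoders could provide credit assignment in deep networks via target propagation. arXiv:1407.7906. https://doi.org/10.48550/arXiv.1407.7906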
Blumhagen, F., Zhu, P., Shum, J., Schärer, Y.-P.Z., Yaksi, E., Deisseroth, K., Friedrich, R.W., 2011. Neuronal filtering of multiplexed odour representations. Nature 479, 493–498. https://doi.org/10.1038/nature10633
Frank, T., Mönig, N.R., Satou, C., Higashijima, S., Friedrich, R.W., 2019. Associative conditioning remaps odor representations and modifies inhibition in a higher olfactory brain area. Nat. Neurosci. 22, 1844–1856. https://doi.org/10.1038/s41593-019-0495-z
Hasselmo, M.E., Barkai, E., 1995. Cholinergic modulation of activity-dependent synaptic plasticity in the piriform cortex and associative memory function in a network biophysical simulation. J. Neurosci. 15, 6592–6604. https://doi.org/10.1523/JNEUROSCI.15-10-06592.1995
Hopfield, J.J., 1982. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79, 2554–2558. https://doi.org/10.1073/pnas.79.8.2554
Meissner-Bernard, C., Zenke, F., Friedrich, R.W., 2024. Geometry and dynamics of representations in a precisely balanced memory network related to olfactory cortex. https://doi.org/10.1101/2023.12.12.571272
Moore, J., Genkin, A., Tournoy, M., Pughe-Sanford, J., Steveninck, R.R. de R. van, Chklovskii, D.B., 2024. The Neuron as a Direct Data-Driven Controller. https://doi.org/10.1101/2024.01.02.573843
Rupprecht, P., Friedrich, R.W., 2018. Precise Synaptic Balance in the Zebrafish Homolog of Olfactory Cortex. Neuron 100, 669-683.e5. https://doi.org/10.1016/j.neuron.2018.09.013
Song, Y., Millidge, B., Salvatori, T., Lukasiewicz, T., Xu, Z., Bogacz, R., 2024. Inferring neural activity before plasticity as a foundation for learning beyond backpropagation. Nat. Neurosci. 27, 348–358. https://doi.org/10.1038/s41593-023-01514-1
Wanner, A.A., Friedrich, R.W., 2020. Whitening of odor representations by the wiring diagram of the olfactory bulb. Nat. Neurosci. 23, 433–442. https://doi.org/10.1038/s41593-019-0576-z
How does the brain work and how can we understand it? To view this big question from a broad perspective, I’m reporting some ideas about the brain that marked me most during the past twelve months and that, on the other hand, do not overlap with my own research focus. Enjoy the read! And check out previous year-end write-ups: 2018, 2019, 2020, 2021, 2022, 2023, 2024.
Introduction
During the past year, I started my own junior group at the University of Zurich and spent some time finishing up postdoc projects. After a laborious revision process with several challenging experiments, our study on hippocampal astrocytes and how they integrate calcium signals, my main postdoc project with Fritjof Helmchen, has now been accepted (in principle) for publication, and I’m looking forward to venturing into new projects.
For this year-end write-up, I want to discuss the book “The Unfolding of Language” by Guy Deutscher, which I read this summer. The book is primarily about linguistics and the evolution of languages, but at the end of this blog post, I will connect some of its ideas with current lines of research in systems neuroscience.
Languages as a pathway to understand thinking
After high school, my main interest was to understand how thoughts work. Among the various approaches to this broad question, I felt that I needed to choose between at least two paths. First, to study the physiology of neuronal systems, in order to understand how neurons connect, how they form memories, concepts, ideas, and finally thoughts. Second, to study the matrix that shapes our thoughts and that is itself formed by the dynamics and connections in our brain: language. I decided on the first path, but I always remained curious about how to use language to understand our brain.
I like languages, especially from the theoretical perspective. I was a big fan of Latin in high school and particularly enjoyed diving into the structure of the language, that is, its grammar. My mother tongue, German, also offered a lot of interesting and complex grammar to explore. And when I read through the books of J.R.R. Tolkien on Middle-earth, I was fascinated by all the world-building, but in particular by the languages that he had invented. I filled a few notebooks with (less refined) attempts to create my own languages. Apart from the beautiful words and the intriguing sounds, I was especially into the structure and grammar of these languages. – Later, I became relatively proficient in a few computer languages like C, Python, LaTeX, HTML or Matlab. I briefly tried to learn Chinese for a year, but although I was fascinated by the logical grammatical edifice, I did not have the same energy to internalize the look-up tables of characters and sounds. Occupied with neuroscience and mostly reading and writing scientific English (a poor rip-off of proper English), I lost touch with the idea of using language to understand our thoughts and, therefore, the brain.
“The Unfolding of Language”
Then, in the middle of last year, I came across The Unfolding of Language in a small but excellent bookstore at Zurich main station. I was captivated immediately, and I enjoyed the book for several reasons.
First, the book abounds with examples of how language evolves, of the origins of words and grammatical structures. Many details that I had never bothered to consider suddenly made sense, like some old German words, the French conjugation of verbs, or a certain declension of a Latin word. In addition, I learned a few surprising new principles about other languages, for example, the structure underlying the Semitic languages, on which the author is an expert.
Second, the depiction of the evolution of language revealed a systematic aspect to these changes, even across languages from different language families. I did have some basic knowledge about the sound changes that had been observed around 1800 and described by Grimm’s law. However, Guy Deutscher’s book did not describe these systematic changes as artifacts of our history but as a natural process that occurred like this, or similarly, in many different languages, independently of each other. To some extent, reading about all these systematic and – to some extent – inevitable changes made me more relaxed about the distortions that language undergoes when used by younger generations or by people who freely mix in loanwords from other languages; language is a non-static, evolving object. But what does the evolution of language, as described by Guy Deutscher, actually look like?
The three principles of the evolution of language
Deutscher mentions three main principles underlying the evolution of language. The strong impression the book left on me came not only from these principles of language evolution, but even more from the wealth of examples across languages, word types and grammars that it provides.
First principle: Economy
“Economy”, according to Deutscher, mostly originates from the laziness of language users, resulting in the omission or merging of words. One of the examples given in the book is the slow erosion of the name of the month August from the Latin augustus over the Old French aost to août (pronounced /ut/) in modern French. Such erosion appeared as the most striking feature of the evolution of language when it was discovered two centuries ago. It was an intriguing observation that the grammar of old languages like Latin or Sanskrit seemed so much more complex than the grammar of newer languages like German, Spanish or English. This observation led to the idea that language is not simply evolving but rather decaying.
Deutscher, however, describes how this apparent decay can also be the source and driving force for the creation of new structures. One particularly compelling example is his description of how the French conjugation of the future tense evolved. It is difficult to convey this complicated idea in brief terms, but it revolves around a late Latin expression like amare habeo (“I have to love”), whose meaning shifted towards “I will love” and which was transferred to French, where the future tense therefore contains the conjugation of the present tense of the word “to have”:
j’aimerai (I will love) / j’ai (I have)
tu aimeras (you will love) / tu as (you have)
il aimera (he will love) / il a (he has)
nous aimerons (we will love) / nous avons (we have)
vous aimerez (you will love) / vous avez (you have)
ils aimeront (they will love) / ils ont (they have)
I knew both Latin and French quite well, but this connection struck me as both surprising and compelling. Deutscher also comes up with other examples of how grammar was generated by erosion, but you will have to read the book to make up your own mind.
Second principle: Expressiveness
Expressiveness comes from the desire of language users to highlight and stress what they want to say, in order to overcome the natural inflation of meaning through usage. Typical examples are the words “yes” and “no”, which are so short and common that they are often enhanced for emphasis (“yes, of course!” or “not at all!”).
A funny example given by Deutscher is the French word aujourd’hui, which means “today” and is one of the first words a beginner learns in French. Deutscher points out that this word was derived from the Latin expression hoc die (“on this day”), which eroded to the Old French word hui (“today”). To emphasize the word more strongly, people started to say au jour d’hui, which basically means “on the day of this day”. Later, au jour d’hui eroded to aujourd’hui. Nowadays, French speakers have started using au jour d’aujourd’hui to put more emphasis on the expression. The expression thus means “today” but literally decodes to “on the day of the day of this day”. This example illustrates the close interaction of the expressiveness principle and the erosion principle. And it shows that we carry these multiple layers of eroded expressiveness with us, usually without noticing.
Third principle: Analogy
“Analogy” occurs when humans observe irregularities of language and try to impose rules in order to get rid of exceptions that do not really fit in. For example, children might say “the ship sinked” instead of “the ship sank”. Through erosion, language can take a shape that no longer makes sense (because the evolutionary history that would explain this shape is not obvious), and we counteract by imposing some structure.
Metaphors are the connection between the physical world and abstraction
But there is one more ingredient that, according to Deutscher, drives the evolution of language. This is the aspect that I found most interesting and most closely connected to neuroscience: metaphors. The idea might sound surprising at first, but once it unfolds by means of examples, it becomes more and more convincing. Deutscher depicts metaphors as a way – actually, the way – for the meaning of words to become more abstract over time.
He takes examples of everyday language and then dissects the words as having roots in the concrete world. These roots have been worn away by usage but can still be seen through the layers of erosion and inflation of meaning. For instance, the word “abstraction” comes from the Latin abs and trahere, which basically means “to pull something off of something”, that is, to remove a word from its concrete meaning. “Pulling something off”, on the other hand, is something very concrete, rooted in the physical world.
Such abstraction of meaning is most obvious for loanwords from other languages (here, from Latin). But Deutscher brings up convincing examples of how this process also occurs for words that evolved within a given language. To give a very simple example that also highlights what Deutscher means by metaphor: in the expression “harsh measures”, “harsh” has roots in the physical world, describing for example the roughness of a surface (“rough” or, originally, “hairy”). Later, however, “harsh” was applied to abstract concepts such as “measures” – originally as a metaphor, though we no longer perceive it as such. Deutscher recounts many more examples, which, in their simplicity, are sometimes quite eye-opening. He makes the fascinating point that all abstract words are rooted in the physical world and are therefore mostly dead metaphors. And how could one not agree with this hypothesis? What else could be the origin of a word if not physical reality?
How metaphors create the grammar of languages
Deutscher, however, goes even beyond this idea and posits that abstraction and metaphors may also have created more complex aspects of language. For example, most languages distinguish three perspectives: me, you and they. He makes the point that these perspectives might have derived from demonstratives: “here” transforms to “me”, “there” to “you”, and a third word for something more distant, “over there”, to “they”. All of these are “pointing” words, probably deriving from and originally accompanying pointing gestures. In English, the third kind of word does not clearly exist, but Japanese, for example, features the threefold distinction between koko (“here”), soko (“there”) and asoko (“over there”). In Latin, the third category is represented by the word ille, which refers to somebody more distant. As a nice connection, the Latin ille was the origin of the French il/ils (“he”/“they”). This shows how the metaphorical use of words related to the physical world (pointing words) can generate abstract concepts like grammar, here the third person. Deutscher also brings up languages where the connection between the three demonstratives for persons at variable distances and the pronouns for me/you/they is more directly visible, for example Vietnamese.
Therefore, Deutscher plausibly demonstrates how not only abstract words, but also more complex structures underlying the most basic grammar, evolved from the metaphorical usage of concrete words related to the physical world, either because they describe physical things (the roughness of a surface) or because they were originally mere enhancements of gestures (pointing words). These ideas are among the most interesting I have encountered in quite some time.
Embedding of “The Unfolding of Language” in current research
Overall, I like the hypotheses presented by Deutscher. The only issue I have is that many bits and pieces are missing, which prevents me from properly seeing through all potential weaknesses. I simply don’t know whether these questions are unanswered in general or whether Deutscher did not have enough space to treat them in this (popular science) book. Put differently, the book was an engaging and fascinating read but did not provide useful links to the research literature. How are these ideas embedded in current linguistics research, or are they Deutscher’s own concepts? I noticed some links to ideas like conceptual metaphor or linguistic relativity, but I was unable to figure out where to get started if I wanted to dig deeper into the role of metaphorical use in the development of grammar and abstraction. If a linguistics expert happens to read this blog post, I’d be really happy to get a recommendation for a standard textbook (if there is one) on these topics and on how Guy Deutscher’s work fits in.
Metaphors, abstraction and systems neuroscience
However, I’d like to briefly discuss an aspect of the metaphor principle that I found particularly interesting, also because of a potential, albeit loose, link to current research in neuroscience. Many neuroscientists are probably aware of the large branch of neuroscience dedicated to understanding the generation and representation of abstract knowledge. This can take very different forms, but one of the most prominent ideas is that the hippocampus and the entorhinal cortex, originally shown to represent space (via the famous place cells and grid cells, respectively), also represent more abstract knowledge.
The entorhinal cortex represents physical space using the above-mentioned grid cells, which span space with hexagonal lattices of varying spatial scales. Building upon the hexagonal-lattice idea, researchers have attempted to apply the concept experimentally to more abstract 2D spaces. Such an abstract conceptual space would be spanned not by the physical axes “x” and “y” but, for example, by the conceptual axes “neck length” and “leg length” of a bird. I have to admit that I was not fully convinced by this approach, but it is interesting nevertheless.
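As a side note, the hexagonal firing pattern itself is easy to generate; a common textbook construction sums three plane waves oriented 60 degrees apart. A toy sketch (the spatial scale is arbitrary, and real grid cells are of course messier):

    import numpy as np

    scale = 0.3                                      # grid spacing, arbitrary units
    xs = np.linspace(0, 1, 200)
    x, y = np.meshgrid(xs, xs)

    rate = np.zeros_like(x)
    k = 4 * np.pi / (np.sqrt(3) * scale)             # wave number for this spacing
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):    # three waves, 60 degrees apart
        rate += np.cos(k * (np.cos(theta) * x + np.sin(theta) * y))
    rate = np.maximum(rate, 0)                       # rectify into discrete firing fields
    print(f"rate map {rate.shape}, peak value {rate.max():.2f}")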
Apart from these experimental approaches, researchers have developed theories about the representation of abstract knowledge based on hexagonal lattices (for example theories, see Hawkins et al., 2019 or Whittington et al., 2020). Or check out this review of the experimental literature, which concludes that grid cells reflect the topological relationships of objects, with these relationships being defined either via space or via more abstract connections. These approaches have in common that abstract concepts are built upon a neuronal scaffold, which in turn is provided by the representation of physical space.
In Deutscher’s book, I found a description that paralleled the above ideas from systems neuroscience, enough to make it intriguing. First, Deutscher posits that all words that now describe abstract relationships are rooted in meanings that represent physical space. More precisely, words that were originally used to describe physical relationships (e.g., “outside”) were later taken to describe an abstract relationship (“outside of sth”, with the same meaning as “apart from sth”). We don’t even notice the metaphorical usage here because it is so common, even across languages (hormis in French, utenom in Norwegian, ausserdem in German). Deutscher highlights one specific field where the transition from spatial to more abstract descriptors is very apparent: “before”, “after”, “around”, “at” or “on” are all prepositions that describe physical relationships but were later on assigned to temporal relationships as well. It seems quite obvious and not worth any further thought, but is it really?
Deutscher not only suggests that spatial relationships are more basic than temporal relationships, but he strengthens this point by showing that words describing physical relationships derived from something even simpler: one’s own body. For example, “in front of” derives from the French word front (“forehead”). Deutscher brings up many more examples from diverse languages that reveal how the language describing abstract relationships can be traced back to body parts. He therefore hypothesizes that body parts (“forehead”, “back”, “feet”, etc.) were originally used to establish a system of spatial relationships, which was then applied to temporal and other more abstract relationships. It would be interesting to pursue these lines of thought in systems neuroscience (for example, egocentric vs. allocentric coding of position, or how temporal sequences are required to define abstract knowledge representations).
One of the open questions here concerns the potential interaction between abstractions, language and neuronal representations. Was this connection already implemented at the neuronal level before the inception of language, such that language only had to use this analogy generator? Researchers who work on abstract knowledge representation in animals like mice would probably say so. Or was it only through language that abstraction was enabled? I find both possibilities equally likely. Often, we cannot use a concept or idea efficiently before we put it into an expression – concepts remain vague and difficult to judge until we pour them into clear thoughts, ideally written down as a consistent and concise set of sentences.
In the end, I find these ideas about the evolution of language extremely interesting, especially because they relate to the generation of abstract relationships (temporal relationships, grammar, or fully abstract concepts). From my point of view, two directions could be worth further research: first, to dig into the status of linguistic research on these topics; and second, to understand whether there are meaningful parallels between abstraction in neuronal representations and abstraction as it arises within an evolving and eroding language. In any case, I can fully recommend this book to anybody who is not afraid of foreign languages and a bit of grammar.
P.S. I read the German version of Deutscher’s book. It is not simply a translation of the English original but provides many additional examples from German that enhance or replace the examples taken from English. This was done in an excellent manner, and I can only recommend that German-speaking readers of this blog read the translation rather than the original version.
Twitter used to be (and still is to some extent) a source of useful information for neuroscientists: technical details, clarifications of research findings and open discussions that cannot be obtained so easily otherwise. Here is a list of some of these gems that have made it into my bookmarks; I’m posting them here in order to archive their content in at least some detail. And yes, this list is mostly for my own reference, but it might be interesting for others as well.
Munir Gunes Kutlu asks which camera to use to record mouse behavior compatible with posture tracking. Among the recommendations are Raspberry Pi cameras as the budget option (I have used those together with Sian Duss for pupillometry and was happy with them); the more expensive PointGrey Chameleon3 cameras; WhiteMatter cameras, where up to 15 cameras can be connected to a computer with a hub; the Basler cameras, used as a reliable option with few frame drops in the Helmchen lab; and the uEye XCP camera as a low-cost option that is sufficiently good for academic purposes. It was mentioned that it is important to know beforehand whether single dropped frames are problematic or not. This is indeed important when simultaneously recording longer chunks of behavior and neuronal activity, in order to be able to synchronize two or more input streams.
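As a toy illustration of why dropped frames matter for synchronization: if the camera exposes on a hardware trigger, gaps in the recorded trigger timestamps reveal the drops after the fact. A minimal sketch with simulated timestamps (frame rate and drop positions are invented):

    import numpy as np

    fps = 30.0
    frame_times = np.arange(0, 10, 1 / fps)            # ideal trigger timestamps
    frame_times = np.delete(frame_times, [100, 101])   # simulate two dropped frames

    dt = np.diff(frame_times)
    for g in np.flatnonzero(dt > 1.5 / fps):           # gaps larger than 1.5 frames
        n_missing = int(round(dt[g] * fps)) - 1
        print(f"{n_missing} frame(s) dropped after t = {frame_times[g]:.3f} s")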
Tobias Rose and Jérôme Lecoq discuss where to buy cheap UV curing lights (often used for surgeries with mice to cure dental cement). Tobias bought his curing light from Aliexpress; Jérôme recommends one from McMaster-Carr and advises using protective goggles in any case. Luke Sjulson adds that blue (not UV) curing lights are state of the art, and that they can be easily found by googling for “dental curing wand”. This made me remember a previous discussion involving Luke where curing lights and adhesives were also intensely discussed, and where he recommended switching from Metabond to Optibond (both are used as the ‘the “luting” layer between bone and acrylic’).
Guy Bouvier asks around for experience with iontophoresis pumps for AAV injections. I was completely unaware of this technique, which is apparently ideal for very controlled, small injections with less damage, the “gold standard for classical anterograde tracing” (according to Thanh Doan). Maximiliano Nigro seems experienced with this technique and recommends a specific precision pump.
Matthijs Dorst asks for recommendations on oxygen concentrators vs. oxygen cylinders for isoflurane anesthesia stations for mouse surgeries. Oxygen concentrators seem to work quite well, except for being rather noisy. Apart from the usual medical equipment brands (e.g., VetEquip), some reported using small fish tank pumps as a replacement for oxygen concentrators. It was mentioned that compressed air might work as well as compressed oxygen, but others noted that for very long procedures (>2 h), mice started developing cataracts when compressed air alone was used. I also learned some time ago that letting mice inhale pure oxygen (without the isoflurane) after a surgery can improve and speed up the recovery from anesthesia.
On pre-Musk Twitter, Eleonore Duvelle used to initiate a lot of interesting discussions about rodent behavior and extracellular electrophysiology in hippocampus. She deleted these Twitter posts some time ago but is now very active on Mastodon. In addition, she has archived her Twitter threads on Github and has annotated some selected discussions in this Mastodon thread.
Software
Joy A. Franco asks for tools to view z-stacks in 3D. Responses included the commercial and widely used Imaris and the free Imaris Viewer; 3Dscript and ClearVolume, two plugins for the open image analysis environment FIJI; other free options such as ChimeraX, AGAVE and FPBioimage; and the Python-based napari. For very large (EM) datasets, there is Neuroglancer, developed by Google. I wish I had the time to test all these options and compare them against each other!
Maxime Beau highlights an interesting tool, the scientific Inkscape extension. Adobe Illustrator has long been the best tool for vector-graphics-based scientific illustrations, but Inkscape was always a good and free alternative. Affinity Designer is a newer and cheaper alternative that became more attractive when Adobe transformed into an annoying and very expensive cloud-based subscription a few years ago. In my opinion, all three tools are very useful, with Adobe Illustrator still a bit ahead of the rest if you don’t consider the price tag. Importing PDFs generated with Python or Matlab was, however, never easy. This extension is therefore a useful tool for any scientist using Inkscape for figure design.
Matthijs Dorst asks which software/platform to use to quickly sketch out a custom imaging setup. Recommendations include gwoptics, which looks like a simple and straightforward-to-use library; more advanced, but maybe worth it for perfectionists, is Blender. A collection of example objects by Ryo Mizuta was highlighted, but there might be many more.
Other
I’m no expert on sleep or sleep problems, but this thread brought up a lot of things and drugs that people apparently use to improve their sleep: amitriptyline; glycine + magnesium threonate; additions of taurine, ashwagandha, lavender or valerian are considered; melatonin is recommended by some but not others; cannabis; antihistamines; tincture of hops; phosphatidylserine; a tidbit of morphine; trazodone; sodium oxybate; gaboxadol; modafinil. I have not tried any of these and would not recommend anything, but I’d be curious to have all this weird stuff explained to me by an expert!
Calcium imaging with two-photon point scanning is the standard technique for chronically recording from identified neurons in the living brain of animals. The central piece of a two-photon point-scanning microscope is the scan engine. This can be a complex optical device like a deformable mirror or an acousto-optical deflector, but more often it is just a mirror sitting on a rod and scanning back and forth as fast as possible. The fastest such mirrors are so-called resonant mirrors.
Currently, there is only one major provider of resonant scan mirrors for microscopy, and only a few months ago, lead times had risen to more than a year. Resonant mirrors are therefore much more precious than their mere price tag suggests, and it is worth trying to get the best out of existing resonant scanners instead of replacing them at the first sign of a problem.
Recently, I have been working with Johanna Nieweler, a PhD student in the Helmchen lab, to piece together her two-photon microscope from the remains of a previous microscope. Among the surprisingly numerous problems that Johanna encountered and fixed during this work was one that was ultimately due to the resonant scanner. In this blog post, I will describe how we identified the problem and came up with a – in my opinion – very elegant solution. This might be a useful resource for optical engineers dealing with similar problems and for microscopists who want to understand resonant scanners a bit better.
A periodic line jitter for high-zoom scanning
The problem was not immediately apparent. An image of small fluorescent beads acquired with the scanning microscope looked fine when zooming out:
However, zoomed-in imaging is essential, both for evaluating the imaging quality and for real experiments that resolve subcellular structures. In our case, we noticed an irregular distortion of the scan pattern, as if the beads were changing their shapes or dancing around:
To better understand this distortion, we switched off the slow galvo scanners and performed a line scan with the fast resonant scanner only. This configuration clearly revealed a periodic jitter of the scan phase of the resonant scanner.
Such an artefact could have many possible sources. First, the software that acquires and bins the incoming data stream might have a bug. Second, a vibration could be moving the sample. Third, line noise could be coupling into the “sync signal” emitted by the resonant scanner. Fourth, the mechanical scanning itself could be governed by this periodic modulation.
Finding the problem
The resonant scanner under scrutiny was a 4 kHz resonant scanner from Cambridge Technology. First, we measured the periodicity of the signal distortion: the frequency was around 270 Hz. It was therefore unlikely to be line noise, which sits at 50 Hz or multiples thereof in Europe.
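For anyone who wants to put a number on such a periodicity, one option is to estimate the bead position on every scan line and inspect the spectrum of that position trace, sampled at the line rate. A toy sketch on simulated data (it assumes both sweep directions of the 4 kHz mirror are used, giving an 8 kHz line rate; all values are invented):

    import numpy as np

    line_rate = 8000.0                    # lines/s: two sweeps per 4 kHz mirror period
    n_lines, n_px = 16000, 512
    t = np.arange(n_lines) / line_rate
    jitter = 2.0 * np.sin(2 * np.pi * 270 * t)     # simulated 270 Hz phase jitter (px)
    px = np.arange(n_px)
    lines = np.exp(-0.5 * ((px - 256 - jitter[:, None]) / 3.0) ** 2)  # one bead per line

    centroid = (lines * px).sum(axis=1) / lines.sum(axis=1)   # bead position per line
    spec = np.abs(np.fft.rfft(centroid - centroid.mean()))
    freqs = np.fft.rfftfreq(n_lines, d=1 / line_rate)
    print(f"dominant jitter frequency: {freqs[spec.argmax()]:.1f} Hz")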
Next, we did not find any vibrations that might have caused the problem. We also checked and replaced the power supplies of the resonant scanner, without any improvement.
Finally, we turned our attention to the resonant scanner itself. The resonant scanner produces a so-called “sync signal”, a TTL signal that indicates whether the mirror is moving in the clockwise or counter-clockwise direction. At the turning point of the directional change, the scanner flips the TTL signal and thereby generates the electrical trigger for the next line of the imaging frame. This means that an imprecise generation of the TTL signal, or a wobbly oscillation of the mirror itself, could generate a jitter of the TTL signal and the line-phase modulation we observed.
Indeed, when we looked at the sync signal on the oscilloscope (triggering on a rising flank and inspecting the subsequent rising flank of the TTL), we observed a jitter of the signal that would explain the artefact in the image.
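If one records the rising-edge timestamps of the sync signal (e.g., with a counter input of the DAQ card), the same jitter can be quantified directly. A toy sketch with simulated timestamps (the wobble amplitude is invented):

    import numpy as np

    f0, n = 4000.0, 40000
    k = np.arange(n)
    periods = 1 / f0 + 50e-9 * np.sin(2 * np.pi * 270 * k / f0)   # 270 Hz period wobble
    edges = np.cumsum(periods)            # simulated rising-edge timestamps

    dt = np.diff(edges)                   # measured duration of each cycle
    print(f"mean period: {dt.mean() * 1e6:.3f} us")
    print(f"peak-to-peak period jitter: {(dt.max() - dt.min()) * 1e9:.0f} ns")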
Now, this jitter could be due to a physically wobbling scan mirror, or due to an imprecise readout of the turnaround point by the TTL generator of the resonant scanner. Is it possible to distinguish between these two options? Yes, it is: if the scanner is oscillating properly and only the TTL generation is affected, one could in theory simply replace the periodic TTL signal with a clean one and then observe an artefact-free image.
We therefore replaced the bad TTL signal with an artificial TTL produced by a signal generator, at exactly the resonant scanner’s frequency. Fascinatingly, this procedure fixed the problem beautifully. One can also notice that the picture drifts away if the signal generator frequency does not exactly match the frequency of the resonant scanner:
Video 1. Linescan of beads, with a signal generator replacing the sync signal of the resonant scanner. While recording the video, we slightly modified the frequency of the signal generator, resulting in a drift of the image to the left or right for even the tiniest mismatch between the scanner’s frequency and the signal generator’s frequency. We did not move the sample or the microscope at any time during the recording.
Despite this limitation, we concluded that the scanner was apparently not wobbling and that only the generation of the TTL signal was defective.
From a workaround to a permanent and user-friendly solution
But there is a problem: we cannot simply use the signal generator with a fixed-frequency TTL signal and hope that things are fixed for good. First, the resonant frequency of any resonant scanner changes slightly over time as it warms up. Second, the resonant frequency also varies (by fractions of a percent) when the zoom level of the microscope, and therefore the scan amplitude of the resonant scanner, is changed. A mismatch between the scanner’s frequency and the sync signal would result in the drift visible in the video above.
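To get a feeling for the numbers (illustrative values, not our exact measurements): with the scanner at 4000.0 Hz and the generator at 4000.1 Hz, the relative phase slips by 0.1 cycles per second, so the image wraps around the full scan line once every 10 seconds. Even a warm-up drift of only 0.01 % (0.4 Hz at 4 kHz) would sweep the image across the entire field once every 2.5 seconds. A fixed-frequency generator is therefore hopeless as a permanent solution.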
What we therefore needed was a system that uses the jittery TTL sync signal of the resonant scanner and produces an output that is phase-locked to the sync signal, but without the jitter …
At this point, I vaguely remembered that this can be achieved by a simple analog electronic circuit, and after an internet search, I found the Wikipedia article on the “phase-locked loop” (PLL), which described exactly what we were looking for. A PLL takes an input (in this case, a jittery TTL signal) and creates a synchronized output that is phase-locked to the input and runs at the same frequency but, with appropriately filtered feedback, without the jitter. So we only had to implement this circuit, insert the device between the scanner’s sync signal and the “line trigger” input of the DAQ board, and our images would look perfectly line-triggered!
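To illustrate the principle (not the actual CD4046 circuit described below, which uses a different phase detector), here is a toy software model of a PLL: a phase detector compares input and output phases, a slow proportional-integral loop filter smooths the error, and the filtered error steers the frequency of the loop’s own oscillator. All gains and the jitter model are invented:

    import numpy as np

    fs, f_in, n = 1e6, 4000.0, 200000                # sim rate, scanner freq, steps
    t = np.arange(n) / fs
    phase_in = 2 * np.pi * f_in * t + 0.3 * np.sin(2 * np.pi * 270 * t)  # jittery input

    phase_out = np.zeros(n)
    freq = 2 * np.pi * f_in                          # oscillator's nominal frequency
    integ, kp, ki = 0.0, 100.0, 2500.0               # loop-filter state and gains

    for i in range(1, n):
        err = np.sin(phase_in[i - 1] - phase_out[i - 1])   # phase detector
        integ += ki * err / fs                             # slow integral path
        phase_out[i] = phase_out[i - 1] + (freq + kp * err + integ) / fs

    clean = 2 * np.pi * f_in * t                     # ideal jitter-free phase ramp
    print(f"input  phase wobble: {np.std((phase_in - clean)[n // 2:]):.3f} rad")
    print(f"output phase wobble: {np.std((phase_out - clean)[n // 2:]):.3f} rad")

The key design choice, in hardware as in this toy model, is a loop bandwidth well below the 270 Hz jitter (here around 8 Hz) but high enough to follow the slow thermal drift of the scanner.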
The problem here is that neither Johanna nor I were trained as electrical engineers, so we would have struggled to translate this relatively simple idea into a working device. However, there was a true expert on and lover of electrical circuits at the Brain Research Institute in Zürich: Hansjörg Kasper. Having worked as a support engineer at the institute for several decades, he was not only an expert on basically any practical technical question, ranging from laser physics to soldering techniques, but also very familiar with analog circuits. While nowadays often replaced by microprocessor-based solutions, analog circuit elements such as PLLs were standard components of electrical engineering 30-40 years ago and therefore very familiar to Hansjörg. For example, he had used PLLs in a system he designed more than twenty years ago to track head and eye movements of small animals.
Figure 2. A photo of Hansjörg Kasper in 2011.
Hansjörg was quickly intrigued by our problem and happened to have such a phase-locked loop circuit at hand: the “classic”, as he called it, CD4046 PLL element.
By the way, these circuit elements are quite cheap (<10 Euros). Within one day, Hansjörg soldered the components together to produce the desired behaviour, following the instructions that come with such a circuit element, which indicate how to choose the resistors and other components.
He sent me the diagram of his final circuit, optimized for stabilizing the sync signal of a 4 kHz resonant scanner:
Figure 4. Circuit diagram of a PLL circuit based on a CD4046 element. An inverter (“U2A 74HC14”) is included to generate a signal with the same phase shift as the input signal. At the bottom, it is indicated how to re-dimension the capacitor C1 in order to optimize the circuit for an 8 kHz instead of a 4 kHz resonant scanner. Drawing and annotations were done by Hansjörg Kasper in KiCad.
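As far as I understand the CD4046 datasheet, the VCO center frequency scales roughly inversely with C1 at fixed R1, i.e., f0 ∝ 1/(R1·C1), so moving from a 4 kHz to an 8 kHz scanner amounts to roughly halving C1, consistent with the annotation at the bottom of the diagram.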
Soldering this circuit requires a bit of practice but could be learned rather quickly by any talented tinkerer. The true challenge, in my opinion, is to start from the CD4046 datasheet (or the datasheet of a similar PLL element) and figure out how everything must be connected. Only about 5 pages of the datasheet are relevant, but I would probably not have understood them easily without Hansjörg’s explanations. Hopefully, the circuit diagram above, provided by Hansjörg, will make it easy for anybody who tries to replicate our PLL stabilizer!
How it works
The circuit turned out to work beautifully. We hooked it up to the microscope, and within days we had almost forgotten that it existed – a true hallmark of great engineering: the problem is solved so perfectly and robustly that you quickly forget about its mere existence. Here is the stabilized line scan, in direct comparison with the unstabilized one. The residual slow wiggling of the line is due to 50 Hz line noise (a different problem).
And here the same FOV with a regular scan pattern, showing the bead without the additional dance moves and therefore enabling us to clearly see the point spread function:
If there is anybody out there struggling with a similar resonant scanner problem, I hope that this blog post will give them the tools to address and solve this problem!
In the end, I was curious whether the same solution could also help to stabilize resonant scanners more generally. Resonant scanners are known to become “wobbly” with age or when scanning with low amplitudes (high zoom). I was simultaneously working on another two-photon resonant scanning setup with an 8 kHz scanner, where I noticed some wobbling and instability at very high zoom settings. I therefore used a signal generator to provide a highly stable sync signal in place of the scanner’s own. Unfortunately, no improvement of the imaging quality could be observed. Apparently, this resonant scanner was indeed wobbling physically, while the 4 kHz scanner had been oscillating properly, with only the generation of its TTL signal compromised.
Altogether, this small engineering project with Johanna and Hansjörg was, in my opinion, extremely interesting and valuable. Having grown up in the age of Arduinos and Raspberry Pis, where every problem is solved by a bit of code running on a microprocessor, I was impressed to be reminded of the power of analog circuits. Of course, this implementation was only possible because we had an expert on and lover of analog circuits, Hansjörg Kasper, at our institute.
P.S. Hansjörg had been at the heart of the institute for more than 40 years. He officially retired in 2022 but did the work for this PLL project while working part-time at the Brain Research Institute in the spring of 2023. Unexpectedly and unfortunately, he died in the early summer of 2023. During his time at the institute, his work helped many dozens of PhD students and postdocs tremendously, by solving many small and big technical challenges that typical scientists are rarely equipped to address by themselves. Many a small device and machine built during his era, from synchronization boxes for behavioural setups to our small PLL circuit, will continue to run in the labs of the Brain Research Institute for many years to come.
P.P.S. Big kudos to Johanna Nieweler, together with whom I worked on this project, to Hansjörg Kasper (R.I.P.), who designed and built the PLL circuit, to Martin Wieckhorst, who helped with the first brainstorming about the PLL circuit, and to Fritjof Helmchen, who supervises both Johanna and me.
Bagur, Bourg et al. (2022) from the Bathellier lab in Paris make the interesting finding that the auditory code is represented as temporal sequences of neuronal activity in early auditory processing stages and as spatial patterns in auditory cortex. I am not an expert on auditory cortex, but I found this paper quite compelling, both in terms of methodological approaches and in terms of concepts. Sophie Bagur has already published a few papers that are both excellent and creative and definitely worth a read. For example, in a study with her as senior author, Verdier et al. (2022), on an entirely different topic, they showed that optogenetic stimulation of the medial forebrain bundle can be used to improve behavioral training of mice and very effectively replace water restriction protocols.
Lai, Tanaka et al. (2023) from Albert Lee’s lab wrote a very cool paper in which they made rats navigate mentally. In short, they used the spiking output of the hippocampus, extracted online, to control a virtual reality that either (1) represented the animal itself in a spatial context (“mental navigation”) or (2) represented another object in a spatial context (“telekinesis”). Using this approach, the authors tried to show that the hippocampus represents mental navigation of the self but also mental navigation of objects in the external world. A minor issue I had with the paper is that the authors very prominently write that they constructed a low-latency (“real-time”) closed-loop system, feeding neuronal activity back into the virtual environment. However, this ultra-fast feedback first passes through a decoder that uses a 5-s running window, which basically eliminates the speed of the feedback loop; this detail is hidden somewhere in the Methods section. In addition, I had the feeling that the claim of “mental navigation” would require slightly stronger evidence that the animal did not actually move physically while performing “mental navigation”; the existing analyses could have been deepened a bit. In general, however, I really liked the paper, both for its ideas and for its technical implementation of the brain-machine interface for rats. Definitely worth a read! Let’s dig a bit into the conceptual space around this study. The manuscript made me wonder how different it is whether we mentally move objects (like Tetris pieces, “telekinesis”) or avatars (like in a first-person video game, “mental navigation”). Do both situations feel the same to us? We certainly identify with avatars viewed from a third-person perspective (as in many video games). The transition from third-person avatars (“mental navigation”) to movable objects (“telekinesis”) seems pretty smooth to me: a pixelated avatar of Super Mario feels almost like an object, and we treat Pac-Man like an avatar even though it is barely more than an object. There does not seem to be a sharp transition between an eating blob like Pac-Man or Snake and something less animated like a Tetris piece. Introspection might thus be a good way to better understand how we as humans differentially treat virtual avatars and virtual objects in our minds.
References
Bagur, S., Bourg, J., Kempf, A., Tarpin, T., Bergaoui, K., Guo, Y., Ceballo, S., Schwenkgrub, J., Verdier, A., Puel, J.L., Bourien, J., Bathellier, B., 2022. A spatial code for temporal cues is necessary for sensory learning. https://doi.org/10.1101/2022.12.14.520391
Lai, C., Tanaka, S., Harris, T.D., Lee, A.K., 2023. Mental navigation and telekinesis with a hippocampal map-based brain-machine interface. https://doi.org/10.1101/2023.04.07.536077
Verdier, A., Dominique, N., Groussard, D., Aldanondo, A., Bathellier, B., Bagur, S., 2022. Enhanced perceptual task performance without deprivation in mice using medial forebrain bundle stimulation. Cell Rep. Methods 2, 100355. https://doi.org/10.1016/j.crmeth.2022.100355
Behavioral timescale synaptic plasticity (BTSP) is a form of single-shot learning observed in hippocampal place cells in mice (Bittner et al., 2015, 2017). This finding is both interesting and inspiring for computational neuroscience for several reasons. In the first place, synaptic plasticity is the basis for learning, and any neuronal theory on the organizing principles of the brain must consider synaptic plasticity. In addition, previous findings on plasticity were usually based on artificial stimulation protocols in slices (LTP, LTD, STDP), often requiring unphysiological stimulation intensities, careful selection of stimulation protocols and many repetitions, while BTSP is a single-shot learning paradigm. Here are some recent and interesting theory papers on BTSP, complementing a previous blog post about experimental studies of BTSP:
Nonlinear slow-timescale mechanisms in synaptic plasticity
This review by O’Donnell (2023) nicely covers both experimental results and modeling work on the potential mechanisms of synaptic plasticity at slow timescales. O’Donnell attempts to reconcile seeming discrepancies between experimental results on LTP from a theory perspective, and he stresses the importance of non-linearities for inducing plasticity specifically during bursts but not during regular spiking. A quick and interesting read!
One aspect I found particularly interesting is his point about the relevance of long timescales and non-linear processes for classic slice LTP/LTD or STDP protocols (pages 2-3 of the PDF). Typically, one forgets about the long timescales and focuses only on, e.g., the millisecond timing difference of the STDP protocol.
How BTSP affects subthreshold representations of place
Milstein et al. (2021) perform some really nice experiments and analyses to show that BTSP does not only induce new place fields but can also alter existing synaptic weights in both directions, potentiation and depression.
If you read the abstract of the Milstein et al. study and have no idea what it is about: worry not, the rest of the paper is much easier to digest and actually super interesting. After reading the abstract myself, I was rather confused. One of the problems is that the “bidirectional plasticity” mentioned in the title can be misread as plasticity among bidirectionally connected cell pairs, while here “bidirectional” only refers to depression (down-direction) and potentiation (up-direction) of synapses.
Two things I like in particular about this paper: Fig. 3L, which summarizes the main finding in a very nice visualization; and the experiments reported in Fig. 4A-K, where the authors disentangle the contributions of cellular depolarization vs. initial synaptic weights to plasticity – a beautiful perturbation experiment.
P.S. This is not exactly a very “recent” paper, but I found it useful as context to better understand the subsequent paper by Moldwin et al. (2023).
A calcium level-based, general framework for plasticity
Moldwin et al. (2023) describe a framework for synaptic plasticity centered on calcium levels. It is based on the well-established findings that (a) synaptic depression occurs at low calcium levels, (b) synaptic potentiation occurs at high calcium levels, and (c) synapses remain approximately stable at the lowest (baseline) calcium levels. With their modeling work, the authors extend existing models, making them more flexible and their parameters easier to adjust.
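Stripped to its bones, the underlying picture looks roughly like this (a toy sketch of the classic calcium-threshold scheme; thresholds and rates are arbitrary illustration values, not the paper’s parameters):

    theta_d, theta_p = 0.3, 0.6          # LTD and LTP calcium thresholds (a.u.)

    def dw_dt(ca, w, eta=0.1):
        """Weight change as a step function of the local calcium level."""
        if ca < theta_d:
            return 0.0                   # baseline calcium: synapse stays put
        if ca < theta_p:
            return -eta * w              # intermediate calcium: depression
        return eta * (1.0 - w)           # high calcium: potentiation

    for ca in (0.1, 0.45, 0.9):
        print(f"Ca = {ca:.2f} -> dw/dt = {dw_dt(ca, w=0.5):+.3f}")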
In general, I think it is interesting to build such a more general and flexible framework. At some points, however, I had the feeling that adding more model features to fit all use cases does not necessarily make the model better (more powerful, yes, but not always more useful). For example, the authors write that some experimental edge cases were “incorporated into [their model] by adding additional thresholds into the step function”. Adding parameters to account for special cases does not seem the most elegant approach if the additional complexity does not yield further predictions about model behavior. Despite these concerns, I believe that their general model based on calcium as a single actor is interesting, and their attempt to apply the model to diverse plasticity protocols is useful. It also helps to understand which assumptions are necessary to make a calcium-based plasticity model work for a particular experiment.
In the title, the authors claim that this framework can also describe BTSP. Their take on BTSP is similar to that of Milstein et al. (2021) (see above). Moldwin et al. (2023) assume a summation of synaptic calcium induced by presynaptic stimulation and calcium induced by the plateau potential. The summed calcium signal then induces LTP or LTD at the synapse. The main difference from Milstein et al. (2021) is therefore the idea that both slow signals, the eligibility trace and the instructive signal, are encoded in the local calcium concentration; apart from that, the mathematical shape of their model seems pretty similar to Milstein et al. (2021), as far as I can judge.
The main problem of this application to BTSP is the decay time of calcium at synaptic locations. The authors make the following assumption: “The synaptic calcium decayed with a time constant of ~2 seconds”. This seems really unrealistic – calcium levels after synaptic activation typically decay an order of magnitude faster. Most experimentalists would assume that this long timescale is instead carried by the slower turnover of more stable mediators like CaMKII (for potentiation) or calcineurin (for depression); see also the review by O’Donnell (2023). Overall, I have the impression that this framework is a nice and generalizable model for LTP, LTD and STDP, but less so for BTSP. It is unclear to me why they advertise their model’s coverage of BTSP so prominently in the title – this aspect may not represent the best part of their work.
A review on normative models of synaptic plasticity
In their review paper, Bredenberg and Savin (2023) discuss the criteria that should be used to judge normative models of synaptic plasticity. What do they mean by “normative models”? The authors draw a very clear distinction between phenomenological and mechanistic models on the one side, which use fitting or equations to describe experimental observations, and normative models on the other side, which try to understand the importance of the modeled process for the organism. Normative models therefore ask: what should synaptic plasticity look like in order to serve the organism well? That’s an interesting perspective, driven not bottom-up by experimental observations but top-down from “desiderata”, as the title of this review indicates.
The review is interesting and establishes – not surprisingly, given its focus on the practical usefulness of plasticity models – many connections to deep learning-based approaches. I was, however, slightly disappointed by the apparent lack of discussion of behavioral timescale synaptic plasticity (BTSP). For example, the authors describe how to choose an objective function for synaptic plasticity to improve performance and how to linearize it for small weight changes, which I found a very interesting point of view (see the sketch below). It would be instructive to investigate BTSP and single-shot learning in such a model. In particular, it seems to me that BTSP – due to its single-shot learning behavior – cannot be a learning rule based on small, linear weight changes. It would therefore be interesting to describe BTSP in a normative model and, for example, to investigate how a loss function could be used to derive or describe BTSP.
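To make the normative logic concrete, here is a toy sketch of my own (not a model from the review): choose a squared-error objective for a single linear neuron, and taking its gradient for small weight changes yields the classic delta rule:

    import numpy as np

    rng = np.random.default_rng(3)
    w = 0.1 * rng.standard_normal(5)     # initial synaptic weights
    eta = 0.05                           # learning rate (small weight changes)

    for step in range(2000):
        x = rng.standard_normal(5)       # presynaptic activity
        target = x.sum()                 # what the neuron "should" output
        y = w @ x                        # postsynaptic activity
        w += eta * (target - y) * x      # gradient step on 0.5 * (target - y)**2

    print("learned weights:", np.round(w, 2))   # converges towards all ones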
Did I miss an interesting recent paper on this topic? Let me know!
Moldwin, T., Azran, L.S., Segev, I., 2023. A Generalized Framework for Calcium-Based Plasticity Describes Weight-Dependent Synaptic Changes in Behavioral Time Scale Plasticity. https://doi.org/10.1101/2023.07.13.548837
Behavioral timescale synaptic plasticity (BTSP) is a form of single-shot learning observed in hippocampal place cells in mice (Bittner et al., 2015, 2017). The idea is that synaptic activations leave behind “eligibility traces” that are converted into potentiation by a dendritic plateau potential/burst (“instructive signal”) arriving up to several seconds after the first signal. This result has turned out to be one of the most reproducible findings of recent systems neuroscience, being replicable across labs and across stimulation protocols (via single-cell electrophysiology (Bittner et al., 2015) but also via optogenetic induction (O’Hare et al., 2022)). Many different labs are working in this interesting field, and here are a few recent papers on this topic.
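In its simplest form, this standard picture can be written down in a few lines. A toy sketch (the seconds-long time constant is in the reported ballpark; everything else is invented for illustration):

    import numpy as np

    dt = 0.01                            # time step, s
    t = np.arange(0, 8, dt)
    tau_elig = 2.0                       # eligibility trace decay, order of seconds

    pre_t = 1.0                          # time of synaptic activation
    plateau_t = 4.0                      # instructive plateau arrives 3 s later

    elig = np.where(t >= pre_t, np.exp(-(t - pre_t) / tau_elig), 0.0)
    plateau = (np.abs(t - plateau_t) < 0.15).astype(float)   # ~300 ms plateau

    dw = (elig * plateau).sum() * dt     # overlap of trace and plateau -> potentiation
    print(f"weight change for a 3 s pre->plateau gap: {dw:.3f}")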
A role for CaMKII in BTSP
Xiao et al. (2023) from the Magee lab report on a “critical role for CaMKII” in BTSP. “CaMKII” is widely known as a promoter fragment specific for excitatory neurons, but CaMKII is more than that: it is a calcium-dependent kinase (but see the recent paper by Tullis et al. (2023)) with important functions in plasticity. After phosphorylation, CaMKII remains in this activated state for several seconds and is therefore a candidate actor for BTSP, which acts on a timescale of several seconds as well. I learned from this paper’s introduction that there is a FRET-based CaMKII sensor, which revealed a shorter decay time of CaMKII’s phosphorylated state when CaMKII carried a specific mutation.
The authors take advantage of an established mouse line carrying this specific mutation, which results in deficits in memory and synaptic plasticity. Using in vivo patch clamp, Xiao et al. found that BTSP was prevented or reduced in these mutant mice (Fig. 3), suggesting an indeed critical role of CaMKII for the induction of BTSP.
As a side finding, the authors describe that the intrinsic burst propensities of CA1 neurons were quite different between mutants and wildtypes (unfortunately, they show only a few example traces of such bursts), and they follow up on this result by characterizing the intrinsic properties in slices. A question that came up, and was partially addressed by the authors, concerns possible side-effects of the chronically mutated CaMKII on development, on other brain regions, or on the hippocampus itself before the start of the experiment. However, the main finding of the manuscript (prevention of BTSP in CaMKII-mutant mice) seems solid despite these unavoidable confounds. Of course, the study does not address how exactly CaMKII is involved in BTSP. Addressing this question will probably require fluorescent reporters of CaMKII activity and a very tedious dissection of molecular signaling pathways.
Monitoring of place cell induction via BTSP with calcium imaging
Grienberger and Magee (2022) conducted a simple and elegant calcium imaging study that confirms and connects previous ideas on hippocampal plasticity and BTSP. It has long been known that hippocampal place cells are more abundant around behaviorally important locations, for example, reward locations. Here, the authors use calcium imaging to observe the transition from a similar representation of all locations towards a preferred representation of the reward location. They find evidence that this transition is induced by BTSP. To this end, they identify putative BTSP burst events in single neurons from calcium imaging data and track the activity of these neurons before and after such burst events. Moreover, they use optogenetics to show that input from entorhinal cortex (EC3) is required for the transformation from equal to reward-preferring representation of space. This finding is in line with existing ideas that EC3 projections provide the instructive signal that triggers bursts in apical dendrites and BTSP in CA1 neurons.
One of the most critical parts of this study is the identification of place field-inducing bursts from calcium imaging (in previous studies, this was usually done with in vivo whole-cell recordings). There is some uncertainty in the identification of these induction events (see the “CA1 place cell identification” subsection of the Methods; also check out a more recent preprint by the same lab, which uses much more complex criteria to detect BTSP events from calcium imaging; see the Methods subsection “BTSP Analysis” in Vaidya et al. (2023)). It would be very interesting to study thoroughly – for example, with simultaneous calcium imaging and single-cell electrophysiology in vivo – which calcium events correspond to BTSP/burst events and which correspond to “regular” spiking. However, this seems technically too challenging for now, and the more ad-hoc methods used to identify such events in this study seem fully adequate for the question addressed. P.S. If you find this paper interesting, check out the publicly available reviewer reports.
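Just to illustrate the flavor of such ad-hoc event detection, here is a toy sketch on simulated data: flag unusually large calcium transients as putative plateau/induction events (the threshold and the simulated trace are invented; the actual papers use more careful criteria, as noted above):

    import numpy as np

    rng = np.random.default_rng(4)
    dff = 0.05 * rng.standard_normal(5000)       # baseline dF/F fluctuations
    dff[2000:2020] += 3.0                        # one huge, long-lasting transient

    thresh = dff.mean() + 8 * dff.std()          # conservative amplitude threshold
    above = np.flatnonzero(dff > thresh)
    onsets = above[np.insert(np.diff(above) > 1, 0, True)]   # collapse runs to onsets
    print(f"putative induction events at samples: {onsets}")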
Voltage imaging of BTSP with optogenetic perturbations
Fan et al. (2023) use an impressive combination of voltage imaging and optogenetic techniques in vivo to study BTSP. Let me cite the abstract to give an idea of the methodological craziness of this study: “We combined genetically targeted voltage imaging with targeted optogenetic activation and silencing of pre- and post-synaptic neurons to study the mechanisms underlying hippocampal behavioral timescale plasticity.” Optogenetic induction of place cells in hippocampus has been done before, but not in combination with voltage imaging; this, together with simultaneous silencing of specific projections, sounds like a completely crazy project. Several interesting findings: the excitability of pyramidal neurons (as measured with combined optogenetic activation and voltage imaging) did not increase upon BTSP. CA2/3-to-CA1 inputs were potentiated upon BTSP (optogenetic activation of CA2/3 – although only contralateral – with a fiber during voltage imaging of CA1 neurons). And the activity of CA2/3 cells projecting to CA1 was required for BTSP (optogenetic silencing of projection-specific CA2/3 neurons during voltage imaging in CA1).
Overall, I would have loved to read more about these experiments and their interpretation. Grienberger and Magee (2022) showed that (ipsilateral) EC3 input is necessary for BTSP, while Fan et al. (2023) showed that (contralateral) CA2/3 input is required. The idea is that both CA2/3 input (for the eligibility trace) and EC3 input (for the instructive signal) are required, and both results are consistent with the standard theory of BTSP. It is only a bit surprising that even the contralateral CA2/3 input, as reported by Fan et al., was sufficient. In their study, the contralateral side was inhibited to enable fiber placement at CA2/3 and voltage imaging in CA1. This must be kept in mind because the authors, to access CA1 with 1P voltage imaging, also removed the central part of the external capsule; I wonder whether this removed part of the corpus callosum might contain some of these contralateral projections.
A plasticity rule in cortex reminiscent of BTSP
Caya-Bissonnette et al. (2023) from Jean-Claude Béïque’s lab in Ottawa used slice physiology and modeling to show that a variant of BTSP exists in layer 5 pyramidal cells of mouse cortex (S1). I have to admit that the often very long and complicated sentences in the Results section and the slightly overcrowded figures make it a tough paper to read, but it is worth it.
The authors use pre- and postsynaptic stimulation pairing paradigms of the kind used extensively since the 1990s – but with the crucial difference that pre- and postsynaptic events were separated not by milliseconds but by 0.5-1.0 s. Using this approach, they found a potentiation of synapses that did not depend on the temporal order of pre/post stimulation; therefore, they call this plasticity rule “associative”. The authors furthermore show that the endoplasmic reticulum (ER; reminiscent of O’Hare et al. (2022)) and several related calcium signaling pathways are involved in the latent “eligibility trace” that holds the memory of the first stimulation until the second stimulation arrives, by extending the decay time constant of calcium (Fig. 4). The decay time is not extended by much, but maybe this slight increase is enough to induce plasticity.
Altogether, this is a very interesting study focused on slice physiology. On one hand, this is a limitation, since the paper claims to have found a cortical variant of hippocampal BTSP, an effect that was primarily convincing because it had been shown in behaving animals. One should therefore critically inspect the choices of the plasticity protocol. For example, I noticed that the authors used a 10×20 Hz stimulation, which seemed to me perhaps too strong and repetitive and does not resemble typical BTSP events in vivo; moreover, during BTSP events in vivo, bursting and the associated plateau potentials were shown to occur in the postsynaptic, not the presynaptic cell (Bittner et al., 2015). Maybe I missed it, but I did not find this aspect discussed in the paper.
On the other hand, despite this limitation, the study is very interesting since it demonstrates how powerful slice physiology can be and how much we miss by focusing only on processes accessible in vivo. Moreover, it shows under which conditions a classical STDP protocol can be transformed into a longer-timescale “BTSP” (Fig. 1F). To my knowledge, a plasticity rule with such long delays had not been convincingly demonstrated in cortex before, neither in slices nor in vivo. It may be a stretch to call the observed effect “BTSP”, drawing the analogy to the in vivo results from hippocampus and thereby gradually diluting the meaning of the term. But this is just naming conventions – the science in the paper is interesting.
That’s it from my side about interesting papers on behavioral timescale synaptic plasticity.
Did I miss an interesting recent paper on this topic? Let me know in the comments!
Xiao, K., Li, Y., Chitwood, R.A., Magee, J.C., 2023. A critical role for CaMKII in behavioral timescale synaptic plasticity in hippocampal CA1 pyramidal neurons. https://doi.org/10.1101/2023.04.18.537377