Annual report of my intuition about the brain (2022)

How does the brain work and how can we understand it? To view this big question from a broad perspective at the end of each year, I’m reporting some of the thoughts about the brain that marked me most during the past twelve months – with the hope of advancing and structuring the part of my understanding of the brain that is not immediately reflected in journal publications. Enjoy the read! And check out the previous year-end write-ups: 2018, 2019, 2020, 2021.

In the write-ups of the last few years, I have been thinking about experiments to study self-organized plasticity in living brains. This year, I have been busy with preparations for the implementation of this line of research and hope to be working on it next year. Experimentally, I have been mostly occupied with – and more and more intrigued by – the study of hippocampal astrocytes and how they integrate calcium signals. In experiments connected to this project, we are now studying the role of neuromodulation and the locus coeruleus in more detail. And I’m glad that I can learn more about this interesting brain area by doing experiments myself. But I will discuss this in more detail in another blog post.

For this write-up, I want to discuss a small subfield of neuroscience that I only became aware of this autumn and that is closely related to ideas of self-organized neuronal networks: the experimental study of learning in cell cultures. How can the neuronal system of a cell culture be used to interact with a (virtual) world in a closed loop? In this blog post, I will discuss a few important papers to understand what can be studied with this model system of learning.

In conclusion, I think that this field is interesting in principle, as it provides a method to study plasticity (although this is not necessarily the primary aim of this kind of research). The method suffers, as do most in vitro slice approaches, from the problem that experimental protocols for plasticity induction might be artificial and not related to processes in vivo.

I became aware of the field in October, when a high-profile paper on this topic came out and was prominently covered on social media – partly for making misleading claims and for not citing prior research. I want to do better and start chronologically with a seminal paper from 2001.

A network of cultured cortical neurons is trained to stop a stimulus

Shahaf and Marom showed in 2001 what can be described as “learning-in-a-dish” [1,2]. In these experiments, they grew a culture of cortical neurons on a 2D multi-electrode array (MEA) such that they could both stimulate these neurons and record from them. They used this system to provide some neurons with a specific stimulus. When the neuronal network exhibited a certain firing pattern briefly after stimulation, the stimulation was stopped. With this approach, the cultured network was embedded in a closed-loop interaction.
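To make the closed loop more concrete, here is a minimal sketch of such a stimulation controller in Python. The functions stimulate and record_window as well as all numbers are hypothetical placeholders for illustration, not the original experimental parameters used by Shahaf and Marom.

```python
# Minimal sketch of a closed-loop controller in the spirit of Shahaf & Marom:
# stimulate repeatedly and stop the stimulation as soon as the recorded response
# in a predefined time window reaches a criterion. All numbers are illustrative.

def closed_loop_trial(stimulate, record_window, max_stims=600,
                      response_threshold=2, window_ms=(40, 60)):
    """Apply stimuli until the desired firing pattern appears, then stop."""
    for n_stims in range(1, max_stims + 1):
        stimulate()                          # e.g., a pulse on a chosen electrode pair
        n_spikes = record_window(window_ms)  # spike count in the response window
        if n_spikes >= response_threshold:   # desired firing pattern detected
            return n_stims                   # stimulation stops: the "learning" signal
    return max_stims                         # criterion not reached in this trial
```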

Interestingly, the cultured network indeed started to show these stimulation-stopping spiking patterns more and more often. The observation is easily summarized by figure 2 of their experimental paper [1]. After learning, the network is much more active in a pre-defined time window (shaded area), thereby shutting down the stimulation:

This is a fascinating and also surprising observation. It seems as if the network realizes that there is a stimulus, decides that the stimulus is annoying and therefore takes measures to stop and prevent it. Such a description is, however, highly anthropomorphic and does not really help to understand what is going on.

According to Shahaf and Marom, their observation shows first of all that a neuronal network does not depend on a distinct implementation of reward or other forms of reinforcement. Instead, the network, following Shahaf and Marom, explores configurations until a desired state is reached (in this case: the state of stimulus removal) [1]. The authors discuss the implications in a bit more detail in [2] (check out section 7) but remain rather vague on the possible mechanisms of synaptic plasticity that might underlie such behavior.

A network of cultured cortical neurons interacts with a virtual world

Going slightly beyond the very simple closed loop of Shahaf and Marom, researchers from the lab of Steve Potter, which according to the lab’s website has “created the field of embodied cultured networks”, used the activity of the cultured neurons to drive the behavior of an “animal” in a virtual world [3]. The sensory feedback received by this animal in the virtual world is fed back to the cultured neurons. The focus of this study is primarily on showing that it is possible to couple a cultured network and a very simple virtual environment in a closed loop. This sounded particularly fascinating at the time, as the paper was published shortly after the movie The Matrix had come out.

Afterwards, in order to be able to evaluate learning and plasticity systematically, the lab moved to specific tasks based on this experimental design [4]. This study by Bakkum, Chao and Potter is conceptually close to the studies by Shahaf and Marom discussed above. The experimental design is depicted in this nice schematic (part of figure 1 in [4]) and shows a clearly identified mapping of the population activity onto a “motor output”.

Here, SBS (unpatterned shuffled background stimulation) serves as a kind of reward or neutral stimulus, while PTS (patterned training stimulation) serves as an aversive stimulus that drives the network towards different output activity patterns. The (unchanging) context-control probing sequence (CPS) is used as a probe, and the network response to the CPS pattern is regarded as the system’s motor output. Therefore, PTS and SBS act as plasticity- or feedback-stimuli, whereas CPS defines a probing time window to elicit the “motor output”.

The authors show that the networks could learn to provide mostly correct motor output when trained with such a paradigm. In addition, they quantify plasticity processes over time. Plasticity was higher during the learning procedure compared to baseline and remained higher for >1h after training. To summarize, the authors do not dig into mechanisms of cell-to-cell plasticity but rather provide a broader, systemic description of what is going on.

A feature of this experimental study that is difficult to grasp is the design of the patterned training stimulations (PTSs). The authors repeatedly applied different stimulation patterns (e.g., PTS1 stimulates electrodes 1, 4, 7 and 15, while PTS2 stimulates electrodes 6, 8, 19, 20 and 21, each with specific stimulation intensities). Their algorithm was designed to choose from a pool of PTSs the one that resulted in plasticity towards a network state that generated the desired “motor” output. In 2022, most people (or at least I myself) are so used to deep learning and gradient descent that it feels almost strange to read about such a random exploration approach.
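To illustrate the logic of this paradigm, here is a toy sketch of the training loop. All functions and the PTS pool are hypothetical placeholders; in particular, the real algorithm selected the PTS that was estimated to drive plasticity towards the desired output, whereas the sketch below simply picks one at random.

```python
import random

# Toy sketch of a training loop in the spirit of Bakkum, Chao & Potter (2008):
# probe the network with the fixed CPS, read out the "motor output", and apply
# either neutral background stimulation (SBS) or a patterned training stimulus
# (PTS) depending on whether the output was correct.

pts_pool = [f"PTS{i}" for i in range(1, 6)]   # placeholder pool of candidate PTSs

def training_loop(apply_cps, read_motor_output, apply_pts, apply_sbs,
                  desired_output, n_trials=1000):
    history = []
    for _ in range(n_trials):
        apply_cps()                        # fixed context-control probing sequence
        output = read_motor_output()       # network response in the probing window
        if output == desired_output:
            apply_sbs()                    # neutral / "reward" background stimulation
        else:
            pts = random.choice(pts_pool)  # simplified: random exploration over PTSs
            apply_pts(pts)                 # plasticity-inducing "aversive" stimulus
        history.append(output == desired_output)
    return history
```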

Interestingly, the authors also investigated a comparable paradigm in a separate, purely theoretical study [5]. In this study, they replaced the cultured network with a simulated network and found learning very similar to the in vitro study. They found that this behavior depended on spike-timing-dependent plasticity (STDP), a cell-to-cell synaptic plasticity rule, and on short-term depression (STD), a short-term plasticity rule that helps to prevent large system-wide bursts.
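For readers who are not familiar with STDP, here is a minimal pair-based formulation with exponential windows; the parameters are generic textbook values and not the ones used in [5].

```python
import numpy as np

# Minimal pair-based STDP rule: pre-before-post spike pairs potentiate the synapse,
# post-before-pre pairs depress it, each with an exponentially decaying window.

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_update(w, t_pre, t_post, w_max=1.0):
    """Update synaptic weight w for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                  # pre before post -> potentiation (LTP)
        w += A_plus * np.exp(-dt / tau_plus)
    elif dt < 0:                                # post before pre -> depression (LTD)
        w -= A_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))

# Example: a pre spike at 10 ms followed by a post spike at 15 ms strengthens the synapse.
w_new = stdp_update(0.5, t_pre=10.0, t_post=15.0)
```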

From my perspective, it is interesting to see that the cell culture results could be replicated with a network based on STDP. A large part of the paper, however, focuses instead on the question of how to choose and adapt stimuli that make the network learn the desired output. I would have been interested to learn more about how STDP is shaped by the stimulation patterns and driven towards the desired output.

Learning by stimulation avoidance and autopoiesis

The group of Takashi Ikegami tried to better understand what is actually going on when neuronal cultures, as in the experiments by Shahaf and Marom [1], learn to shut down an external stimulus. They studied this effect using simulated neuronal networks and, as in the work from the Potter lab [5], they identified spike-timing-dependent plasticity (STDP) as an essential factor mediating this learning process. Also as in the work from the Potter lab, they used an “embodied” application with a moving simulated robot that learns to avoid walls [6].

The main take-away of their simulation experiments is that the experiments by Shahaf and Marom can be replicated in silico and that the observed phenomena can be interpreted as “Learning by Stimulation Avoidance (LSA)”. The authors write in the discussion:

LSA is not just an another theoretical neural learning rule, but provides a new interpretation to the learning behavior of neural cells in vitro. (…) LSA relies on the mechanism of STDP, and we demonstrated that the conditions to obtain LSA are: (1) Causal coupling between neural network’s behaviour and environmental stimulation; (2) Burst suppression. We obtain burst suppression by increasing the input noise in the model or by using STP.

I have the impression that this is an overstatement of what “LSA” is. First, both STDP and burst suppression by spontaneous activity had already been described as ingredients that enable learning-in-a-dish [5]. Second, I don’t think this is “a new interpretation” of in vitro learning but simply the demonstration of a model that is consistent with the experimental observations from [1].

The authors expand a bit and provide more context:

As we have shown, LSA does not support the theory of the Stimulus Regulation Principle. It could be closer to the Principle of Free Energy Minimisation introduced by Friston. The Free Energy Principle states that networks strive to avoid surprising inputs by learning to predict external stimulation.

The “stimulus regulation principle” suggests a network that samples possible network configurations and stops the exploration once a favorable state is reached. A favorable configuration would be one that manages to suppress the stimulation. The STDP model put forward in [6], in contrast, is based on a synaptic learning rule that specifically strengthens and weakens synapses in order to reach a favorable network configuration.

The mentioned “free energy principle”, on the other hand, is infamous for being so general that almost any imaginable theory is consistent with it. The most popular theory about an implementation of the free energy principle is probably predictive processing [7]. In classical predictive processing, an internal model in the brain provides top-down signals to cancel out expected sensory inputs. The internal model adapts in order to improve the predictive power of the top-down signals. It is interesting how this typically hierarchical top-down/bottom-up view breaks down when applied to the model system of a dissociated cell culture. It would be hard to argue that there is a hierarchy in this dish. And still, the stimulus-avoidance paradigm does bear a clear resemblance to surprise minimization and predictive processing.

For more work from the Ikegami lab in collaboration with others, check out [8], where they interpret the finding that neurons in cultured networks which receive uncontrollable stimulation become disconnected from the network. They do so in the context of the rather old concept of autopoiesis, thereby going beyond the simple principle of “learning by stimulation avoidance” (LSA).

A network of cultured cortical neurons learns to play a simplified version of pong

In October 2022, a related paper by Kagan et al., with Karl Friston – who is famous for inventing the free energy principle – as senior author, was published [9]. It received a lot of attention, but also quite some criticism from several neuroscientists on Twitter (ref 1, ref 2, ref 3, and other discussions that were deleted afterwards). The paper was criticized for failing to cite previous research and for interpreting the results in a misleading manner. For example, the title speaks of cultured neurons that “exhibit sentience”. This statement does not rely on any commonly used definition of “sentience” and therefore seems quite misleading for the reader. Unfortunately, this is not the only part of the manuscript that sounds like a buzzword combination made up by ChatGPT. Check out this excerpt of the introduction:

Instantiating SBIs [synthetic biological intelligences] could herald a paradigm shift of research into biological intelligence, including pseudo-cognitive responses as part of drug screening, bridging the divide between single-cell and population-coding approaches to understanding neurobiology, exploring how BNNs compute to inform machine-learning approaches, and potentially giving rise to silico-biological computational platforms that surpass the performance of existing purely silicon hardware.

But let’s put all the buzz aside and discuss the science. Similar to the previously discussed studies, Kagan et al. used cultured neurons that were integrated into a closed loop using multi-electrode arrays. The virtual environment of this closed loop was a simplified version of the game “pong”. The multi-electrode array (MEA) was used to provide the cell culture with “sensory” feedback about the position of the ball and the success of the game, and to read out a “motor” output from the MEA activity pattern, which was fed back to the game in order to control the paddle.

Pong as implemented in the paper was less demanding than the original game (the paddle was half as long as the side of the arena). Despite this simplification, the performance of the cell culture controlling the game was not very impressive. The cultured network, after some learning, managed to hit the ball on average a bit more often than once before losing the game. This performance is not much higher than (but significantly above) chance level.

The experimental design is well illustrated by this panel from their figure 5 (taken from the preprint, which had been published under the CC-BY-NC-ND-4.0 license):

The electrode array is subdivided into a “sensory” region indicated by the crosses and a “motor” region indicated by up and down arrows; the firing patterns in the motor region move the paddle up or down. The sensory region is provided with sensory feedback about the position of the ball, but also with a predictable stimulus when the ball hits the paddle and a random stimulus when the paddle misses the ball. Over time, the neurons learn to avoid the random stimulation by hitting the ball more often.

The interesting aspect is that the network seems to learn based on an intrinsic preference for predictable over unpredictable stimuli. The authors interpret this result as supportive evidence for the free energy principle, because the system seems to learn the game in order to escape punishment by the random and unpredictable stimulus.

I have, however, some doubts about the interpretation of the results and also about the conceptualization. First, it is strange that predictable and unpredictable stimuli are used to reward or punish the system. This is not, according to my understanding, how the free energy principle works. One would rather expect the system (the ball/paddle interaction) to be modeled by the neuronal network and thereby become predictable. There would be no need for additional reinforcement by predictable stimuli acting as a reward. Interestingly, in Bakkum et al. [4], in contrast, the negatively reinforcing stimulus was patterned while the stabilizing, positively reinforcing stimulus was random. This shows that different stimuli are used for different purposes, and the interpretation of their effect on the network depends on the conceptual framework of the paper.

Second, it is strange that the predictable stimulus (75 mV at 100 Hz over 100 ms; unfortunately, the duty cycle is not stated) and the unpredictable stimulus (150 mV at 5 Hz over 4000 ms) differ quite a bit in terms of stimulation strength, duration and frequency. One needs to be aware that the “predictable stimulus” is in reality a high-frequency tetanic stimulation. Such tetanic stimuli are known to be able to drive long-term potentiation (LTP). It is not hard to imagine that the slower unpredictable stimulation results in long-term depression (LTD), not by virtue of being unpredictable but by virtue of its frequency and duration.

Therefore, an alternative and, in my opinion, much more parsimonious explanation of the results is that the system does not try to minimize surprising/unpredictable inputs, but that it potentiates, via tetanic stimulation, the neuronal patterns that precede successful performance. I am therefore not convinced that Kagan et al. demonstrated a phenomenon that can be linked to predictive processing or the free energy principle in the way they describe it.

However, I would also like to highlight the positive sides of the paper: the figures are well-designed; a preprint is available; the experiments were done with several conditions which allow for comparisons across conditions (e.g., different cell lines); and the experimental system was characterized with several complementary methods (including IHC, EM, qPCR).

Conclusion

The fact that cultured neuronal networks can interact with a closed-loop environment and learn to optimize these interactions is quite fascinating. The interpretation of such behavior as “learning by stimulation avoidance” [6], “autopoiesis” [8] or a reflection of the “free energy principle” [6,9] is certainly intriguing. However, these interpretations might rather serve as metaphors or high-level descriptions and do not really provide a deeper analysis or understanding of the underlying mechanisms. One possible starting point to investigate the mechanisms of such learning could be the STDP rules that were found to be consistent with the learning behavior in simulation studies [5,6]. It would be possible to make predictions about how spike sequences evolve over learning in the dish, and to test those predictions in experiments.

It is remarkable how limited the performance of these cultured in vitro systems is when they are trained to control a virtual agent or a game. The performance is, seemingly without exception, barely above chance level and nowhere close to mastering the virtual world. Years ago, deep Q-learning already achieved close-to-perfect performance in video games far more complex than “pong”. I do not think that anybody should make a point of “intelligence in a dish” when the dish can barely process a binary input.

However, I am still somehow intrigued by the experimental findings, especially those made initially by Shahaf and Marom [1,2]. Would it be possible to observe the same behavior with a more naturalistic network, for example a cultured slice, or an ex vivo explant of a brain, or even a brain region in a living animal? For example, one could use optogenetics to stimulate a subset of cortical neurons in the mouse (“sensory” neurons), use calcium imaging or electrophysiology to record from another subset of neurons in the same brain area (“motor” neurons) and stop the stimulation once the “motor” neurons show a specific activity pattern. Using this experimental design, one could test for example whether the learning by stimulation avoidance can also be elicited in vivo.

Another aspect that might deserve attention is the apparent dependence of plasticity processes on a strong stimulus. In the experiments by Shahaf and Marom [1], the stimulus co-occurs with the activity pattern that is then modified by plasticity. In Bakkum et al. [4], the plasticity-inducing stimulus is a tetanic stimulation following the activity pattern. Very similarly, in Kagan et al. [9], the tetanic stimulus directly follows the activity pattern that is strengthened afterwards. The observed effect could therefore be the reflection of a very coarse form of plasticity in a network that is strongly driven by an external stimulus which somehow reinforces previous activity patterns.

Overall, from all these studies it becomes clear that networks of cultured neurons do indeed seem to show some level of systems-level learning when integrated into a closed loop. I have the impression that the induced plasticity (e.g., by a strong tetanic stimulation) is not very naturalistic. Despite this limitation, it would be interesting to investigate experimentally which plasticity rules underlie such behavior. It is likely that the plasticity studied in slices since the ’90s is artificial in a similar way, since it does not include neuromodulation or realistic levels of external calcium. And still, these slice experiments brought forward useful concepts that can be tested in living organisms. Similarly, I think there might be interesting aspects to studying plasticity rules in cultured neurons in closed loops. But I do not think that it is of any use to frame such cultured networks as “synthetic biological intelligence” [9] or to overinterpret the results in a theoretical framework of choice – in particular if the performance of this “intelligence” remains so much lower than the lowest standards of both biological and artificial neural networks.

References

[1] Shahaf, Goded, and Shimon Marom. Learning in networks of cortical neurons. Journal of Neuroscience 21.22 (2001): 8782-8788.

[2] Marom, Shimon, and Goded Shahaf. Development, learning and memory in large random networks of cortical neurons: lessons beyond anatomy. Quarterly reviews of biophysics 35.1 (2002): 63-87.

[3] DeMarse, Thomas B., et al. The neurally controlled animat: biological brains acting with simulated bodies. Autonomous robots 11.3 (2001): 305-310.

[4] Bakkum, Douglas J., Zenas C. Chao, and Steve M. Potter. Spatio-temporal electrical stimuli shape behavior of an embodied cortical network in a goal-directed learning task. Journal of neural engineering 5.3 (2008): 310.

[5] Chao, Zenas C., Douglas J. Bakkum, and Steve M. Potter. Shaping embodied neural networks for adaptive goal-directed behavior. PLoS computational biology 4.3 (2008): e1000042.

[6] Sinapayen, Lana, Atsushi Masumori, and Takashi Ikegami. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics. PloS one 12.2 (2017): e0170388.

[7] Keller, Georg B., and Thomas D. Mrsic-Flogel. Predictive processing: a canonical cortical computation. Neuron 100.2 (2018): 424-435.

[8] Masumori, Atsushi, et al. Neural autopoiesis: Organizing self-boundaries by stimulus avoidance in biological and artificial neural networks. Artificial Life 26.1 (2020): 130-151.

[9] Kagan, Brett J., et al. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 110.23 (2022): 3952-3969.


“Laser Scanners” by William Benner

William Benner is a scanner enthusiast and the president of the company Pangolin, which sells equipment mostly for laser shows but also for other applications. Some years ago, he wrote a book on “Laser Scanners”, which is available through this website and can also be obtained as a PDF upon request (or directly as a PDF).

It’s an interesting read, also for microscope builders. The book is not too technical but offers inspiration and ideas even for more experienced scan system designers.

For example, the book covers the basics of different scan systems, including galvo scanners, resonant scanners, acousto-optic scanners, MEMS scanners, and others. However, the main focus is on galvo scanners: how they are built, how they are controlled, and what can be done with them.

The chapter that I found most interesting (naturally, because of my own work on z-scanning) is chapter 14, “Scanning in the third (focus) dimension”. The retroreflector approach is really cool!

The beam path designs shown in chapter 15, “Scanner blanking”, are quite inspiring, not only for blanking a beam but also for other applications. For example, one of the scan beam path designs from this book was later re-invented as the so-called scandreas for light sheet microscopy.

Chapter 16 suggests wide-angle scan lenses, which made me think about the potential use of such a concept for microscopy, maybe together with scan devices that scan very fast but only over a tiny optical angle. It would be interesting to simulate and optimize such a beam path (and then probably understand all the reasons why it would not work for microscopy).

Anyway, check out the book. It’s a colorful mix of technical manual, advertisement for his own company and pieces of entrepreneurial advice. There are some explanations that could be improved (for example the chapter on scan lenses), but that’s okay. The book is an easy read, avoids jargon and describes some interesting ideas that I, for example, was not aware of before – useful!


Ambizione fellowship and an open PhD position

I’m glad to share that I am going to start my own junior research group at the University of Zurich in March 2023!

As an Ambizione fellow, I will receive funding for my own salary, some equipment, consumables and a PhD student (a total of ca. 900k CHF). There will be possibilities to recruit more students, but I will try to start small, so that I can spend my time on my students, experiments and data analysis and less on the bureaucratic obligations that come with other positions. The general research direction of my group is already reflected in some write-ups of previous years (2018, 2019, 2020, 2021). There will be a focus on single neurons, plasticity, possibly also dendrites, and closed-loop paradigms.

My group will benefit from being part of the lab of Fritjof Helmchen. Working with all the resources and experts available in the Helmchen Lab, and at the same time being supervised by myself, is probably not a bad combination!

So if you’re finishing your Master’s right now and are looking for a PhD position, or if you have a talented Master’s student who would be a good fit for my group, get in touch with me. Here’s the official position opening:

Project description

The SNSF AMBIZIONE junior group of Dr. Peter Rupprecht at the Brain Research Institute, University of Zurich, Switzerland (starting 1.3.2023), is offering a PhD position to study neuronal circuits in the living mouse brain. The goal of the project is to understand how a single neuron learns about the effect of its action potentials. The project therefore addresses the problem of “credit assignment” in the brain experimentally.

The project will be conducted in the lab of and officially supervised by Prof. Fritjof Helmchen. Working on this project will therefore provide ample opportunity to collaborate with and learn from some of the best neuroscientists.

The methods to drive this project will be, among others, in vivo calcium imaging of neuronal activity, in vivo cell-attached recordings, closed-loop virtual realities, and advanced data analysis methods. You will be able to select your favorite sub-projects and techniques from this spectrum, and you will receive direct guidance from Peter Rupprecht to learn anything that is required.

Requirements

You have studied neuroscience, physics, biology, informatics or a related field. You are familiar with or eager to learn the techniques required for this project, including elements of microscopy, mouse handling, electrophysiology, programming and data analysis in Python and/or Matlab. Ideally, you enjoy solving problems, even if it takes long and the problems are difficult. Some programming skills or the strong desire to acquire such are important for any kind of project in this lab.

Application guideline

Please send your application including a CV and your transcript of records to Peter Rupprecht (rupprecht@hifo.uzh.ch). Please include a letter of motivation that covers the following aspects: What talents or previous experience of yours are possibly relevant to this project? What would you like to learn and achieve during your PhD? Do you enjoy writing in English? If you have coding or experimental experience, describe your most challenging previous program or project.

Starting date

The PhD position is available starting March 2023.


Post-publication review: The geometry of robustness in spiking neural networks

Selected paper: Calaim, Dehmelt, Gonçalves and Machens, The geometry of robustness in spiking neural networks, eLife (2022)

The main message: This theoretical neuroscience paper describes an intuitive way to think about the effect of single spikes in a network where the output and its error are closely monitored and used as feedback. This intuition is best illustrated by the video below. The video shows how the activity of a network tries to trace a target (in this case a sine wave). Each neuron defines a boundary (the differently colored lines in the video for the 2D case). If the output of the network exceeds one such boundary surface, a corrective spike of the respective neuron is emitted. The signal is therefore kept in the target zone by being bounced back once it becomes incorrect. This enables us, for example, to think about cases where single neurons are perturbed – when they are excited, inhibited or when they die – in an intuitive and geometric way in terms of single spikes.

Video 1 from Calaim, Dehmelt, Gonçalves and Machens, eLife (2022), reproduced here under the CC BY 4.0 license.

The strong points:

  • This visualization of the spiking dynamics in a neuronal network makes several points intuitively clear. For example, it shows why such networks are robust to deletion or inhibition of a single neuron but not to its excitation (this is illustrated by deformations of the bounding box).
  • More generally, the visualization makes it intuitive how neurons might be able to cooperate when taking advantage of such specific and fast feedback loops.
  • This sort of simulated network tends to produce unphysiological or at least undesired behavior (discussed as the “ping-pong” effect in the paper). This behavior becomes quite apparent and understandable thanks to the intuitive visualization.
  • Finally, it is simply really cool to use this visualization tool and to think about redundant neurons as those neurons where the boundary surfaces in this bounding box have similar locations/orientations.

The weak points:

  • The first weak point is that the presentation, despite the nice and intuitive video above, does not seem, from my perspective, accessible to everybody. Here, the task of the presented network is to approximate an externally supplied function (e.g., a sine function). While many other neuroscientists, including many theoretical neuroscientists, think about neuronal networks as devices to represent and transform information, this paper rather aims for a “control” approach. In my opinion, it is reasonable to think about the brain as a control device (to control the environment), but I feel that many readers might already be lost because they don’t understand why such an approach is taken. Only when reading the related work by the group (for an introduction, check out Boerlin et al., PLoS Comp Biol (2013) or Denève and Machens, Nat Neuro (2016)) does one understand that this property of function approximation could also be a useful model of the normal operation of the brain, where the task is to react to the environment and not to approximate a god-given function. I feel that the authors do not introduce their idea in an ideal way to reach a broader audience, especially since the general idea could in principle be understood by a broader audience.
  • It is not clear how the intuition about the effect of single spikes and the target zone in the bounding box would translate to other network architectures that do not follow this rather specific design. These network designs were described in previous work by Christian Machens and Sophie Denève and feature a specific excitatory/inhibitory connectivity. A core feature of such networks is that the voltages of neurons are largely synchronized but their spiking is not, resulting in a seemingly asynchronous firing regime as found in the cortex, despite the redundancy in the voltage behavior of single neurons.
    This limitation also came up during the review process. The authors try to discuss these questions in detail and to address some of them, as much as possible, with additional analyses and models. Check out the open reviews (which are always accessible for eLife papers)!

Conclusion: I like the paper because it comes up with a surprising, new and maybe even useful idea. It attempts to think about the consequences and circumstances of a single neuron’s spike, and how the effect of this single spike on other neurons can be understood. As a caveat, all these intuitions come with the assumption that there is a closely monitored error of the output signal, which in turn is fed back into the system in a very precise manner. This assumption might seem very strong at first. However, in order to understand things, we have to make some assumptions, and even from wrong assumptions we might be able to develop first intuitions about what is going on. I would be curious whether and how this bounding-box picture could be applied to other network architectures.


Self-supervised denoising of calcium imaging data

This blog post is about algorithms based on deep networks that denoise raw calcium imaging movies. More specifically, I will write about the difficulties of interpreting their outputs, and about how to address these limitations in future work. I will also share my own experience with denoising calcium imaging data from astrocytes in mice.

Introduction

Averages from calcium imaging or representative movies in publications often look great. In reality, however, two-photon calcium imaging is often limited primarily by low signal-to-noise ratios. There are many cases where recordings are dominated by shot noise to the extent that almost no structure is visible in a single frame. This can be due to a weakly expressing transgenic line; or deliberately low expression levels to avoid calcium buffering; or the fact that the microscope scans so fast across a large field of view or volume that it only picks up a few photons per neuron.

The advent of new algorithms to denoise calcium imaging movies

So, would it not be great to get rid of the noise in these noise-dominated movies using some magical deep learning algorithm? This seems to be the promise of a whole set of algorithms that were designed to denoise noisy images and recover the true signal using supervised [1] or, more recently, self-supervised deep learning [2,3]. Recently, there have also been a few applications of these algorithms to calcium imaging [4-6]. The following tweet by Jérôme Lecoq was the first time I saw the results of such algorithms:

The results look indeed very impressive. The implementations of these algorithms were subsequently published, by Lecoq et al. (DeepInterpolation) [4] and, independently and with a very similar approach, by Li et al. (DeepCAD) [5]. Since then, a few follow-ups have been published that compare performance between the two algorithms and improve the performance of DeepCAD [6], or that propose a sort of mixture between supervised and unsupervised approaches to denoise calcium imaging data [7].

Despite the great-looking results, I was a bit skeptical. Not because of a practical but rather because of a theoretical concern: there is no free lunch. Why should the output suddenly be less noisy than the input? How can the apparent information content increase? Let’s have a closer look at how the algorithm works to address this question.

Taking into account additional information to improve pixel value estimates

Let’s start very simple: everybody will agree that the measured intensity of a pixel is a good estimate of the true fluorescence or calcium concentration at this location. However, we can refine this estimate very easily using some background information. What kind of background information? For example, if a high intensity value occurs in a region without any labeling, with all other time points of this pixel having almost zero value, we can be rather certain that this outlier is due to a falsely detected photon and not due to a true fluorescence signal. If, however, all surrounding pixels had high intensity values and the pixel of interest did not, we could also correct our estimate of this pixel’s intensity value using (1) our experience about the spatial structures that we want to observe and (2) the information gained from the surrounding pixels. Refining our estimate of the pixel’s intensity therefore simply means taking into account a prior on what we expect the pixel’s intensity to be.

Methods based on self-supervised deep networks perform more or less such a procedure, and it is in my opinion a very reasonable way to obtain a better estimate of a pixel’s intensity. As a small difference, they only use the surrounding frames (adjacent in time) and not the pixel intensity itself (therefore lacking the Bayesian idea of improving an estimate using prior information). Despite this interesting small difference, it is clear that such denoising will – in principle – work. The network uses deep learning to gain knowledge about what to expect in a given context; practically speaking, the prior knowledge is contained in the network’s weights and extracted from a lot of raw data during learning. Using such a procedure, the estimate of the pixel’s intensity value will, probably under most conditions, be better than the raw intensity value.
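To make this principle explicit, here is a minimal, purely illustrative sketch of such a self-supervised training step in PyTorch: the network only sees the frames surrounding a target frame and is trained to predict the (noisy) target frame itself. Since the noise is independent across frames, only the underlying signal can be predicted. The tiny model and all parameters are placeholders and much simpler than the actual DeepInterpolation or DeepCAD architectures.

```python
import torch
import torch.nn as nn

# Self-supervised denoising sketch: predict the center frame from its temporal
# neighbors only. The noisy center frame serves as the training target; since its
# noise cannot be predicted from other frames, the network can only learn the signal.

n_context = 4                      # frames before and after the target frame
model = nn.Sequential(             # toy model: maps 2*n_context frames -> 1 frame
    nn.Conv2d(2 * n_context, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(movie, t):
    """movie: float tensor of shape (T, H, W), already normalized; t: target frame index."""
    context = torch.cat([movie[t - n_context:t], movie[t + 1:t + 1 + n_context]])
    pred = model(context.unsqueeze(0))              # (1, 1, H, W)
    target = movie[t].unsqueeze(0).unsqueeze(0)     # the noisy center frame as target
    loss = ((pred - target) ** 2).mean()            # MSE against the noisy frame
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```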

A side note: Computing the SNR from raw and denoised pixels

From this, it is also clear that neighboring pixels of a denoised movie are correlated, since their original values have influenced each other. It is therefore not justified to compare something like an SNR based on single pixels or single frames between raw and denoised data, because in one case (raw data) adjacent data points are truly independent measurements, while in the other (denoised data) they are not. Both DeepInterpolation [4] and DeepCAD [5] used such SNR measures, which reflect the visual look and feel but are, in my opinion, not a good quantification of how much signal and how much noise is in the data. But this just as a side note.

Denoising can make plausible point estimates that are however artifacts

However, there is a remaining real problem. Let’s take some time to understand it. Clearly, the estimated intensity value is only a point estimate. So we don’t know anything about the confidence of the network in inferring exactly this pixel intensity and not a different intensity value. Deep networks have often been shown to hallucinate familiar patterns when they are unconstrained by input. It is therefore not clear from looking at the results whether the network was very confident about all pixel intensities or whether it just made up something plausible because the input did not constrain the output sufficiently.

To make this rather vague concern a bit more concrete, here is an example of a calcium recording that I performed a few years ago (adult zebrafish forebrain, GCaMP6f). On the left side, you can see the full FOV, on the right side a zoom-in.
The movie shows first a version based on raw data, then the same raw data with a smoothing temporal average, and finally a version denoised using the DeepInterpolation algorithm [4]. To provide optimal conditions, I did not use a default network provided by the authors but retrained it on the same data to which I applied the algorithm afterwards.

First, the apparent denoising is impressive, and it is easy to imagine that an algorithm performing automated source extraction will perform better on the denoised movie than on the raw movie.
When we look more carefully and with more patience, however, a few odd things pop out. In particular, the neuronal structures seem to “wobble around” a bit. Here is a short extract of a zoom-in into the denoised movie:

Neurons are densely packed in this region, such that the GCaMP-filled cytoplasms generate an almost hexagonal pattern when sliced by the imaging plane. In the excerpt above, there is indeed a sort of hexagonal pattern in each frame. However, the cell boundaries shift around from frame to frame. This shifting of boundaries can be seen particularly well for the boundary between the right-most neuron and its neighbor to the left. From the perspective of an intelligent human observer, these shifting boundaries are obviously wrong – it is clear that the brain and its neurons do not move.

So, what happened? The network indeed inferred some structural pattern from the noise, but it arrived at different conclusions for different time points. The network made the most likely guess for each time point given the (little) information it was provided, but the inconsistency of the morphological pattern shows that the network made up something plausible that is, however, partially wrong.

Solution (1): Taking into account the overall time-averaged fluorescence

To fix this specific problem, the network could take into account not only surrounding pixels but also the overall mean fluorescence (averaged across all movie frames) in order to make an educated guess about the pixel intensity. As human observers, we do this automatically, and that’s why we can spot the artifact in the first place. With the information about the overall anatomy, the network would have the same prior as the human observer and would be able to produce outputs that do not include such artifacts.

Solution (2): Taking into account the uncertainty of point estimates

However, the more general problem that the network fills up uncertain situations with seemingly plausible but sometimes very wrong point estimates still persists. The only difference is that a human observer would then probably be unable to identify the generated artifacts.

A real solution to the problem is to properly deal with uncertainties (for reference, here is a review of uncertainty in deep networks). This means that the network needs to estimate not only the most likely intensity value for each pixel but also a confidence interval for each value. With a confidence interval for each pixel value, one could compute the confidence interval for, e.g., the time course of a neuron’s calcium ΔF/F averaged across an ROI. The computational problem here is that the error ranges of the pixels do not simply add up as independent errors, resulting in a standard error of the mean, since the values and confidence intervals of adjacent pixels depend on each other. I assume that a straightforward analytical treatment might be too tricky and that some sort of Monte Carlo-based simulation would work better here. This would make it possible to use the denoised movie to derive, e.g., a temporal ΔF/F trace of a neuron together with an uncertainty corridor of the estimated trace.
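As a rough illustration of what such a Monte Carlo treatment could look like, here is a sketch that assumes a hypothetical function sample_denoised_movie, which draws one plausible denoised movie from the model (e.g., via Monte Carlo dropout). Sampling entire movies preserves the correlations between adjacent pixels that a naive per-pixel error propagation would ignore.

```python
import numpy as np

# Monte Carlo sketch: draw many plausible denoised movies, compute the ROI trace
# for each, and summarize the spread as a pointwise confidence corridor.

def roi_trace_with_ci(sample_denoised_movie, roi_mask, n_samples=200, q=(2.5, 97.5)):
    """sample_denoised_movie() -> array (T, H, W); roi_mask: boolean array (H, W)."""
    traces = []
    for _ in range(n_samples):
        movie = sample_denoised_movie()                  # one plausible denoised movie
        traces.append(movie[:, roi_mask].mean(axis=1))   # ROI-averaged time course
    traces = np.stack(traces)                            # (n_samples, T)
    lower, upper = np.percentile(traces, q, axis=0)
    return traces.mean(axis=0), lower, upper
```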

To sum it up, at this point it seems that there is not only a need to develop tools that provide faster and more beautiful denoised images, but even more so a need for procedures that properly deal with the uncertainties of estimates reflecting an output that is not sufficiently constrained by the input. Without such tools, analyses based on denoised data must be carefully inspected for susceptibility to such artifacts.

Practical aspects: Using denoising for astrocytic calcium imaging

In a recent preprint [8], I used such methods (DeepInterpolation [4]) to denoise calcium recordings from hippocampal astrocytes. Astrocytes are rather sensitive to laser-induced heating, and I therefore applied low excitation power, resulting in relatively noisy raw recordings. One main goal of the study was to analyze not only ROIs drawn around somata but also the spatio-temporal calcium dynamics of somatic and distal compartments, ideally with single-pixel precision.

To be able to quantify such patterns, it was essential to denoise the raw movie (see Figure 6e and supplemental Figure S8 in [8]). Briefly, it worked really nicely:

It was however crucial to carefully look at both raw and denoised data to understand what was going on, and to consider potential artifacts with respect to downstream analyses. In my case, it helped that the central result of the paper, that calcium signals propagated from distal to proximal compartments under certain conditions, was based on analyses averaged over time (due to the use of correlation functions). Such averaging is likely to undo any harm introduced by small artifacts generated by denoising. In addition, I carefully looked at raw together with denoised data and thought about possible artifacts that might be introduced by denoising.

The second aspect to note is that the algorithm was rather difficult to use, required a GPU with large memory and even then was very slow. This has improved a bit since then, but the hardware requirements are still high. An alternative algorithm [5] seems to have slightly lower hardware requirements, and the authors of [5] also developed a modified version of their algorithm that seems to be much faster, at least for inference [6].

Outlook

The development of methods to denoise imaging data is a very interesting field, and I look forward to seeing more work in this direction. Specifically, I hope that the two possible developments mentioned above (taking into account the time-averaged fluorescence and dealing properly with uncertainty) will be properly explored by other groups.

Researchers who apply denoising techniques are themselves often very well aware of potential pitfalls and hallucinations generated by U-Nets or other related techniques. For example, Laine et al. [9] end their review of deep learning-based denoising techniques with this note of caution:

“Therefore, we do not recommend, at this stage, performing intensity-based quantification on denoised images but rather to go back to the raw [images] as much as possible to avoid artefacts.”

By “quantification”, they do not refer to the computation of ΔF/F but rather to studies that quantify, e.g., localized protein expression in cells. But should the computation of ΔF/F values be held to less strict standards?

There are a few cases where potential problems and artifacts of applying denoising methods to calcium imaging data are immediately obvious. Self-supervised denoising uses the raw data to learn the most likely intensity value given the surrounding context. As a consequence, there will be a tendency to suppress outliers. This is not bad by itself, because such outliers are most likely just noise. But there might also be biologically relevant outliers: rare local calcium events on a small branch of a dendritic tree; or unusually shaped calcium events due to intracellularly recruited calcium; or unexpected decoupling of two adjacent neurons that are otherwise strongly coupled by electrical synapses. If the raw SNR is not high enough, the network will regard such events as unlikely to be true and discard them in favor of something more normal.

As always, it is the experimenter who is responsible for considering such concerns. To this end, some basic understanding of the available tools and their limitations is required. Hopefully this blog post helps to make a step in this direction!

References

  1. Weigert, M., Schmidt, U., Boothe, T., Müller, A., Dibrov, A., Jain, A., Wilhelm, B., Schmidt, D., Broaddus, C., Culley, S. and Rocha-Martins, M. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nature Methods 15(12). 2018.
  2. Krull, A., Buchholz, T.O. and Jug, F. Noise2void-learning denoising from single noisy images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
  3. Lehtinen, J., Munkberg, J., Hasselgren, J., Laine, S., Karras, T., Aittala, M. and Aila, T. Noise2Noise: Learning image restoration without clean data. arXiv. 2019.
  4. Lecoq, J., Oliver, M., Siegle, J.H., Orlova, N., Ledochowitsch, P. and Koch, C. Removing independent noise in systems neuroscience data using DeepInterpolation. Nature Methods 18(11). 2021.
  5. Li, X., Zhang, G., Wu, J., Zhang, Y., Zhao, Z., Lin, X., Qiao, H., Xie, H., Wang, H., Fang, L. and Dai, Q. Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising. Nature Methods 18(11). 2021.
  6. Li, X., Li, Y., Zhou, Y., Wu, J., Zhao, Z., Fan, J., Deng, F., Wu, Z., Xiao, G., He, J. and Zhang, Y. Real-time denoising of fluorescence time-lapse imaging enables high-sensitivity observations of biological dynamics beyond the shot-noise limit. bioRxiv. 2022.
  7. Chaudhary, S., Moon, S. and Lu, H. Fast, Efficient, and Accurate Neuro-Imaging Denoising via Deep Learning. bioRxiv / [Update September 2022: Nature Communications]. 2022.
  8. Rupprecht, P., Lewis, C., Helmchen, F. Centripetal integration of past events by hippocampal astrocytes. bioRxiv. 2022.
  9. Laine, R.F., Jacquemet, G. and Krull, A. Imaging in focus: an introduction to denoising bioimages in the era of deep learning. The International Journal of Biochemistry & Cell Biology 140. 2021.

Video introduction to CASCADE

I recently recorded a short video talk about CASCADE, our supervised method to infer spike rates from calcium imaging data (Github / paper / preprint).

The video includes short tutorials of our Colab Notebooks, which can be used to explore the ground truth database and to test the algorithm in the cloud without any installation.

Check it out:

Please note that you can also increase the playback speed of the video. There is also a reference in the video to a previous blog post on noise levels.


Simple geometrical optics to understand and design point-scanning microscopes

Custom-built microscopes have become more and more sophisticated over the last years, providing a larger FOV, better resolution through some flavor of adaptive optics, or simply more neurons recorded simultaneously. Professional optical engineers are hired to design the ideal lens combination or the objectives with Zemax, a software package that simulates the propagation of light through lens systems based on wave optics.

Unfortunately, these complex design optimizations might discourage users from trying to understand their microscopes themselves. In this blog post, I will give a few examples of how the optical paths of microscopes can be understood and, to some extent, also designed using simple geometrical optics. Geometrical optics, or ray optics, is accessible to anybody who is willing to understand a small equation or two.

Three examples: (1) How to compute the beam size at the back focal plane of the objective. (2) How to compute the field of view size of a point scanning microscope. (3) How to compute the axial focus shift using a remotely positioned tunable lens.

All examples are based on a typical point scanning microscope as used for two-photon microscopy.

(1) How to compute the beam size at the back focal plane of the objective

The beam size at the back focal plane is the limiting factor for the resolution. The resolution is determined by the numerical aperture (NA) focusing onto the sample, and a smaller beam diameter at the “back side” of the objective can result in an effectively lower NA compared to what is possible with the same objective and a larger beam diameter.

In general, the goal is therefore to overfill the back focal aperture with the beam (check out this review if you want to know more). However, especially for video-rate two-photon microscopy, one of the scan mirrors is a resonant scanner, which usually comes with a usable aperture of ca. 5 mm. Often, there are only two lenses between the scan mirror and the objective: the scan lens and the tube lens. The two black lines illustrate the “boundaries” of the beam:

If the beam diameter at the scan mirror is 5 mm, the beam diameter dBFA at the objective’s back aperture will be, using simple trigonometry:

d_{BFA} = f_t/f_s \cdot 5\,\mathrm{mm}

Therefore, for a typical ratio ft:fs of 4:1 or 3:1, you will get a beam size at the objective’s back aperture of 20 mm or 15 mm, respectively. This is enough to overfill a typical 40x objective (back aperture of 8-12 mm, depending on the objective’s NA), but barely enough for a 16x objective or an objective with even lower magnification and reasonably high NA (e.g., for NA = 0.8, the back aperture is around 20 mm or larger).

Therefore, when you design a beam path or buy a microscope, it is important to plan ahead which objectives you are going to use with it. This simple equation tells you how large the beam will be at the location of the objective’s back aperture.

(2) How to compute the field of view size of a point scanning microscope

Another very simple calculation is to compute the expected size of the field of view (FOV) of your microscope design. This calculation is based on the very same lenses as above, plus the objective. In this case, the different (colored) rays illustrate different deflections by the mirror, not the boundaries of the beam as above:

The deflection angle range of the beam, α, is de-magnified by the scan lens/tube lens system to the smaller angle β and then propagates to the objective. The top-bottom spread of the scanned beams on the left indicates the size of the FOV. Using simple trigonometry based on the angles and the distances of the lenses to their focal points, one can state (but see also Fabian’s comment below the blog post):

\tan(\alpha) = d/f_s

\tan(\beta) = d/f_t

\tan(\beta) = s_{FOV}/f_o

From which one can derive the expected size of the FOV:

s_{FOV} = \tan(\beta) \cdot f_o = \tan(\alpha) \cdot f_o \cdot f_s/f_t

Interestingly, the FOV size depends linearly on the ratio fs/ft. As we have seen above, the beam diameter at the back aperture of the objective depends linearly on ft/fs, the inverse. This results in a trade-off: when you try to overfill the back aperture by increasing ft/fs, you automatically decrease the maximal FOV size. It is therefore important to know beforehand what is more important, high NA (and therefore resolution) or a large FOV. This is not exclusively, but to some extent already, determined by the choice of scan lens and tube lens.

To give some real numbers, the deflection angle of a typical resonant scanner is 26°. Let’s say we have ft/fs = 4. The “effective focal length” of the objective is often neither obvious nor indicated. As a rule of thumb, the magnification (e.g., 16x from Nikon) together with the appropriate reference tube lens (e.g., 200 mm) can be used to compute the focal length as 200 mm/16 = 12.5 mm. Where does the 200 mm come from? This value is company-specific. Most companies use 200 mm as the standard tube lens focal length, while for example Olympus uses 180 mm as default. (As a side-effect, a 20x objective from Olympus has a shorter focal length than a 20x objective from Nikon.)

Together, we have fo=12.5 mm, α=26°, ft/fs=4, arriving at sFOV=1.5 mm as an estimate for the maximally achievable FOV using these components.
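These two back-of-the-envelope calculations are easily reproduced in a few lines of Python, using the example values from the text:

```python
import numpy as np

# Back aperture beam size and FOV size from the two formulas above, using the
# example values from the text (5 mm beam at the resonant scanner, 26 deg optical
# scan angle, a 16x objective referenced to a 200 mm tube lens, f_t/f_s = 4).

d_scanner = 5.0            # beam diameter at the scan mirror (mm)
alpha_deg = 26.0           # optical scan angle of the resonant scanner (deg)
ft_over_fs = 4.0           # tube lens / scan lens focal length ratio
f_objective = 200.0 / 16   # effective focal length of a 16x objective (mm)

d_bfa = ft_over_fs * d_scanner                                     # beam at back aperture
s_fov = np.tan(np.radians(alpha_deg)) * f_objective / ft_over_fs   # FOV size

print(f"beam at back aperture: {d_bfa:.1f} mm")   # 20.0 mm
print(f"maximal FOV size:      {s_fov:.2f} mm")   # ~1.52 mm
```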

As another side note, when you look up the maximal scan angle of a scanner, there is often confusion between the “mechanical” and the “optical” scan angle. When a scanner moves mechanically over 10°, the beam is optically deflected by twice that amount, 20°. For FOV calculations, the optical scan angle should be used, of course.

(3) How to compute the axial focus shift using a remotely positioned tunable lens

The final, third example of geometrical optics is a bit more involved, but it might be interesting also for less ambitious microscope tinkerers to get the gist of it.

A few weeks ago, together with PhD student Gwen Schoenfeld in the lab of Fritjof Helmchen, I wanted to re-design a two-photon microscope in order to enable remote focusing with an electrically tunable lens (ETL). Here’s the basic microscope “design” that I started with:

As you can see, the objective, scan lens and tube lens are arranged as simply as in the previous examples. Behind the scan lens (fs), there is only a slow galvo scanner (for the y-axis), while the fast resonant scanner (for the x-axis) sits behind a 1.5:1 relay. There are two ideas behind this relay configuration. First, a relay between the two scan mirrors makes it possible to position both of them more precisely at the focus of the scan lenses (there are also more advanced relay systems, but this is a science by itself). Second, the relay system here is magnifying and enlarges the beam size from 5 mm (at the resonant scanner) to 7.5 mm (at the slow galvo scanner). In the end, this results in a smaller FOV along the x-axis but in a larger beam size at the back of the objective and therefore better resolution.

To insert a remote tunable lens into this system, it had to be placed in a plane optically conjugate to the back focal plane of the objective. This required the addition of yet another relay. This time, we chose a de-magnifying relay system: the tunable lens has a larger free aperture than the resonant scanner, so it made sense to use the full aperture. Also, as you will see below, this de-magnifying relay considerably increases the resulting z-scanning range of the tunable lens. For the tunable lens itself, we chose one combined with a negative offset lens, so that in its default state the combination behaves as if it were not there (at least to a first approximation).

Now, before choosing the parts, I wanted to know which z-scanning range would be expected for a given combination of parts. My idea, although there might be simpler ways to get the answer, was to compute the complex beam parameter of the laser beam after passing through the entire lens system. The beam waist and therefore the focus can then be computed as the point where the real part of the beam parameter is zero (check out the Wikipedia article for more details).

To calculate the complex beam parameter after passing through the system, I used geometrical optics, more precisely ABCD optics. ABCD optics is a formalism that expresses geometrical optics with 2×2 matrices: a lens or a stretch of free propagation space is represented by a simple matrix, and the beam propagation (distance from the optical axis and propagation angle) is computed by multiplying all the matrices. For the system above, this means multiplying 16 matrices, which is not difficult but tedious to do by hand. The perfect tool for this is Wolfram’s Mathematica, which is not free but can be tested as a free trial for 30 days per e-mail address. All the details of this calculation are in a Mathematica notebook uploaded to Github.
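To give an idea of the formalism without opening the Mathematica notebook, here is a minimal Python sketch. It is not the 16-matrix calculation of the actual microscope, just a single made-up thin lens and a made-up beam waist, to show how ABCD matrices are multiplied and how they transform the complex beam parameter.

import numpy as np

def free_space(d):
    """ABCD matrix for free-space propagation over a distance d (mm)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """ABCD matrix for a thin lens with focal length f (mm)."""
    return np.array([[1.0, 0.0], [-1.0/f, 1.0]])

def propagate_q(q, *elements):
    """Apply ABCD elements (in the order the beam encounters them) to the
    complex beam parameter q, using q' = (A*q + B)/(C*q + D)."""
    M = np.eye(2)
    for element in elements:
        M = element @ M             # later elements multiply from the left
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return (A*q + B) / (C*q + D)

# a nearly collimated beam (waist radius 2.5 mm, wavelength ~1 um) hits a 100-mm lens;
# the focus lies where the real part of q returns to zero
wavelength = 1.0e-3                     # mm
z_R = np.pi * 2.5**2 / wavelength       # Rayleigh range in mm
q_in = 1j * z_R                         # beam waist located at the lens

for d in (95.0, 100.0, 105.0):
    q_out = propagate_q(q_in, thin_lens(100.0), free_space(d))
    print(f"d = {d:5.1f} mm, Re(q) = {q_out.real:7.3f} mm")
# Re(q) crosses zero at d = f = 100 mm, i.e. the beam waist (focus) sits in the focal plane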

Briefly, the result of this rather lengthy calculation is simple, with the main actors being the axial z-shift in the sample (z) and the effective focal length of the tunable lens (fETL):

z = -\frac{f_o^2 \cdot f_s^2 \cdot f_{r1}^2 \cdot f_{r3}^2}{f_{ETL} \cdot f_t^2 \cdot f_{r2}^2 \cdot f_{r4}^2}

Using this formula and a set of candidate lenses, it is then possible to compute the z-scanning range:

Here, the blue trace uses the formula above. For the red trace, I displaced the ETL slightly from the conjugate position, which makes the transfer function more linear but also reduces the z-range, and which shows the power and flexibility of the ABCD approach in Mathematica.

Of course, this calculation does not give the resolution for any of these configurations. To compute the actual resolution of an optical system, you have to work with real (not perfect) lenses, using ZEMAX or similar software that can import and simulate lens data from e.g. Thorlabs. But it is a nice playground to develop an intuition. For example, from the equation it is clear that the z-range depends quadratically, not linearly, on the magnification of the relay lenses. The objective focal length also enters the equation squared. Therefore, if you have such a remote scanning system where a 16x objective results in 200 μm of z-range, switching to a 40x objective will reduce the z-range to only 32 μm!
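As a small illustration of this scaling, the formula can be evaluated in a few lines of Python. The relay, scan and tube lens focal lengths below are made-up example values, not the parts we actually chose; the point is only the quadratic dependence on the objective focal length.

import numpy as np

def z_shift(f_ETL, f_o, f_s, f_t, f_r1, f_r2, f_r3, f_r4):
    """Axial focus shift in the sample for a given ETL focal length (all in mm),
    following the thin-lens formula derived above."""
    return -(f_o**2 * f_s**2 * f_r1**2 * f_r3**2) / (f_ETL * f_t**2 * f_r2**2 * f_r4**2)

# hypothetical example lenses (not the parts of the actual microscope)
params = dict(f_s=50.0, f_t=200.0, f_r1=100.0, f_r2=150.0, f_r3=150.0, f_r4=100.0)

# ETL tuned from slightly diverging to slightly converging (focal lengths in mm)
f_ETL = np.array([-2000.0, -1000.0, -500.0, 500.0, 1000.0, 2000.0])

for f_o, label in [(200/16, "16x"), (200/40, "40x")]:
    z_um = z_shift(f_ETL, f_o=f_o, **params) * 1e3   # mm -> um
    print(f"{label} objective: z-range ~ {z_um.max() - z_um.min():.0f} um")
# switching from 16x (f_o = 12.5 mm) to 40x (f_o = 5 mm) shrinks the
# z-range by a factor of (12.5/5)^2 = 6.25, as stated above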

This third example for the use of geometrical optics goes a bit deeper, and the complex beam parameter is, to be honest, not really geometrical optics but rather Gaussian optics (which can however use the power of geometrical ABCD optics, for complicated reasons).

Back in 2016, I used these mathematical methods to do some calculations for our paper on remote z-scanning with a voice coil motor, and they helped me a lot to perform clean analyses without resorting to ZEMAX.

Apart from that, the calculations of beam size and FOV size are very simple, even for beginners, and a nice starting point for a better understanding of one’s point-scanning microscope.


Public peer review files

Peer-review is probably the most obscure part of the publication of scientific results. In this blog post, I would like to make the point that the best way to learn about it – except by being directly involved – is to read public peer review files. In addition, I will recommend some interesting or surprising peer review files, mostly for systems neuroscience, but also for some optics papers.

Public peer review files should be the standard

Peer review takes place before the official “publication” in a peer-reviewed journal and is therefore usually not accessible to the reader. Over the last years, this practice has changed, and more and more journals now offer the reviews and rebuttal letters as supplementary files. This is done, for example, by the journal eLife, but also by Nature Communications, sometimes by the SfN journals, and by Nature. For some journals, like Nature, the authors can opt out of the publication of the peer review file (here is Nature’s policy). But, honestly, if you are not willing to share the reviews publicly, what do you want to hide? I think it should become a mandatory standard to share the reviews, with only specific reasons justifying an opt-out. (Update 2022-02-14: a Twitter discussion on this topic.)

What to learn from peer review files

As a young researcher, for example a PhD student, you are rarely involved in the review process, which therefore remains a black box. For me, it was fascinating to read my first peer review files on eLife, maybe 5 years ago. It felt like the entire smooth surface of the paper started to crumble and give way to a richer and more nuanced point of view. Looking back at this paper, the nuanced view was also somewhat included in the paper itself, but in such a smooth manner that it was difficult to extract without absolute expert knowledge.

Nowadays, I rarely read eLife papers until the peer review section is also online (which comes a couple of days after the preliminary pdf). The reviewer comments provide an additional point of view that is very helpful, in particular when I cannot fully judge the paper myself.

Additionally, reading such review files helps to write better manuscripts, better rebuttal letters, and also better reviews. Gaining experience from these files also prepares you a bit for the sometimes very difficult moment of receiving the reviews of your own work. Plus, reading those reviews makes the entire process a bit more transparent. When I wrote my first reviews for journals, I had seen only three or four reviews of my own co-authored papers, but I had seen many more as public review files.

Let’s look into some examples that show what can happen during peer review. Paper links are in the title, links to the review files/sections thereafter.

Toroidal topology of grid cell activity (Review file)

When the preprint of this paper by Gardner et al. from the Moser lab appeared, I put it on my reading list, but never actually read it fully because I did not feel on top of the vast literature about continuous attractor models, and I did not even know whether a “toroidal” geometry of the activity space would be a surprising result or not.

Checking the peer review file after publication at Nature provided exactly the entry point that I had been missing. In retrospect, all the information needed is also in the paper, but the more direct and less formal language of the reviews took me by the hand and showed me directly what the paper is about. The summaries at the beginning of each review provide a very useful second perspective on the main messages of the paper.

Deep physical neural networks (Review file)

A few weeks ago, I noticed an interesting article published at Nature on “deep physical neural networks”. I checked the authors’ Twitter thread on the paper and found the topic intriguing, but slightly beyond my area of expertise. The review file provided exactly the missing context and the critical external opinions that I needed to form a somewhat less vague impression of the specific advance made by the paper. Really helpful!

Place cells respond to moving bars in immobile rats (Review file)

This review file contains an entire story by itself. In the first round of review, reviewer #1 was rather critical, while the other two reviewers were almost convinced already. The authors, in an 8-month revision period, finally managed to address most points brought up by the reviewers very decently. They do this, as has apparently become common practice for rebuttal letters to high-impact journals, in a very long and detailed letter (the entire review file is 44 pages).

But after this round of reviews and rebuttals, suddenly, reviewer #3 changes his/her opinion entirely:

“Unfortunately my evaluation of this paper has changed since the first submission due to comments from the other reviewers and because of literature I have discovered since then. This paper presents a set of reasonably well performed and analyzed experiments, but I no longer think the main results are novel or surprising. I therefore do not recommend publication in Nature and think this paper is better suited for a specialist journal.”

This is just the beginning of a long comment on why the reviewer thinks the paper is no longer novel enough. This is of course a nightmare for the authors. In the end, the authors do their best to address these concerns about novelty. Reviewer #3 remains unconvinced. The editor decides to ignore these concerns and goes with the recommendation of the other reviewers to publish the manuscript. Check it out yourself to form your own opinion on this discussion!

Non-telecentric two-photon microscopy (Review file)

A much wilder story is hidden in the peer review file of this optics paper from Tom Baden’s lab, which was finally published at Nature Communications. The first set of reviews at Nat Comm is still pretty normal, but then something goes wrong. Reviewer #4, who is – very obviously – highly competent but also a bit obsessed with details and annoyed by imprecision, has a few things to complain about, mostly about a few relatively small unsubstantiated claims and some minor errors that do not affect the main idea of the manuscript. However, the authors do not agree with this opinion, and a long and slowly escalating discussion between an annoyed reviewer and frustrated authors evolves. Check it out yourself. If you don’t get stomach pain while reading, you have my full admiration. At some point, reviewer #4 writes:

“In the last round of review, I wrote detailed comments, color-rebuttal, in a 16-page PDF, with the hope that they would be of help to the authors to make the manuscript better. The response I received on this round, surprisingly, is only 5 pages, and the authors reluctantly chose, on purpose or randomly, 5 comments to address, and ignored all other comments I curated. PLEASE RE-WRITE your response by addressing all comments I gave last time.”

Upon “editorial guidance”, the authors refrain from doing so. It all ends with mutual passive aggression (“No further comment” from both sides) – and acceptance for publication as an article.

Read it yourself to see how crazy and tiring peer review can be, and consider yourself how this situation could have been avoided by the authors or the reviewers. However, in the end, this review file is also a contribution to the scientific discussion (e.g., about proper PSF measurements) and therefore valuable by itself. It is a painful document, but also useful and full of insights.

Three-photon microscopy (Paper 1, Reviews; Paper 2, Reviews)

Three-photon microscopy is still a relatively young field, and when these two papers came out in Nature Communications and eLife, respectively, I was very happy to be provided with the additional context and details in the peer review files. I found the discussion of potential concerns for three-photon imaging in the Nat Comm paper (paper 1) especially interesting.

Somato-dendritic coupling of V1 L5 neurons (Review file)

A few years ago, I had covered a preprint by Francioni et al. from the Rochefort lab on my blog. This study was later published at eLife, and since I already liked the work, I was very curious about the comments of the reviewers, their concerns, and the authors’ replies. It is nice to get additional insights into such interesting studies!

Juxtacellular opto-tagging of CA1 neurons (Review file)

The review file of this beautiful methods paper from the Burgalossi lab tells an interesting story. Apparently, the authors had included an additional experimental sub-study based on c-Fos in the paper, but the reviewers were not convinced by the results. They therefore suggested – very surprising to me! – the acceptance of the paper for publication, but only after deletion of this subsection. I would not have guessed that such an unexpected and helpful consensus could be reached during the review process. This was probably helped by the fact that at eLife it is common practice that editors and reviewers discuss their reviews with one another.

Nonlinear transient amplification (Review file)

Purely theoretical (neuroscience) papers are often challenging because it is difficult to fully judge their novelty, even when the concepts, ideas and equations are transparent. This paper by Wu and Zenke is conceptually close to what I studied experimentally during my PhD (paper link in case you’re interested), so I was happy that this paper got published at eLife, with the reviews publicly available. A very useful secondary perspective!

In vivo calcium imaging in CA3 (Review file)

This is – so far – the only paper with a public review file in which I have been involved as an author. I wrote some sections of the rebuttal letter, actually without knowing that the reviews and rebuttals would be openly available afterwards. Unfortunately, the journal (eNeuro) messed up the formatting so badly that the review file is almost unreadable (which is a pity, because our rebuttal letter was very nicely written). This mess-up shows that there is still some progress to be made, also in terms of infrastructure.

In general, I hope that public review files will become more common in the future, to the point that non-public review files become a thing of the past entirely. Public reviews make the editorial process more transparent, they open up the discussion behind the paper, they lower the barrier for junior researchers with little peer review experience, and they do not have, to my understanding, any major negative side-effects.


Annual report of my intuition about the brain (2021)

How does the brain work and how can we understand it? I want to make it a habit to report, at the end of each year, some of the thoughts about the brain that marked me most during the past twelve months – with the hope to advance and structure the progress in the part of my understanding of the brain that is not immediately reflected in journal publications. Enjoy the read! And check out previous year-end write-ups: 2018, 2019, 2020, 2021, 2022.

During the last year, I have continued to work on the ideas described in previous year-end write-ups, resulting in a project proposal that is currently under evaluation. I will use this year’s write-up to talk about something different, although related: a recent book by Peter Robin Hiesinger, The Self-Assembling Brain.

Hiesinger, based in Berlin, works in the field of developmental neurobiology. However, this book is rather a cross-over between multiple disciplines, ranging from developmental neurobiology and circuit neuroscience to artificial intelligence and robotics, including many side-branches of these disciplines. Hiesinger masterfully assembles the perspectives of the different fields around his own points of interest. For example, his introductory discussion of the emergence of the field of artificial intelligence in the 1950s is one of the most insightful accounts that I have read about this period. He tells the stories of how key figures like von Neumann, Minsky, Rosenblatt or McCarthy, with their relationships and personalities, influenced the further development of the field.

The main hypothesis of Hiesinger’s book is that the genetic code does not encode the endpoint of the system (e.g., the map of brain areas, the default state network, thalamocortical loops, interneuron connectivity, etc.). According to him, and I think that most neuroscientists would agree, the neuronal circuits of the brain are not directly encoded in the genetic code. Instead, the simple genetic code needs to unfold in time in order to generate the complex brain. More importantly, it is, according to Hiesinger, necessary to actually run the code in order to find out what the endpoint of the system is. Let’s pick two analogies brought up in the book to illustrate this unfolding idea.

First, in the preface, Hiesinger describes how an alien unfamiliar with life on earth finds an apple seed. Upon analysis, the alien realizes that the seed contains complex and intricate genetic code, and it starts to see beauty and meaning in these patterns. However, an analysis based on the structural content alone would not enable the alien to predict the purpose of the apple seed. This only becomes apparent through the development (unfolding) of the seed into an apple tree. Unfolding therefore is the addition of both time and energy to the seed.

Second, Hiesinger connects the unfolding idea with the field of cellular automata, in particular with the early work of Stephen Wolfram, a very influential but also controversial personality of complexity research, and his cellular automaton named rule 110. Rule 110 is a very simple rule (described in this Wikipedia article) that is applied to a row of 1s and 0s and results in another binary row. The resulting row is again subject to rule 110, and so on, leading to a two-dimensional pattern as computed here in Matlab:

The pattern is surprisingly complex, despite the simplicity of the rule. For example, how can one explain the large solitary black triangle in the middle right? Or the vertical line of equally sized triangles in the center that ends so abruptly? The answers are not obvious. These examples show that a very simple rule can lead to very complex patterns. From Hiesinger’s point of view, the important point is that the endpoint of the system, let’s say line 267, cannot be derived from the rule – unless it is developed (unfolded) for exactly 267 iterations. Hiesinger believes that this analogy can be transferred to the relationship between the genetic code and the architecture of the brain.
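If you want to play with this yourself, here is a minimal Python sketch of rule 110. It is not the Matlab script used for the figure above; the random initial row and the periodic boundaries are my own choices, so the exact pattern will differ.

import numpy as np
import matplotlib.pyplot as plt

def run_rule110(n_cells=400, n_steps=300, seed=0):
    """Iterate Wolfram's rule 110 on a random binary row (periodic boundaries)."""
    # look-up table: new state for each 3-cell neighborhood, encoded as a number 0..7
    lut = np.array([(110 >> i) & 1 for i in range(8)], dtype=np.uint8)
    rng = np.random.default_rng(seed)
    grid = np.zeros((n_steps, n_cells), dtype=np.uint8)
    grid[0] = rng.integers(0, 2, n_cells)
    for t in range(1, n_steps):
        prev = grid[t - 1]
        neighborhood = 4 * np.roll(prev, 1) + 2 * prev + np.roll(prev, -1)
        grid[t] = lut[neighborhood]
    return grid

plt.imshow(run_rule110(), cmap='gray_r', interpolation='nearest')
plt.xlabel('cell index')
plt.ylabel('iteration (time)')
plt.show()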

The rest of Hiesinger’s book discusses the implications of this concept. As a side-effect, Hiesinger illustrates how complex the genome is in comparison with the simple rule 110 automaton. Not only is the code richer and more complex, but it is also, due to transcription factor cascades that include feedback loops, a system of rules in which the rules themselves, unlike rule 110, change over time with development. Therefore, according to Hiesinger, the classical research in developmental biology that tries to map single genes (or a few genes) onto a specific function is misguided. He convincingly argues that the examples of such relationships that have been found and serve as “classics” of the field (e.g., genes coding for cell adhesion molecules involved in establishing synaptic specificity) are probably the exception rather than the rule.

The implication of the unfolding hypothesis for research on artificial intelligence is, interestingly, very similar: to stop treating intelligent systems like engineered systems, where the rules can be fully designed. Since the connection between the generative rules and the resulting endpoint cannot be understood unless their unfolding in time is observed, Hiesinger is in favor of research that embraces this limitation. He suggests building models based on a to-be-optimized (“genetic”) code and, letting go of full control, letting them unfold in time to generate an artificial intelligence. Of course, this idea is reminiscent of the existing field of evolutionary algorithms. However, in classic evolutionary algorithms, the evolving properties of the code are more or less directly mapped onto properties of the network or the agent. If I understood the book correctly, it would be in Hiesinger’s spirit to make this mapping more indirect through developmental steps that allow for higher complexity, even though this would also obfuscate the mechanistic connection between rules and models.

Overall, I find Hiesinger’s approach interesting. He shows mastery of other fields as well, he makes a convincing case that the idea of the unfolding code, the self-assembling brain, is reasonable, and he brings up examples of research that goes in this direction. However, as a note of caution to myself, accepting the idea of self-assembly seemed a bit like giving in when faced with complexity. There is a long history of complexity research that agreed on the fact that things are too complex to be understood. Giving in resulted in giving vague names to the complex phenomena, which seemed to explain away the unknown but in reality only gave it a name. For example, the concepts of emergence, autopoiesis or the free energy principle are, in my opinion, relatively abstract and non-concrete concepts that contributed to the halting of effective research by preventing incremental progress on more comprehensible questions. I get similar vibes when Hiesinger states that the connections between the self-organizing rules and the resulting product are too complex to be understood and require unfolding in time. The conclusion of this statement is either that everything is solved, because the final explanation is the unfolding in time of a code that cannot be understood; or that nothing can be solved, because it is too complex. In both cases, there seems to be some sort of logical dead end. But this is just a note of caution to myself.

So, what is the use of the unfolding hypothesis about the organization and self-assembly of the brain? I think it is useful because it might help guide future efforts. I agree with Hiesinger that the field of “artificial intelligence” should shift its focus towards self-organized and growing neuronal networks. In my opinion, work focusing on evolutionary algorithms, actor-based reinforcement learning (e.g., something called neuroevolution), neural architecture search or, more generally, AutoML goes in the right direction. Right now it seems a long shot to say this, but my guess is that these forms of artificial neuronal networks will become dominant within 10 years, potentially replacing artificial neuronal networks based on backpropagation. – After finishing this write-up, I came across a blog post by Sebastian Risi that is a good starting point with up-to-date references on self-assembling algorithms from the perspective of computer science and machine learning – check it out if you want to know more.

For neurobiology, on the other hand, the unfolding hypothesis means that an understanding of the brain requires an understanding of its self-assembly. Self-assembly can happen, as Hiesinger stresses, during development, but it can also happen in the developed circuit through neuronal plasticity (synaptic plasticity on short and long time scales, as well as intrinsic plasticity). I have written about this self-organizing aspect of neuronal circuits in last year’s write-up. Beyond that, if we were to accept the unfolding hypothesis as central to the organization of the brain, we would also be pressured to drop some of the beautiful models of the brain that are based on engineering concepts like modularity: for example, the idea of the cortical column, the canonical microcircuit, or the concept of segregated neuronal cell types. All these concepts have obviously been very useful frameworks or hypotheses for advancing our understanding of the brain, but if the unfolding of the brain is indeed the main concept of its assembly, these engineering concepts are unlikely (although not impossible) to turn out to be true.

It is possible that most of the ideas are already contained in the first few pages, and the rest of the book is less dense and often feels a bit redundant. But especially the historical perspective at the beginning and also some of the later discussions are very interesting. Language-wise, the book would have benefitted from a bit more interference by the editor to avoid unnatural-sounding sentences, especially during the first couple of pages. But this is only a minor drawback of an otherwise clear and nice presentation.

The fictional characters of a systems neuroscientist (Alfred), an AI researcher (Pramesh), a developmental biologist (Minda) and a robotics researcher (Aki) discuss how developmental growth could be implemented for artificial neuronal networks.

The book is structured into ten “seminars”, each of which is a slightly confusing mix of book chapter and lecture style. Each of the “seminars” is accompanied by a staged discussion between the four fictional characters: a developmental biologist, an AI researcher, a circuit neuroscientist and a robotics engineer (see the photo above). In theory, this is a great idea. In practice, it works only half of the time, and the book loses a bit of its natural flow because a clear direction is sometimes missing. However, these small drawbacks are acceptable because the main ideas are interesting and enthusiastically presented.

Altogether, Hiesinger’s book is worth the time to read it, and I can recommend it to anybody interested in the intersection of biological brains, artificial neuronal networks and self-organized systems.


Large-scale calcium imaging & noise levels

Calcium imaging based on two-photon scanning microscopy is a standard method to record the activity of neurons in the living brain. Due to the point-scanning approach, sampling speed is limited, and the dwell time on a single neuron decreases with the number of recorded neurons. Therefore, one needs to trade off the number of quasi-simultaneously imaged neurons against the shot noise level of these recordings.

To give a simplified example, one can distribute the laser power in space and time over 100 neurons at 30 Hz, or over 1000 neurons at 3 Hz. Due to the lower sampling rate, the signal-to-noise ratio (SNR) of the 1000-neuron recording will be lower.

A standardized noise level

To compare the shot noise levels across recordings, in our recent paper (Rupprecht et al., 2021) we took advantage of the fact that the slow calcium signal is typically very similar between adjacent frames. Therefore, the noise level can be estimated by

\nu  = \frac{Median_t \mid \Delta F/F_{t+1} - \Delta F/F_t \mid}{\sqrt{f_r}}

The median makes sure to exclude outliers that stem from the fast onset dynamics of calcium signals. The normalization by the square root of the frame rate f_r renders the metric comparable across datasets with different frame rates.

Why the square root? Because shot noise decreases with the number of sampling points with a square-root dependency. The only downside of this measure is that its units seem a bit arbitrary (% for dF/F, divided by the square root of seconds), but this does not make it less useful. To compute it on a raw dF/F trace (percent dF/F, no neuropil subtraction applied), simply use this one-liner in Matlab:

% dFF_trace: raw dF/F trace in percent; framerate: imaging rate in Hz
noise_level = median(abs(diff(dFF_trace)))/sqrt(framerate)

Or in Python:

import numpy as np
# dFF_trace: raw dF/F trace in percent; framerate: imaging rate in Hz
noise_level = np.median(np.abs(np.diff(dFF_trace)))/np.sqrt(framerate)

If you want to know more about this metric, check out the Methods section of our paper for more details (bioRxiv / Nature Neuroscience, subsection “Computation of noise levels”).

The metric \nu comes in handy if you want to compare the shot noise levels between calcium imaging datasets and understand whether noise levels are relatively high or low. So, what is a “high” noise level?

Comparison of noise levels and neuron numbers across datasets

I collected a couple of publicly available datasets (links and descriptions in the appendix of this blog post) and extracted both the number of simultaneously recorded neurons and the shot noise level \nu. Each data point stands for one animal, except for the MICrONS dataset, where each data point stands for a separate session in the same animal.

As a reference, I used the Allen Brain Institute Visual Coding dataset. For excitatory neurons, typically 100-200 neurons were recorded with a standardized noise level of 1 (units omitted for simplicity). If you distribute the photons across an increasing number of neurons, the shot noise level should increase with the square root of this multiple (indicated by the black line). Datasets with inhibitory neurons (de Vries et al., red) have by experimental design fewer neurons and therefore lie above the line.

A dataset that I recorded in zebrafish, with typically 800-1500 neurons per recording, lies pretty much on this line. The same holds for the MICrONS dataset, where a mesoscope was used to record from several thousand cortical neurons simultaneously, at the cost of a lower frame rate and therefore higher noise levels, and for the dataset by Sofroniew et al., which recorded ca. 3000 neurons, but all from one plane in a large FOV.

Two datasets acquired by Pachitariu and colleagues stand out a bit by pushing the number of simultaneously recorded neurons. In 2018, this came at the expense of increased noise levels (pink). In 2019 (a single mouse; grey), despite ca. 20,000 simultaneously recorded neurons, the noise level was impressively low.

In regular experiments, in order to mitigate possible laser-induced photodamage or problems due to overexpression of indicators, noise levels should not be minimized at the cost of physiological damage. For example, the mouse from the MICrONS dataset was later used for dense EM reconstruction; any sort of damage to the tissue, which might be invisible at first glance, could complicate the subsequent diffusive penetration with heavy metals or the cutting of nanometer-thick slices. As a bottom line, there are often good reasons not to go for the highest signal yield.

Spike inference for high noise levels

To give an idea about the noise level, here is an example for the MICrONS dataset. Due to the noisiness of the recordings (noise level of ca. 8-9), only large transients can be reliably detected. I used spike inference through CASCADE to de-noise the recording. It is also clear from this example that CASCADE extracts useful information, but won’t be able to recover anything close to single-spike precision for such a noise level.

Above are shown the smooth inferred spike rates (orange) and also the discrete inferred spikes (black). The discrete spikes (black) are nice to look at, but due to the high noise levels, the discretization into binary spikes is mostly overfitting to noise and should be avoided for real analyses. For analyses, I would use the inferred spike rate (orange).

Conclusion

The noise level \nu can be used to quantitatively compare shot noise levels across recordings, and I hope that other people will find this metric useful for their own work.

As a note of caution, \nu should never be the sole criterion for data quality. Other factors like neuropil contamination, spatial resolution, movement artifacts, potential downsides of over-expression, etc. also play important roles. A low shot noise level is not a guarantee for anything. A high shot noise level, on the other hand, is always undesirable.


Appendix: Details about the data shown in the scatter plot

de Vries et al. (2020; red and black) describes the Allen Visual Coding Observatory dataset. It includes recordings from more than 100 mice with different transgenic backgrounds in different layers of visual-related cortices. Red dots are datasets from mice that expressed calcium indicators only in interneurons, while black dots are datasets with cortical principal neurons of different layers. The datasets are highly standardized and have low shot noise levels (standardized level of ca. 1.0), with relatively few neurons per dataset (100-200).

Rupprecht et al. (unpublished; green) is a small dataset in transgenic Thy-1 mice in hippocampal CA1 that I recorded as a small pilot earlier this year. The number of manually selected neurons is around 400-500, at a standardized noise level of 2.0-3.0. With virally induced expression and with higher laser power (here, I used only 20 mW), lower noise levels and higher cell counts could be easily achieved in CA1.

Rupprecht et al. (2021; violet) is a dataset using the small dye indicator OGB-1 injected in the homolog of olfactory cortex in adult zebrafish. At low laser powers of ca. 30 mW, 800-1500 neurons were recorded simultaneously at a standardized noise level of 2.0-4.0.

Sofroniew et al. (2016; light green) recorded a bit more than 3000 neurons simultaneously at a relatively low imaging rate (1.96 Hz). Different from all other datasets with >1000 neurons shown in the plot, they recorded only from one single but very large field of view. All neuronal ROIs had been drawn manually, which I really appreciate.

Pachitariu et al. (2018; pink) is a dataset recorded at a relatively low imaging rate (2.5 Hz), covering ca. 10,000 neurons simultaneously. The standardized noise level seems to be rather high according to my calculations.

Pachitariu et al. (2019; black) is a similar dataset that contains ca. 20,000 neurons, but at a much lower standardized noise level (4.0-5.0). The improvement compared to the 2018 dataset was later explained by Marius Pachitariu in this tweet.

MICrONS et al. (2021; red) is a dataset from a single mouse, each dot representing a different session. 8 imaging planes were recorded simultaneously at laser powers that would not damage the tissue, in order to preserve the brain for later slicing, with the ultimate goal to image the ultrastructure using electron microscopes. The number of simultaneously imaged neurons comes close to 10,000, resulting in a relatively high standardized noise level of 7.0-10.0.
[Update, November 2021] As has become clear after a discussion with Jake Reimer on Github, the MICrONS data that I used were not properly normalized; they were not proper dF/F but included a background subtraction. The noise measure for this dataset is therefore not very meaningful, unfortunately. My guess is that the true noise level is of the same order of magnitude as shown in the plot above, but I cannot tell for sure.

The black line indicates how the noise level scales with the number of neurons. For n_1 = 150 neurons (Allen dataset, de Vries et al.), a standardized noise level of \nu_1 = 1.0 can be assumed. For higher numbers of neurons n_2, the noise level \nu_2 scales as \nu_2 = \nu_1 \cdot \sqrt{n_2/n_1}. Deviations from the line indicate where recording conditions were better or worse compared to these “typical” conditions.
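If you want a quick estimate of where a dataset should fall with respect to this line, the scaling can be written down in a few lines of Python, using the reference values given above:

import numpy as np

def expected_noise(n_neurons, n_ref=150, nu_ref=1.0):
    """Expected standardized noise level when the same photon budget is
    spread over n_neurons instead of n_ref neurons (reference: Allen dataset)."""
    return nu_ref * np.sqrt(n_neurons / n_ref)

for n in (150, 1000, 3000, 10000, 20000):
    print(f"{n:6d} neurons -> expected standardized noise level ~ {expected_noise(n):.1f}")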
