Heating up the objective for two-photon imaging

To image neurons in vivo with a large field of view, a large objective is necessary. This big piece of metal and glass is in indirect contact with the brain surface, with only water and maybe a cover slip in between. The objective touching the brain effectively results in local cooling of the brain surface through heat conduction (Roche et al., eLife, 2019; see also Kalmbach and Waters, J Neurophysiology, 2012). Is this a problem?

Maybe it is: Cooling by only a few degrees can result in a drop of capillary blood flow and other side effects (Roche et al., eLife, 2019). It has also been shown (in slice work) that minor temperature changes can affect the activity of astrocytic microdomains (Schmidt and Oheim, Biophysical J, 2020), which might in turn affect neuronal plasticity or even neuronal activity.

For a specific experiment, I wanted to briefly test how such a temperature drop affects my results. Roche et al. used a commercial objective heating device with temperature controller, and a brief email exchange with senior author Serge Charpak was quite helpful to get started. However, the tools used by Roche et al. are relatively expensive. In addition, they used a fancy thermocouple element together with a specialized amplifier from National Instruments to probe the temperature below the objective.

Since this was only a brief test experiment, I was hesitant to buy expensive equipment that would maybe never be used again. As a first attempt, I wrapped a heating pad, which is normally used to keep mice at physiological temperature during anesthesia, around the objective; however, the immersion medium below the objective could only be heated to something like 28°C, which is quite a bit below the desired 37°C.

Heating pad, wrapped around a 16x water immersion objective. Not hot enough.

Therefore, I got in touch with Martin Wieckhorst, a very skilled technician from my institute. He suggested a more effective way to heat the objective, using a very simple solution. On top of a layer of insulation tape (Kapton tape, see picture below), we wound a constantan wire, which he had available from another project, in spirals around the objective body, followed by another layer of insulation tape. Then, using a lab power supply, we just sent a current (ca. 1 A at 10 V) through the wire. The wire acts as a resistor – which is why adjacent spirals must not touch each other – and simply produces heat that is taken up by the objective body.
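As a rough sanity check, the heater's electrical parameters follow from Ohm's law. The supply values below are the ones from the post (ca. 1 A at 10 V); the wire diameter is an assumption for illustration, and the resistivity of constantan is a tabulated value:

```python
# Rough sanity check of the resistive objective heater.
import math

V = 10.0                       # supply voltage (V), from the post
I = 1.0                        # current (A), from the post
R = V / I                      # implied resistance of the wound wire (ohm)
P = V * I                      # dissipated power (W), all converted to heat

rho = 4.9e-7                   # resistivity of constantan (ohm * m), tabulated
d = 0.2e-3                     # assumed wire diameter (m)
area = math.pi * (d / 2) ** 2  # cross-sectional area (m^2)
length = R * area / rho        # wire length that yields resistance R (m)

print(f"R = {R:.0f} ohm, P = {P:.0f} W, length = {length * 100:.0f} cm")
```

With roughly 10 W going into the objective body, the limiting factor is then the thermal mass and poor conductivity of the glass and plastic parts, consistent with the long equilibration time mentioned below.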

Constantan wire wrapped in spirals around the objective body. Semi-transparent Kapton tape used for insulation makes the wires barely visible on this picture.

To measure the temperature below the objective, we needed a sensor as small as possible; a typical thermometer head would simply not fit into the space between objective and brain surface. We decided to use a resistive sensor, i.e., a thermistor or an RTD (resistance temperature detector). But how can we read out the resistance and convert it into temperature? Fortunately, Martin found an old heating block which contained a temperature controller (this one). These controllers are typically capable of reading out standardized resistive sensors of different kinds, as well as thermocouples.

Next, we bought the sensor itself, a PT100 (I think it was this one) – strictly speaking an RTD rather than a thermistor – with a very small spatial footprint. The connection from the PT100 to the temperature controller is pretty straightforward once you understand the three-wire connection scheme (explained here), which serves to eliminate the effect of the electrical resistance of the cables on the measurement. Then we dipped the head of the PT100 into non-corrosive hot glue to prevent a short circuit of the PT100 element once it dips into the immersion medium; the immersion medium is at least partially conductive and would therefore affect the measured resistance and thus the measured temperature. Once we had everything set up, we checked the functionality of the sensor in a water bath, using a standard thermometer for calibration. Another way to perform this calibration would be an ice bath, which sits stably at 0°C.
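For reference, the conversion between PT100 resistance and temperature that such a controller performs internally follows the Callendar–Van Dusen equation. A minimal sketch with the standard IEC 60751 coefficients (valid above 0°C; treat the data sheet of your sensor as authoritative):

```python
# PT100 resistance <-> temperature via the Callendar-Van Dusen equation.
import math

R0 = 100.0       # PT100 resistance at 0 degrees C (ohm)
A = 3.9083e-3    # standard IEC 60751 coefficient (1/degC)
B = -5.775e-7    # standard IEC 60751 coefficient (1/degC^2)

def resistance(T):
    """PT100 resistance (ohm) at temperature T (degC), for T >= 0 degC."""
    return R0 * (1 + A * T + B * T ** 2)

def temperature(R):
    """Invert the quadratic to recover temperature (degC) from resistance."""
    return (-A + math.sqrt(A ** 2 - 4 * B * (1 - R / R0))) / (2 * B)

print(resistance(37.0))                 # ~114.4 ohm at body temperature
print(temperature(resistance(37.0)))    # round trip back to 37 degC
```

The change from 0°C to 37°C is only about 14 ohm, which is exactly why the three-wire scheme matters: a few ohms of cable resistance would otherwise translate into several degrees of error.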

A repurposed heating block to read out a thermistor. We first looked up the data sheet of the built-in controller (bottom right) and then connected a PT100 thermosensor to its inputs. The PT100 sensor is located at the tiny end of the blue cable (inset), covered by a thin film of non-corrosive hot glue.

The surface of my objective in contact with the immersion medium is mostly glass and a bit of plastic; therefore it took roughly 30-60 min until the temperature below the objective reached a stable value of around 37°C. To prevent the heat from spreading throughout the whole microscope, we used a plastic objective holder that conducts heat poorly.

Altogether, I found this small project very instructive. First, I was surprised to learn how reliable and fast an objective heater based on simple resistive wire can be. Heating the metal part of the objective to >60°C within minutes was no problem. It took much longer, however, until the non-metal parts of the objective also reached the desired temperature. I was also glad to see that the objective (16x Nikon) was not damaged and that its resolution during imaging was not affected by the increased temperature!

The problem of designing a very small temperature sensor was more complicated, partly due to the standard three-wire measurement scheme. However, all components we used were relatively cheap, and I think these temperature measurement devices are interesting tools that could also be used for other experiments, e.g., to monitor body temperature or to build custom temperature controllers for the water bath in slice experiments.

Posted in Calcium Imaging, Imaging, Microscopy | 2 Comments

Temporal dispersion of spike rates from deconvolved calcium imaging data

On Twitter, Richie Hakim asked whether the toolbox Cascade for spike inference (preprint, Github) induces temporal dispersion of the predicted spiking activity compared to ground truth. This kind of temporal dispersion had been observed in a study from last year (Wei et al., PLoS Comp Biol, 2020; also discussed in a previous blog post), suggesting that analyses based on raw or deconvolved calcium imaging data might falsely suggest continuous sequences of neuronal activations when the true activity patterns come in discrete bouts.

To approach this question, I used one of our 27 ground truth datasets (the one recorded for the original GCaMP6f paper). From all recordings in this dataset, I detected events that exceeded a certain ground truth spike rate. Next, I assigned these extracted events to three groups and systematically shifted the events of groups 1 and 3 backward and forward by 0.5 seconds, respectively. Note that this is a short shift compared to the timescale investigated by the Wei et al. paper. This is what the ground truth looks like. It is clearly not a continuous sequence of activations:
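The manipulation can be sketched on synthetic data. All numbers below (neuron count, bout shape, sampling rate) are made up for illustration; the real analysis used detected ground-truth events rather than idealized Gaussian bouts:

```python
# Sketch of the group-shifting manipulation on synthetic "ground truth".
import numpy as np

n_neurons, fs = 30, 100                       # 30 neurons, 100 Hz sampling
t = np.arange(0, 10, 1 / fs)                  # 10 s of recording

# every neuron emits one Gaussian activity bout centred at t = 5 s
rates = np.tile(np.exp(-0.5 * ((t - 5.0) / 0.1) ** 2), (n_neurons, 1))

group = np.repeat([0, 1, 2], n_neurons // 3)  # split neurons into 3 groups
shift_bins = int(0.5 * fs)                    # 0.5 s expressed in bins
for i in range(n_neurons):
    rates[i] = np.roll(rates[i], (group[i] - 1) * shift_bins)

# the three groups now peak at 4.5 s, 5.0 s and 5.5 s:
# three discrete bouts, not a continuous sequence
peaks = [float(t[rates[group == g].mean(axis=0).argmax()]) for g in range(3)]
print(peaks)
```

Any spike inference that preserves timing well should recover three separate peaks from such data; temporal dispersion would smear them into one broad hump.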

To evaluate whether the three-bout pattern would turn into a continuous sequence after spike inference, I took the dF/F recordings associated with the above ground truth and used Cascade’s global model for excitatory neurons (a pretrained network that ships with the toolbox) to infer the spike rates. There is indeed some dispersion, owing to the difficulty of inferring spike rates from noisy data. But the three bouts are very clearly visible.

This is even more apparent when plotting the average spike rate across neurons:

Therefore, it can be concluded that there are conditions and existing datasets where discrete activity bouts can be clearly distinguished from sequential activations based on spike rates inferred with Cascade.

This analysis was performed on neurons at a standardized noise level of 2% Hz-1 (see the preprint for a proper definition of the standardized noise level). This is a typical and quite good noise level for population calcium imaging. However, if we perform the same analysis on the same dataset but at a relatively high noise level of 8% Hz-1, the resulting predictions are indeed much more dispersed, since the dF/F patterns are too noisy to allow more precise predictions. The average spike rate still shows three peaks, but they are riding on top of a more broadly distributed, seemingly persistent increase of the spike rate.
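For readers who want to estimate where their own recordings fall: as far as I recall the definition from the preprint, the standardized noise level is the median absolute difference between adjacent dF/F frames, divided by the square root of the frame rate. A sketch on a synthetic noise trace (treat the definition in the preprint as authoritative):

```python
# Standardized noise level of a dF/F trace (sketch of the definition
# used in the Cascade preprint; units: % per sqrt(Hz)).
import numpy as np

def standardized_noise(dff, frame_rate):
    """dff in units of dF/F (not %); returns the noise level nu."""
    return 100 * np.median(np.abs(np.diff(dff))) / np.sqrt(frame_rate)

rng = np.random.default_rng(1)
frame_rate = 30.0
trace = 0.02 * rng.standard_normal(30000)   # pure noise, sigma = 2% dF/F
print(standardized_noise(trace, frame_rate))
```

Using the median of frame-to-frame differences rather than the standard deviation makes the measure robust to the slow calcium transients themselves, which would otherwise inflate a variance-based estimate.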

If you want to play around with this analysis, with different noise levels or different datasets, you do not need to install anything: you can run this Colaboratory Notebook in your browser and reproduce the above results in less than 5 minutes.

Posted in Calcium Imaging, Data analysis, machine learning, Microscopy, Neuronal activity | Tagged , , | 1 Comment

Annual report of my intuition about the brain (2020)

How does the brain work and how can we understand it? I want to make it a habit to report, at the end of each year, some of the thoughts about the brain that marked me most during the past twelve months – with the hope to advance and structure the progress in the part of my understanding of the brain that is not immediately reflected in journal publications. Enjoy the read! And check out previous year-end write-ups: 2018, 2019, 2020, 2021.

Doing experiments in neuroscience means opening Pandora’s box. On a daily basis, you’re confronted with the vexing fact that the outcome of experiments is not just slightly, but vastly more complex and variable than any mental model you could come up with. It is rewarding and soothing to read published stories about scientific findings, but they often become stories only because things that did not fit in were omitted or glossed over. This is understandable to some extent, since nobody wants to read 379 side-notes on anecdotal and potentially confusing observations. But it leads to a striking gap between headlines with clear messages and the feeling of being overwhelmed by complexity when doing experiments or going through a raw dataset. It is possible to overcome this complexity with nested analysis pipelines (‘source’ extraction, unsupervised clustering, inclusion criteria, dimensionality reduction, etc.) and to restore simplicity. But the confusion often comes back when returning to the raw, unreduced data, because they contain so much more complexity.

In this year’s write-up, I want to address this complexity of the brain from the perspective of self-organized systems, and I will try to point out lines of research that can, in my opinion, contribute to an understanding of these systems in the brain.

Complex systems

Two years ago, I wrote about the limitation of the human mind in dealing with the brain’s complexity, and about the reasons behind this limitation (Entanglement of temporal and spatial scales in the brain but not in the mind). This year again, I have been thinking quite a bit about these issues. During summer, in the bookshelf of a friend, I noticed the novel Jurassic Park, which my friend, to my surprise, recommended to me. The book, more so than the movie, tells the story of how a complex system – the Jurassic Park – cannot be controlled, because of unexpected interactions among system components that were thought to be separated by design. This perspective is represented in the book by a smart-ass physicist who works on chaos theory. He not only predicts from the start that everything will go downhill but also delivers lengthy rants about the hubris of men who think they can control complexity.

This threw me back to the days when I studied physics myself, as it happens also with a focus on complex systems: non-linear dynamics, non-equilibrium thermodynamics, chaos control and biophysics. So, with some years in neuroscience behind me, I went back to the theory of complex systems. I started with a very accessible book on the topic by Melanie Mitchell: Complexity: A Guided Tour. Melanie Mitchell is herself a researcher in complexity science; she did her PhD with Douglas Hofstadter, famous for his book Gödel, Escher, Bach. Mitchell summarizes the history and ideas of her field in a refreshingly modest and self-critical way, which I can only recommend. As another bonus, the book was published in 2009, just before deep learning emerged as a dominant idea – one that also suppressed and overshadowed many other interesting lines of thought.

For example, Mitchell brings up John von Neumann’s idea of self-organization in cellular automata, Douglas Hofstadter’s work, Alan Turing’s idea of self-organization in simple reaction-diffusion systems, the cybernetics movement around Norbert Wiener, Hermann Haken’s concept of Synergetics, and Stephen Wolfram’s A New Kind of Science.

Unfortunately, while many of these ideas about complex systems were intellectually inspiring and certainly influenced many people, they often did not live up to their promise: they had no significant real-world impact outside of the philosophical realm, in contrast to, e.g., the invention of semiconductors or backpropagation. At both extremes of the spectrum, things were a bit detached from reality. At one extreme, ideas around self-organization like the autopoiesis concept were riddled with ill-defined concepts and connected to notions like “emergence”, “cognition” or “consciousness” in very vague ways. At the other, very influential researchers like Douglas Hofstadter or Stephen Wolfram had a strong mathematical background and were therefore fascinated by beauty and simplicity rather than truly high-dimensional chaos. I think it’s fascinating to prove that a cellular automaton like the Game of Life is Turing-complete (i.e., it is a universal computer), but wouldn’t a practical application of such an automaton be more convincing and useful than a theoretical proof?
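For readers who have not played with it, the Game of Life mentioned above is quickly implemented: a 2D cellular automaton in which each cell lives or dies depending on its eight neighbours. A minimal sketch:

```python
# One update step of Conway's Game of Life on a toroidal grid.
import numpy as np

def step(grid):
    """Apply the Game of Life rules to a 2D array of 0s and 1s."""
    # count the eight neighbours of every cell by summing shifted copies
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # a live cell survives with 2 or 3 neighbours; a dead cell is born with 3
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# a "blinker": three cells in a row oscillate with period 2
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
print((step(step(grid)) == grid).all())  # → True
```

Three lines of rules, and yet the system is a universal computer: a nice illustration of the gap between simple generative principles and complex behavior.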

It is therefore tempting for an experimentalist and natural scientist to simply trash the entire field as verbal or mathematical acrobatics that will not help to understand complex systems like the brain. However, in the next section I’d like to make the point why I think the concept of self-organized systems should still be considered as potentially central when it comes to understanding the brain.

Self-organized systems

Over the last years, I have become more and more convinced that complex systems cannot be understood simply by describing their behavior. Even for a comparatively simple system like the Lorenz equations, where the behavior can be described by some sort of attractor, the low-dimensional description allows one to predict the behavior of the system, but it does not tell much about the generative processes underlying the phenomenon.
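To make this concrete, the Lorenz equations are easy to integrate numerically: three coupled nonlinear ODEs whose trajectory settles onto the butterfly-shaped strange attractor. A simple Euler sketch, purely for illustration:

```python
# Euler integration of the Lorenz system with the classic parameters.
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classic chaotic regime

def lorenz_trajectory(x0=1.0, y0=1.0, z0=1.0, dt=1e-3, steps=50000):
    xyz = np.empty((steps, 3))
    x, y, z = x0, y0, z0
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = x, y, z
    return xyz

traj = lorenz_trajectory()
print(traj.min(axis=0), traj.max(axis=0))   # bounded, but never repeating
```

The trajectory stays confined to the attractor (a compact low-dimensional description), yet long-term prediction of the state fails; knowing the attractor is not the same as knowing the generative equations.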

Low-dimensional descriptions of brain activity are among the most active areas of current research in neuroscience. They range from descriptions of brain dynamics in terms of oscillatory regimes, to default mode networks of the human brain, to more recent attempts to break down the population activity of thousands of neurons into a low-dimensional manifold. These are beautiful descriptions of neuronal activity, and it is certainly useful to study the brain with these approaches. But do they provide us with real understanding? From one perspective, one could say that such a condensed description (if it exists, which is not yet clear) would be a form of deep understanding, since any way to compress a description is some sort of understanding. But I think there should be a deeper form of understanding that focuses on the underlying generative processes.

Imagine you want to understand artificial neural networks (deep networks). One way would be to investigate information flows and how representations of the input evolve across layers, becoming less similar to the input and more similar to the respective target label. This is an operative and valuable way to understand what is going on. In my opinion, however, a deeper understanding comes from simply stating the organizing principles which underlie the generation of the network: back-propagation of errors during learning, and the definition of the loss function.

Similarly, I think it would be more interesting in neuroscience to understand the generative and organizing principles which underlie the final structure of the brain, instead of studying the representations of information in neuronal activity. It is clear that part of the organization of the brain is encoded in the genome (e.g., guidance cues for axonal growth, the program of sequential migration of cell types, or the coarse specific connectivity across different cell types). However, the more flexible and possibly more interesting part is probably organized neither by an external designer (as a deep network is) nor directly by the genome. In the absence of an external designing instance, there must be self-organization at work.

Once we accept that this part of the generative principles underlying brain structure and function is self-organization, it becomes immediately clear that it might be useful to get inspired by complexity science and the study of self-organized systems. This connection between neuroscience and complexity science is probably evident to anybody working on complex systems, but I have the impression that this perspective is sometimes lost in systems neuroscience and in particular in experimental neuroscience.

Self-organizing principles of neuronal networks: properties of single neurons

I believe that the most relevant building blocks of self-organization in the brain are single neurons (and not molecules, synapses or brain areas). A year ago, I argued why I think this makes sense from an evolutionary perspective (Annual report of my intuition about the brain (2019)), and why it would be interesting to understand the objective function of a single cell. This objective function would be the cell-specific generative principle underlying the self-organization of biological neuronal networks.

Realistically speaking, this is too abstract a way of exploring biological neurons. What would be a less condensed way to describe the self-organizing principles of single neurons, analogous to back-propagation and loss functions for deep networks? I would tend to mention two main ingredients: first, the principles that determine the integration of inputs in a single neuron; second, the principles that determine how neurons connect and modify their mutual connections – which is basically nothing but the plasticity rules between neurons. I am convinced that the integrative properties of neurons and the plasticity rules governing their interactions with other cells together constitute the self-organizing principles of neuronal networks.

This is a somewhat underwhelming conclusion, because both plasticity rules and the integrative properties of neurons have been studied intensely since the 1990s. The detour of this blog post through self-organization basically reframes what a certain branch of neuroscience has been studying anyway. In my opinion, however, it also makes clear why the study of population dynamics and low-dimensional descriptions of neuronal activity aims at a different level of understanding. And it suggests that the deepest understanding of brain-like networks can probably be achieved by studying the aspects of self-organizing agents, plasticity rules and single-cell integrative properties, rather than the mere behavior of animals or neuronal networks.

Studying self-organized neuronal networks and plasticity rules

So, how can we study these principles of self-organized agents? Unfortunately, the last 30 years have made it quite clear that there is no single universal plasticity rule. Typical plasticity rules (spike-timing-dependent plasticity; fire-together-wire-together; NMDA-dependent potentiation) usually explain only a small fraction of the variance in experimental data and can often be studied only under very specific experimental conditions, in most cases only in slices. Usually, the conditions of the experiment (number of presynaptic action potentials used to induce plasticity, spike frequency, etc.) are tuned to achieve strong effects, while the absence of effects under other conditions is not systematically studied and often goes unreported. In addition, plasticity rules in vivo seem to be somewhat different. Neuromodulation and other state-dependent influences might affect plasticity rules in ways that make them almost impossible to study systematically. It is, moreover, very likely that no single plasticity rule governs the behavior of all neurons, since diversity of properties has been shown in simulations to provide robustness to neuronal circuits at many different levels. And evolution would be a fool not to make use of a property that is so easy to achieve – evolution does not care about being hard to reverse-engineer. All of this makes it nearly impossible (although still very valuable!) to dissect these principles systematically in experiments.

That is why I think that simulations – not experiments – could be the best starting point for understanding these self-organized networks.

There is indeed a large body of work going in this direction. If you google “self-organizing neuronal networks”, you will find a huge literature which goes back to the 1960s and is often based on very simplistic neuron models (still heavily inspired by condensed matter physics), but there are also interesting more recent papers that directly combine modern plasticity rules with the idea of self-organization (e.g., Lazar et al., 2009). And quite a few computational labs study plasticity rules and their effect on the organization of neuronal networks – which is itself a kind of self-organization – e.g., the labs of Henning Sprekeler, Claudia Clopath, Tim Vogels, Friedemann Zenke and Richard Naud, all of them influenced by Wulfram Gerstner; or Sophie Denève, Christian Machens and Wolfgang Maass – to name just a few of the many people working on this topic. I think this is one of the most interesting fields of theoretical neuroscience. However, I would personally be very satisfied to see this field shift towards a better inclusion of the self-organizing perspective.

To give a random example: in a study from this year, Naumann and Sprekeler show how specific non-linear properties of neurons can mitigate a well-known problem of purely Hebbian plasticity rules (Presynaptic inhibition rapidly stabilises recurrent excitation in the face of plasticity, 2020). They take an experimental finding made quite some time ago (presynaptic inhibition of axonal boutons via GABAB receptors) and build a model that explains how this could make sense in the light of plasticity rules. This is a very valuable way of doing research, also because it takes biological details of neurons into account and gives experimentalists a potential context and explanation for their findings. However, this approach takes the perspective of a designer or engineer, rather than that of somebody who aims at understanding a self-organized system. What would be an alternative approach?
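The stabilization idea can be caricatured in a toy rate model (my own sketch for illustration, not the model from the paper): a purely Hebbian weight with a weak decay grows without bound, while a divisive presynaptic-inhibition term creates a stable fixed point.

```python
# Toy model: Hebbian runaway vs. stabilization by divisive
# presynaptic inhibition of the input.
def evolve_weight(k, eta=0.1, decay=0.01, w0=1.0, dt=0.1, steps=5000):
    """Evolve a single Hebbian weight; k = presynaptic inhibition strength."""
    w = w0
    for _ in range(steps):
        x_eff = 1.0 / (1.0 + k * w)           # input, divisively inhibited
        post = w * x_eff                      # postsynaptic rate
        dw = eta * x_eff * post - decay * w   # Hebbian growth minus decay
        w += dt * dw
    return w

w_runaway = evolve_weight(k=0.0)   # no inhibition: exponential runaway
w_stable = evolve_weight(k=1.0)    # with inhibition: settles at sqrt(10) - 1
print(w_runaway, w_stable)
```

With k = 1, the fixed point follows from eta/(1+w)^2 = decay, i.e. w* = sqrt(eta/decay) - 1 ≈ 2.16; without inhibition the same rule diverges exponentially.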

From engineered organization to self-organization

I think it would be useful to take the perspective of a neuron, and in addition also an evolutionary perspective. Let’s say a neuron with certain properties (rules on when and how to connect) joins the large pool of a recurrent network. The question which must be solved by the neuron is: how do I learn to behave meaningfully?

I’d like to give an analogy for how I think this neuron should ideally behave: a human who interacts with others in a social network, be it in real life or in the virtual world, must adjust their actions according to how they are received. Shouting loudly all the time will isolate them, because it blocks the receiving channels of others; being silent all the time will equally make others drop the connections. To adjust the level of output, and to find content that will be well received, it is crucial to listen to feedback.

This, I think, could be the central question from the self-organized perspective of neuronal circuits: how does the neuron get feedback on its own actions? By feedback, I do not mean global error signals about the behavior of the organism conveyed via neuromodulatory channels, but feedback on the neuron’s own action potentials and other actions. Where does this feedback come from?

If we reduce the complex network to a single-cell organism that lives by itself, the answer becomes immediately clear: the feedback comes from the external world. A spike of this single-cell organism has a direct impact on the world, and the world in return acts back upon the organism. It is not clear how this scales up to larger networks, but I think that this inclusion of the external world – as opposed to a machine-learning-style input-output task – could be the most important ingredient for making the step from engineered network organization to self-organized networks.

(There are many loose connections from here to reinforcement learning using learning agents and also to predictive processing, but let’s not go into that here.)

Conclusion and summary

I’m happy to be convinced of the opposite, but today I think that a very promising way to achieve a deep understanding of the brain could consist of the following ingredients, as motivated above:

1) To regard the brain as an at least partially self-organized network,

2) To use simulations together with evolutionary algorithms to explore the generative / self-organizing principles,

3) To consider properties and actions on the level of single neurons as the main parameters that can be modified during this evolutionary process and

4) To include an external world to transition from an externally organized to a self-organized system.

Posted in Data analysis, machine learning, Network analysis, Neuronal activity, Reviews | Tagged , , , | 4 Comments

Hodgkin-Huxley model in current clamp and voltage clamp

As a short modeling session for an electrophysiology course at the University of Zurich, I made a tutorial for students to play around with the Hodgkin-Huxley equations in a Colab Notebook / Python, which does not require them to install Python. You’ll find the code online on a small Github repository: https://github.com/PTRRupprecht/Hodgkin-Huxley-CC-VC

Using the code, the students can not only play around with the Hodgkin-Huxley equations but also replicate in silico the experiments they have done when patching cells in slices (including voltage clamp experiments).

It is really rewarding to be able to reproduce current clamp experiments (recording the membrane potential) and voltage clamp experiments (recording the currents while clamping the voltage to a constant value), because this allows one to computationally replicate the experiments and plots originally generated by Hodgkin and Huxley.

Below, you see part of the code output, the result of a simulation of a Hodgkin-Huxley model. The top configuration was run in current clamp, with a current pulse injected between 40 and 80 ms, which triggered a single action potential. The lower configuration was run in voltage clamp, with the holding potential stepping from -70 mV to -30 mV between 40 and 80 ms. You can clearly see the active conductances (inactivating sodium conductance in blue and non-inactivating potassium conductance in orange):
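The current-clamp part of such a simulation boils down to Euler-integrating the four Hodgkin-Huxley equations. A stand-alone sketch with the standard squid-axon parameters (not the exact course code; see the repository for the full version, including voltage clamp):

```python
# Minimal Hodgkin-Huxley current-clamp simulation (Euler integration).
import numpy as np

# standard squid-axon parameters (units: mV, ms, uA/cm^2, mS/cm^2)
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# voltage-dependent rate constants of the gating variables m, h, n
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt = 0.01                                    # time step (ms)
t = np.arange(0, 120, dt)                    # 120 ms of simulated time
I_inj = np.where((t >= 40) & (t < 80), 10.0, 0.0)  # current pulse

V = -65.0                                    # start at rest
m = a_m(V) / (a_m(V) + b_m(V))               # gates at their steady state
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))
V_trace = np.empty(t.size)

for i, I in enumerate(I_inj):
    I_ion = (g_Na * m**3 * h * (V - E_Na)    # inactivating sodium current
             + g_K * n**4 * (V - E_K)        # non-inactivating potassium
             + g_L * (V - E_L))              # leak
    V += dt * (I - I_ion) / C                # Euler step of the voltage
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    V_trace[i] = V

print(V_trace.max())   # the pulse drives V above 0 mV: action potentials
```

Plotting `V_trace` against `t` reproduces the familiar spike shape; clamping V instead of integrating it, and recording I_ion, gives the voltage-clamp traces.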

Posted in electrophysiology, Neuronal activity | Tagged , , | Leave a comment

Interview with Bruno Pichler

Bruno Pichler studied medicine, obtained a PhD in neuroscience, worked in the labs of Arthur Konnerth, Tom Mrsic-Flogel and Troy Margrie, and was R&D manager at Scientifica, before founding his own company, INSS, “to provide the international neuroscience community with bespoke hard- and software solutions and other consulting services”. He is not only a highly experienced builder and designer of two-photon microscopes but also a very friendly and open human being. So I was very happy to have the opportunity to ask him a couple of questions.

The interview took place on September 8th 2020 in a virtual meeting and lasted around 1.5 hours. Afterwards, I transcribed, shortened and edited the recorded interview. For a brief orientation, here’s an ordered but non-exhaustive list of the topics discussed:

Why neuroscience?
How to get into optics and programming
Role models
Projects in academia
Why didn’t you become a PI?
At Scientifica
Founding one’s own company
Performance checks for a 2P scope
How to clean optics
The detection path
Teaching people how to solve problems
Fixed-wavelength lasers
Multiplexed acquisition
Advice to young scientists
Bluegrass music

If you find any questionable statements, you should consider blaming my editing first.

And since the entire interview is quite long, make yourself a cup of tea and take your time to read it.


Peter Rupprecht: You studied medicine before you started your PhD. What was your motivation to do a PhD in neuroscience, and to continue in neuroscience afterwards?

Bruno Pichler: This was just something that happened: I really loved the first years of medical school, basic science, physics, biology, anatomy, physiology, and all that. But halfway through medical school when the clinical work started, I realized that it wasn’t for me. So I looked for new inspiration, and I stumbled upon the website of Arthur Konnerth’s lab. I called on a whim and half an hour later I was in his office, and he offered me a job. He said, “why don’t you take a year off from medical school and start working towards a PhD here?” So I worked full-time in the lab for a year, and then went back to finish medical school. But at that point I had no interest in a career in clinical medicine, I just wanted to complete the medical degree and then work in the lab and continue my PhD. – So, not much thought behind it, it’s just how things transpired.

(c) Bruno Pichler

PR: The medical studies did not really prepare you well for the more technical aspects of what you have been doing afterwards. How did you learn all of this, for example optics, or programming?

BP: Again, it was just something that happened: there were microscopes that needed troubleshooting, there were things I didn’t understand, and every time I didn’t understand something, I tried to find more information about it. Some of the information sticks with you – and that’s how you learn and how you get into optics and software.
For example, there was some custom-written image analysis software in the lab and I didn’t really understand what it did to the data, so I sat down with the guy who wrote it and asked about all the calculations it made and then I cautiously started to make some changes to it – and it just naturally emerged from there. I never consciously sat down and said, “oh, I want to learn programming!” I had a problem in front of me that I needed to solve, and so I solved it. And whatever I learned while solving it is now part of my knowledge.

“People who inspired me were often those in support roles, like the lab technicians who taught me how to pipette, or the engineers in the electronic and mechanical workshops.”

PR: So would you describe yourself as a problem-solver?

BP: I think that is probably an accurate description. My main driving force is that whenever I see a technical problem, I want to find an elegant solution for it. I can’t help it.

PR: Did you have a role model as a scientist, or somebody who inspired you to continue with neuroscience?

BP: I definitely have a few people who inspired me, but I wouldn’t say I had a ‘role model’. There’s obviously the intellectual giants like Richard Feynman or Horace Barlow. Then there are the scientists that I worked with, Arthur Konnerth, Tom Mrsic-Flogel, Troy Margrie, and of course all the colleagues in those labs. But, on a very practical level, people who inspired me were often those in support roles, like the lab technicians who taught me how to pipette, or the engineers in the electronic and mechanical workshops. For example, Werner Zeitz, the electronics guy in Arthur Konnerth’s lab, who is known for his famous Zeitz puller (https://www.zeitz-puller.com/). We were building a two-photon resonant scanner with Werner back in 2004/2005, and he built a 19” rack-mountable device box – no labels on it, just unlabeled pots and BNC connectors – which transformed the scanning data into a TV image and sent it to a frame grabber card. Nowadays, we do this in software but it was all done in hardware at the time. Same with the mechanical guy, Dietmar Beyer: He was such a skilled manual machinist, and he would just make whatever we needed without any CNC machining. Another guy that really inspired me back as a PhD student was Yury Kovalchuk. He was a senior postdoc at the time. He knew everything about two-photon microscopy, and he was building an AOD scanner back in 2004/2005. It was the way he understood these systems and explained everything to me whenever I had any questions – those kind of people inspired me.

PR: From your entire academic career, what was your most rewarding project, big or small?

BP: That’s so difficult to say, because everything is kind of rewarding. What I can certainly say is that I don’t believe in putting off reward for a long time, and the idea that ‘the more you suffer, the bigger the reward’. I like it when you have smaller rewards, but more frequently.

PR: I can definitely relate to this… but at least scientific publications usually do not come so frequently. If you had to choose, which scientific publication that you took part in would you like to highlight, and what was your contribution?

BP: There was a paper in 2012 by Kamilla Angelo in Troy Margrie’s lab (paper link). I came very late to the party, all the experiments had already been done, the first version of the manuscript had been completed, and Troy just asked me to read it and give some comments. I noticed something in the analysis where the manuscript didn’t actually show unambiguously one aspect of the claim in the paper. We tried to come up with some way to design new experiments to prove that unambiguously, but at some point it occurred to me that you could just do it with the existing data, just with a different type of analysis. And once the idea had come up of doing this pair-wise scrambling of all the data points and then calculating pair-wise differences, it was very quick and easy to write some code to analyze it. And it supported exactly what we thought it would support, but now unambiguously. That felt really rewarding, to be able to nail something that would have otherwise required more experiments with a bit of clever analysis, that was really cool.

PR: Sounds like it! Your PI especially was probably really happy about this, because it saved a lot of trouble.

BP: I guess so. The paper would have been highly publishable without my input, but it was just ever so slightly better with my input; and that’s good enough for me.

“I was always more of a Malcolm Young than an Angus Young.”

PR: Why did you not become a PI yourself?

Posted in Calcium Imaging, Data analysis, Imaging, Microscopy | Tagged , , | 1 Comment

Simultaneous calcium imaging and extracellular recording from the same neuron

Calcium imaging is a powerful method to record from many neurons simultaneously. But what do the recorded signals really mean?

This question can only be properly addressed by experiments which record both calcium signals and action potentials from the same neuron (ground truth recordings). These recordings are technically quite challenging. So we assembled several existing ground truth datasets, and in addition recorded ground truth datasets ourselves, totaling >200 neuronal recordings.

This blog post contains raw movies together with recorded action potentials (black; also turn on your speakers for the spikes!) and the recorded ΔF/F of the calcium recording (blue). These ground truth data are a very direct way for everybody working with calcium imaging to get an intuition about what is really going on. (Scroll down if you want to see recordings in zebrafish!)

Recording from a L2/3 neuron in visual cortex with GCaMP6f, tg(Emx1), from Huang et al., bioRxiv, 2019; a very beautiful recording. Replayed at 2x speed.

Recording from a L2/3 neuron in visual cortex with GCaMP6f, tg(Emx1), from Huang et al., bioRxiv, 2019. Stronger contamination from surrounding neuropil.

Recording from a L2/3 neuron in visual cortex with GCaMP6f, tg(Emx1), from Huang et al., bioRxiv, 2019. Note that single action potentials don’t seem to have any impact at all. – The negative transients in the calcium trace stem from center-surround neuropil decontamination (activity of the surround is subtracted).

Recording from a L2/3 neuron in visual cortex with GCaMP6s, tg(Emx1), from Huang et al., bioRxiv, 2019.

Recording from a L2/3 neuron in visual cortex with GCaMP6s, tg(Emx1), from Huang et al., bioRxiv, 2019.

Recording from a L2/3 neuron in visual cortex with GCaMP6f, virally induced, from Chen et al., Nature, 2013. From the left, you can see the shadow of the patch pipette used for recording of extracellular signals.

Something completely different: recording from a pyramidal neuron in CA3 with R-CaMP1.07, virally induced, recorded by Stefano Carta, from Rupprecht et al., bioRxiv, 2020. What appear as single events are actually bursts of 5-15 action potentials with inter-spike intervals of <6 ms.

A recording that I performed myself in adult zebrafish, in a subpart of the homolog of olfactory cortex (aDp) with GCaMP6f, tg(neuroD), in Rupprecht et al., bioRxiv, 2020. Around second 20, you can see that even a single action potential is visible in the calcium signal. However, this was not always the case in other neurons that I recorded from the same brain region.

Again a recording that I did in adult zebrafish, in the dorsal part of the dorsal telencephalon with GCaMP6f, tg(neuroD), in Rupprecht et al., bioRxiv, 2020.

What can you do if you want to detect single isolated action potentials with calcium imaging? GCaMP, due to its sigmoid non-linearity, is often a bad choice and will be strongly biased towards bursts. Synthetic indicators, however, are very linear in the low-calcium regime. – This is a recording that I did myself in adult zebrafish, in a subpart of the homolog of olfactory cortex (pDp) with the injected synthetic indicator OGB-1, in Rupprecht et al., bioRxiv, 2020. Although the temporal resolution of the calcium recording is rather low, the indicator clearly responds to single action potentials. As another asset, the indicator does not only fill the cytoplasm of the neuron in a ring-like shape but the entire cell body, which makes neuropil contamination much less of an issue compared to GCaMPs.
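To make the difference between a sigmoid and a linear indicator concrete, here is a minimal sketch in Python. All parameters (Hill coefficient, Kd, per-spike calcium increment, gain) are made-up illustrative numbers, not fitted GCaMP or OGB-1 values:

```python
import numpy as np

def hill_response(n_spikes, dca_per_spike=1.0, kd=5.0, hill_coef=2.5):
    """Toy Hill-type (sigmoid) response, a stand-in for GCaMP (made-up parameters)."""
    ca = n_spikes * dca_per_spike
    return ca**hill_coef / (kd**hill_coef + ca**hill_coef)

def linear_response(n_spikes, gain=0.02):
    """Toy linear response, a stand-in for a synthetic indicator at low calcium."""
    return gain * n_spikes

# Compare the relative response to a 10-spike burst vs. a single spike:
ratio_sigmoid = hill_response(10) / hill_response(1)     # ~48 with these parameters
ratio_linear = linear_response(10) / linear_response(1)  # exactly 10
print(ratio_sigmoid, ratio_linear)
```

Whatever the exact parameters, the sigmoid indicator under-reports single spikes relative to bursts, while the linear indicator scales proportionally with spike count – which is the bias discussed above.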

Another recording that I performed in adult zebrafish, in a subpart of the homolog of olfactory cortex (pDp) with the injected synthetic indicator Cal-520, in Rupprecht et al., bioRxiv, 2020. This indicator is much more sensitive than OGB-1, but it also diffuses less well after bolus injection. – These two minutes of recording contain only 4 spikes (this brain region really is into low firing rates in general), but you can clearly see all of them. If this were a GCaMP recording, you would probably see only a flat line throughout the entire recording.

For more information, including all 20 datasets with >200 neurons (rather than these excerpts from 11 neurons), check out the following resources:

Posted in Calcium Imaging, Data analysis, electrophysiology, machine learning, Microscopy, Neuronal activity, zebrafish | Tagged , , , , | Leave a comment

Discrepancies between calcium imaging and extracellular ephys recordings

To record the activity from a population of neurons, calcium imaging and extracellular recordings with small electrodes are the two most widely used methods that are still able to disentangle the contributions from single units. Here, I would like to briefly mention two papers that try to connect these two approaches by comparing them more or less directly.

  1. Wei et al., A comparison of neuronal population dynamics measured with calcium imaging and electrophysiology, bioRxiv, 2019
    [Update, 2020-09-15: The paper came out in PLoS Comp Biology just one day after this blog post!]
  2. Siegle, Ledochowitsch et al., Reconciling functional differences in populations of neurons recorded with two-photon imaging and electrophysiology, bioRxiv, 2020

Both papers compare calcium imaging datasets and extracellular ephys datasets, try to connect the results and point out the difficulties in reconciling the approaches.

Wei et al. use datasets recorded in mouse anterolateral motor cortex (ALM). They first focus on approaches to reconstruct spike rates from calcium imaging data (deconvolution) and find several limitations of this approach. On the other hand, they find that a forward model that transforms spiking activity into calcium fluorescence data can reconcile most of these differences.
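The idea of such a forward model can be sketched in a few lines of Python. This is a generic spikes-to-fluorescence model (a double-exponential kernel with made-up time constants and amplitude), not the specific model used by Wei et al.:

```python
import numpy as np

def forward_model(spike_train, dt=0.01, tau_rise=0.05, tau_decay=0.5,
                  amplitude=0.1, noise_sd=0.0, rng=None):
    """Convolve a spike-count train (spikes per bin) with a double-exponential
    calcium kernel to obtain a synthetic dF/F trace."""
    t = np.arange(0, 5 * tau_decay, dt)
    kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    kernel = amplitude * kernel / kernel.max()  # normalize to per-spike amplitude
    dff = np.convolve(spike_train, kernel)[:len(spike_train)]
    if noise_sd > 0:
        rng = rng or np.random.default_rng(0)
        dff = dff + rng.normal(0, noise_sd, size=dff.shape)
    return dff

# A single spike at t = 1 s and a 3-spike burst at t = 2 s (10 ms bins):
spikes = np.zeros(300)
spikes[100] = 1
spikes[200] = 3
dff = forward_model(spikes)
```

The slow decay tail of the kernel is exactly what smears out sharply timed population events in the imaging data, as discussed for Figure 7 of the paper.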

The authors also provide a user-friendly website which can be used to explore the transformations between ephys and imaging data (also including datasets with simultaneous ephys-imaging): http://im-phys.org. (Understanding the figures of the paper is however quite useful before exploring the website.)

From Wei et al., bioRxiv (2019), used under CC BY 4.0 license (excerpt from Fig. 7).

A large part of the paper focuses on high-level analyses (principal component analysis, decoding of behavior and decision). I would take it as an educational tale of caution that highlights wrong conclusions that could be drawn from standard analyses. For example, the slow and variable decay times of calcium imaging data can lead to a dispersion of peak activity that was absent in the ephys data. This dispersion can smear out sharply timed activations of a neuronal population into something more similar to a sequence (see Figure 7, of which I have pasted an excerpt above).

Siegle, Ledochowitsch et al. from the Allen Institute, rather than investigating the effects on higher-level population analyses, focus their attention on the effects seen in single neurons. When comparing a calcium imaging and an ephys dataset recorded in the same brain region (visual cortex V1) in mice that do the same standardized tasks, what differences can be seen in the firing properties of single neurons?

Due to the high standardization requirements at the Allen Institute, their datasets are probably uniquely qualified to be the basis for such a comparison. Interestingly, they find a couple of clear differences. For example, extracellular ephys data suggest typical firing rates of around 3 Hz (see Figure 2A), which is almost an order of magnitude higher than what has been recorded and estimated from calcium imaging data (see also Figure 7 in our preprint, which estimates spike rates of the same dataset).

The authors go to great lengths to use forward transformations (similar to Wei et al.) in order to reconcile differences seen for various response metrics (responsiveness, tuning preference, selectivity). However, their conclusion seemed to me quite a bit less optimistic than that of the Wei et al. paper. The authors go into more detail when discussing the potential reasons for the discrepancies, and focus on the recording methods themselves rather than on methods to transform between them. In particular, their analysis of inter-spike-interval (ISI) violations in ephys recordings (which indicate that spikes from different neurons contaminate the recording of the neuron of interest) was, in my opinion, especially interesting and convincing. I also really recommend the last paragraph of their discussion to anybody, from which I only want to cite their note of caution about extracellular ephys recordings:

From this study, we have learned that extracellular electrophysiology overestimates the fraction of neurons that elevate their activity in response to visual stimuli, in a manner that is consistent with the effects of selection bias and contamination. – Siegle, Ledochowitsch et al.
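The intuition behind an ISI-violation metric can be illustrated with a toy sketch. The actual metric used by Siegle, Ledochowitsch et al. is more elaborate, and the 1.5 ms refractory period here is just an assumed value:

```python
import numpy as np

def isi_violation_fraction(spike_times_s, refractory_s=0.0015):
    """Fraction of inter-spike intervals shorter than an assumed refractory
    period; a high fraction suggests that spikes from other neurons
    contaminate the sorted unit."""
    isis = np.diff(np.sort(np.asarray(spike_times_s)))
    if isis.size == 0:
        return 0.0
    return float(np.mean(isis < refractory_s))

# A clean 5 Hz train vs. the same train merged with a second, nearby unit:
clean = np.arange(0, 10, 0.2)
merged = np.sort(np.concatenate([clean, clean + 0.001]))
print(isi_violation_fraction(clean), isi_violation_fraction(merged))
```

A single neuron cannot fire twice within its refractory period, so sub-millisecond ISIs in a "single unit" are a direct fingerprint of contamination.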

One of the reasons why I am writing about these two studies is that I have been working at the interface of calcium imaging and ephys myself, addressing the question of how much information about spike rates we can get out of calcium imaging data. Wei et al. and Siegle, Ledochowitsch et al. take a slightly broader perspective. And, in some way, they show how hard it is to reconcile two (methodological) perspectives on the same phenomenon. (I noticed this in my PhD lab as well, when it came to reconciling results from EM reconstructions of neuronal anatomy and calcium measurements of the same neurons.) Since almost any method in systems neuroscience is technically challenging, a single lab often has only a single perspective on a phenomenon, and I think it’s important to always be aware that the conclusions drawn from this perspective might be strongly biased.

In general, calcium imaging and extracellular ephys are extremely valuable tools to observe the living brain, and we had better do everything we can to understand the properties and limitations of these tools. Such studies sometimes might feel a bit like negative results and therefore not very attractive to those who want to advance neuroscience, and I therefore understand why not many are willing to undertake these projects. So I am glad to be able to highlight these two publications here.

Posted in Calcium Imaging, Data analysis, electrophysiology, Imaging, Network analysis, Neuronal activity, Reviews | Tagged , | 1 Comment

Alignment tools

This blog post covers some tools and techniques that I typically use to align two-photon microscopes. If you’re an expert, you will probably find nothing new, but if you haven’t been doing this for years, it might offer you some inspiration.

Aligning a microscope is the process of optimizing its parts to produce better images than before. A useful overview of basic alignment procedures for two-photon microscopes has been put together in this #LabHacks blog post by Scientifica. It also includes safety advice (which I will not repeat here; keep in mind that lasers, especially pulsed IR lasers, are really dangerous, and all safety instructions of your institute and lab must always be obeyed!)

Microscope alignment still seems to be a secret art that most microscope users are afraid of. It is not difficult to learn in principle, but it is a practical skill and requires both patience and a mentor who shows you where to buy stuff and how to handle the mechanical and optical elements. Many people working with microscopes never learn it because they are afraid to touch anything. This blog post is intended to lower the fear of optical alignment – by showing that the tools used for alignment can be very simple.

1. Adjust the beam to a given height above the optical breadboard

After the beam comes out of the laser, you often want to keep it in a single plane parallel to the table, in order to keep things simple. In other words, the distance of the beam from the optical table should be the same everywhere. One way to achieve this is to use mounted pinholes (e.g., these ones from Thorlabs). However, it is sometimes difficult to properly see where the beam hits the pinhole, which results in imprecise alignment and unnecessary uncertainty. When I worked with Robert Prevedel in 2013/2014, he showed me a simple trick which makes it very easy to adjust all beam positions to the same height. He clamped a small hex key horizontally onto an inverted post, using a screw and two washers, as shown below. The flat surfaces of the hex key lead to a very nice horizontal alignment, and the precise height indicated by the hex key can be used (1) to adjust the beam itself or (2) to consistently adjust the height of a set of pinholes. It is useful to have and very simple to make.

2. Printed-out resolution targets

Is the beam centered in a given aperture, for example the cage system of the microscope’s tube lens? Normally, I would use a threaded or cage-mounted iris (e.g., this one), but in other cases spatial constraints do not allow this, or the beam can only be viewed from an angle, and it can be difficult to judge whether the beam scattering from the iris is centered or not.

However, if the laser power is low enough not to burn paper, I simply draw resolution targets in PowerPoint or Inkscape and print them out on a sheet of paper. When I’m in a hurry, I simply draw one with a pen.

For example, if I want to check whether a beam is collimated (that is, whether its diameter stays roughly constant over distance), I use these alignment targets as a reference and as a guide for the eye.

Or I use some scotch tape to fix one in a given aperture, allowing me to check whether the beam is centered or not. Here illustrated for the aperture which is located approximately at the back focal plane of the objective. It does not require much work but is quite helpful.


3. Using mirrors to look around corners

One more practical problem in the above case is the viewing angle. Ideally, I would like to look at the alignment target from the top, but that would block the beam. To solve this (and many similar problems), you can simply use a mirror shard. The photo below (left) shows a hand-held piece of mirror which allows you to look at the alignment target in a relaxed way. It is difficult to see this from a single picture, but mirrors like this one (either hand-held or fixed in the setup) often make life much easier.

For more convenience, dental mirrors like this one (photo below, right) are designed for looking around corners and are of great use for viewing pinholes from angles that are difficult to reach without mirrors.

However, be very careful with hand-held and any other moving mirrors! By chance, you might reflect the laser beam into your own eyes! Always be careful and think three times whether there is any chance that any reflection might hit your eyes.

4. Retro-reflecting mirrors and lenses

Usually, the laser beam should hit a lens at its center and at a 90° angle. To ensure that the beam is centered, one can use a) a cage system with a pinhole, b) a threaded pinhole that can be screwed onto the lens directly, or c) a printed-out removable resolution target (see above). To make sure that, in addition, the laser beam hits the lens at a 90° angle, it is helpful to use back-reflections of the beam. Since a small fraction of the beam is back-reflected by the lens surface, this reflection should ideally coincide with the incoming beam. This can be checked with an IR detection card or, for a visible beam, a piece of paper held close to (but not blocking) the incoming beam path.

Sometimes it is necessary to align a caged element or something similar without lenses that provide back-reflecting surfaces. In this case, you can simply use a mirror that reflects the entire beam back. This mirror can be screwed into a thread of a cage system. More often, it suffices to hold a small, flat mirror shard against the flat back of the cage system. This does not provide the highest-precision alignment, but it is usually good enough for most purposes.

Be aware that back-reflections that go directly back into the laser can make the laser unstable. If a pulsed laser stops pulsing, first check whether any back-reflections could be the reason for it.

5. Retro-reflecting gratings

The main problem with back-reflections from lenses and mirrors is that the reflected beam is often small and coincides with the incoming beam, making it difficult to identify properly.

In my PhD lab, I inherited a really cool tool that was used for the alignment of a Sutter MOM scope and which I was, unfortunately, unable to find afterwards on the internet. It is basically a mirror, but with a sort of grating scratched into the mirror surface. Due to interference, the back-reflection was not simply a single beam, but a symmetric diffraction pattern that extended over several centimeters and could be conveniently used for alignment – much more useful and easier to use than the back-reflection of a lens or mirror. I guess that any flat reflective grating (maybe even a compact disc? I haven’t tried that) could be used for the same purpose.
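For the curious, where such diffraction orders end up is easy to estimate from the grating equation, here assuming normal incidence (the ~1.6 µm track pitch of a CD is a textbook value):

```python
import math

def diffraction_angles_deg(wavelength_m, groove_spacing_m, max_order=3):
    """Diffraction orders of a reflective grating at normal incidence:
    sin(theta_m) = m * wavelength / d. Orders with |sin| > 1 do not exist."""
    angles = {}
    for m in range(-max_order, max_order + 1):
        s = m * wavelength_m / groove_spacing_m
        if abs(s) <= 1:
            angles[m] = math.degrees(math.asin(s))
    return angles

# A CD-like grating (1.6 um pitch) illuminated at 800 nm:
print(diffraction_angles_deg(800e-9, 1.6e-6))
```

With these numbers the first orders come off at ±30°, which explains why the pattern spreads over centimeters after a short propagation distance and is so much easier to judge than a single back-reflected spot.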

6. Wedge plates

At several points of the beam path (before entering the back of the objective; after exiting a beam expander), the laser beam should ideally be collimated. The standard method that I used for years was to print out a paper resolution target (described above) and check whether the beam changes its diameter when propagating freely over several meters. To this end, it is often necessary to deflect the beam with a mirror that is temporarily inserted into the beam path.
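To get a feeling for why several meters of propagation are needed, one can estimate the beam expansion with Gaussian beam optics. This is a back-of-envelope sketch with assumed beam parameters (1 mm waist radius, 920 nm wavelength), not a measurement of any particular setup:

```python
import math

def beam_radius(z_m, w0_m, wavelength_m):
    """Gaussian-beam radius at distance z from the waist:
    w(z) = w0 * sqrt(1 + (z / z_R)^2), with Rayleigh range z_R = pi*w0^2/lambda."""
    z_r = math.pi * w0_m**2 / wavelength_m
    return w0_m * math.sqrt(1 + (z_m / z_r)**2)

w0, lam = 1e-3, 920e-9  # assumed: 1 mm waist radius, 920 nm light
z_r = math.pi * w0**2 / lam
print(f"Rayleigh range: {z_r:.1f} m")
for z in (0, 1, 3, 10):
    print(f"z = {z:2d} m -> radius = {beam_radius(z, w0, lam) * 1e3:.2f} mm")
```

With a Rayleigh range of a few meters, a slightly divergent millimeter-sized beam changes its diameter only noticeably after several meters – which is why the printed-target method needs that much free propagation, and why a shearing interferometer (next paragraph) is so much more compact.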

Fabian Voigt from my current lab in Zürich also showed me the more professional way to check for collimation, using a wedge plate. Wedge plates can be used for shearing interferometers (Thorlabs product, EO product). They generate an interference pattern which can be used to very precisely check the collimation of the beam.

Fabian also kindly pointed me to a paper (Tsai et al., 2015) which mentions that the pattern seen from a shearing interferometer can also be used to analyze less obvious properties of the beam, like coma and astigmatism aberrations (Okuda et al., 2000). It would be cool to have a look-up table of typical interferograms and the corresponding wavefront shapes and aberrations!

7. Alignment lasers

Another tool that Fabian showed to me was an alignment laser.

This alignment laser is basically a continuous-wave visible-light laser that goes through an optical fiber and then enters a beam coupler. I used a shearing interferometer (described above) to make sure that the outgoing beam was collimated, and then used the collimated beam for backward alignment.

Backward alignment means inserting the collimated beam of the alignment laser at the location of the microscope’s back focal plane and then aligning the microscope’s components in backward order (tube lens -> scan lens -> scanners -> etc.), instead of forward alignment, which starts at the pulsed laser and proceeds forward until the beam ends up at the objective. This is helpful, for example, when two or more separate incoming beams are combined. A second advantage of the alignment laser is that its beam is, unlike the near-IR pulsed laser, visible to the human eye.

8. Continuous wave (cw) mode for a pulsed Ti:Sa laser

Standard two-photon microscopes are based on pulsed lasers with a center wavelength adjusted between 800 and 1000 nm. The light is therefore invisible to the human eye (except for some faint spectral components when the center wavelength is tuned to 800 nm) and comes with high average power (>1 W, often much more) and in ultra-short high-energy pulses, making it a very dangerous thing to deal with. IR viewer cards and IR viewers, or less expensive solutions based on simple cameras/webcams, can make the beam visible to the human eye, but I always found them very exhausting to work with. And being exhausted is not a good pre-condition for optical alignment, which requires the ability to stay focused and a sharp mind. Therefore, it would be great if it were possible to make the laser less dangerous: by preventing it from pulsing, by lowering the overall power, and by making it visible…

Fortunately, all of this is possible for tunable Ti:Sa lasers (although not for fixed 1040 nm lasers). For the Spectra Physics MaiTai lasers, the slightly old-fashioned user interface (screenshot below) allows you to manually lower the pump power (also called the “green power”). Change the wavelength to something visible (between 700 and 750 nm), lower the green power until the pulsing (indicated by the green light in the main control window) stops, and continue lowering the green power until the output IR power reaches something like 30-40 mW. This is still much more than the average laser pointer and still dangerous, but less so than a pulsed invisible beam.

It is, however, important to keep in mind that the control panel is not always 100% reliable. It displays pulsing and non-pulsing as if these were binary states, but the transition between the two is sometimes continuous, and sometimes the laser becomes a bit unstable and oscillates between states – so better watch the laser for a while before you start working with the beam.

I once talked to a laser engineer from Spectra Physics who told me that it is not ideal to leave the laser in this non-pulsing state for too long (>>1 hour). Probably this has something to do with inefficient energy conversion and the heat generated in this process. But I have to admit that I follow this recommendation simply because I don’t know any better. Lasers are complex things, and I mostly treat them pragmatically, like a black box.

9. Improvised test samples

One of the most common mistakes that I keep seeing is to set up a microscope and then to test it directly with a real sample, like the living brain of a transgenic mouse. That’s really not the best way to proceed. Instead, always test your microscope first with a simple, bright and dead sample. (And, ideally, in addition with sub-diffraction beads.)

For example, you can use colored plastic items (for example plastic slides), which are perfect to test how homogeneous the FOV is. Or pollen grains from any flowers outdoors during the pollen season. Or dead, small insects that you caught on your desk. Their surface will often be super bright under a two-photon microscope, especially if you remove the emission filters. Or simply a single strand of your hair (especially if it still contains some pigment).

In general, most solid things are somehow autofluorescent, and it can be really fun to look at random things below the two-photon microscope. Pay attention not to kill the PMT detector (lower the initial PMT gain), since autofluorescence of natural things can be very bright.

10. More resources on (two-photon or other) optics alignment

Posted in Imaging, Microscopy | Tagged , , , , | 4 Comments

Matlab magic spells

Most neuroscientists who analyze their data themselves use Matlab or Python or both – the use of R is much less common than in other fields of biology. I’ve been working with Matlab on a daily basis for >10 years, and I started using Python regularly, although less frequently, about 5 years ago. I’d like to encourage everybody to learn Python, because the language is much more flexible, more beautiful and more powerful. But I still often use Matlab to get a first impression of data, because it is simply perfect for browsing intermediately sized (MB to GB) datasets and for doing simple things in general, also due to the plotting interfaces, which I find easier to use for data exploration than matplotlib/seaborn in Python.

Here, I would like to share a collection of useful Matlab commands which are not complicated or fancy but still nice to know. Don’t expect too much, and don’t read this if you have spent a couple of years with Matlab already. To be entirely honest, I’m writing this also for my own reference, because I keep forgetting the exact wording of some of the commands, and it’s nice to have everything in a single place. Without further ado, here is my list of favorite Matlab spells:

1. Leading zeros with sprintf()
2. Automatization with dir()
3. Exclude file patterns based on regular expressions
4. Callback functions to speed up image inspection
5. Regionprops for image segmentation
6. The curve fitting toolbox
7. Manually modifying colormap
8. Turning the background of a figure white
9. Make ticks point outwards instead of inwards
10. Rotate x-tick labels
11. Reverse y-axis direction
12. Save large figures to vectorized format (eps)
13. Get raw data from figures
14. Position the figure window at a given screen location
15. Profiling code with tic/toc


Posted in Data analysis | Tagged , , | Leave a comment

Annual report of my intuition about the brain (2019)

How does the brain work, and how can we understand it? I want to make it a habit to report, at the end of each year, some of the thoughts about the brain that marked me most during the past twelve months – with the hope of advancing and structuring the progress in the part of my understanding of the brain that is not immediately reflected in journal publications. Enjoy the read! And check out previous year-end write-ups: 2018, 2019, 2020, 2021.

The purpose of neuronal spikes is not to represent …

It is a common goal of many neuroscientists to “decode” the activity patterns of the brain. But can the brain be simply understood by finding observables like visual objects or hand movements that are correlated with these activity patterns? Nobody would deny that neuronal activity is correlated with sensory inputs, motor outputs or internal states. But what does it help to look at these correlations? In other words, are these representations of sensory, motor or inner world key to understanding the brain? In the following, I will argue against this “representational” view of the brain.

First, what do I mean by “representational”? In the following, a neuron represents, e.g., an external stimulus if its activity pattern is correlated with the occurrence of this external stimulus (for a stricter definition, see below). For example, a neuron in visual cortex fires when a drifting grating of 55° orientation is presented to the animal. One could say that “the neuron codes for 55° drifting gratings” or “the activity of the neuron represents 55° drifting gratings”.

To give another example, a neuron in hippocampal CA1 fires reliably when the animal is at a certain location (“the neuron codes for this and that place” or “the neuron represents this and that place”). This representational view is probably the most intuitive approach when it comes to characterizing the activity of the brain. However, simply representing the internal or external world is not enough. What else does a brain do?

Before trying to answer this question, let’s speculate why the “representational” view is so deeply engrained in systems neuroscience. First, many thinkers, including neuroscientists, have been biased towards a view of the brain as a passive receiving device, because they shaped their thoughts while sitting passively at a desk in front of an empty sheet of paper. Second, the representational view is very intuitive: finding something observable, like a visual stimulus, directly reflected in neuronal activity seems to explain why this neuronal activity happened. Third, animals or humans subject to brain experiments are often told to hold still while waiting for a stimulus or cue; an experimental design that biases the experience of the subject towards passive receiving makes it more likely to find “receptive fields”, the classic form of representations in the brain. Torsten Wiesel and David Hubel, who pursued this line of thinking in the most compelling way in the early era of neuroscience, are probably the best examples of this approach.

Apart from that, artificial neural networks, like the attractor networks put forward by John Hopfield and, more recently, deep convolutional networks, are typically based on classification tasks, e.g., to identify handwritten numbers on an envelope, or to correctly categorize the object seen in an image.

For a device that is supposed to perform such a task, it is clearly sufficient to represent the inputs and the possible outputs. But this is not the main task of a brain.

… but to act upon something

The brain, unlike a pattern classifier, is in a closed loop with its environment, being moved by it and acting upon it at the same time. It is not a passive observer, but an involved device, an organism. The brain is not designed to represent its environment. Instead, it has evolved to act upon its environment.

How could a model of the brain account for this fact? One approach to circumvent the limitation of the “representational” view of the brain is to focus on its generative abilities. The brain is clearly capable of forming mental models of the external world, of other minds, of physical laws, and also of abstract concepts. “Predictive processing” is a term covering a variety of such models, some of them applied down to the level of single neurons. In these models, predictions and prediction errors are not purely representational; instead, information is processed as a bi-directional interaction between sensory (bottom-up) and generative (top-down) content. However, these models consist of hierarchical networks that, again, perform classification tasks or other well-known toy problems from the machine learning community. Therefore, these models often appear to be representational models in disguise, rather than something completely different.

A researcher who inspired me to think in a slightly different direction is Romain Brette, who published an opinion article in 2018 called “Is coding a relevant metaphor for the brain?” (In this article, Brette also defines more precisely the meaning of “representations” and its relationship to “coding” and “causality”; but since I do not think there is a semantic agreement among neuroscientists, I decided not to use this more precise terminology here.) Brette’s article is first and foremost an article against the idea of “coding” and “representations” as useful concepts for understanding the brain. Naturally, it is a bit less clear what alternative he is actually suggesting and how this alternative could be implemented or understood. (In a comment on the opinion piece by Brette, Baltieri and Buckley convincingly argue that a set of predictive processing models that use active inference is an interesting alternative to the representational view and is close to the “subjective physics” suggested by Brette.) Although some concepts in the opinion article remain a bit vague, Brette also touches upon the idea of understanding neurons not by analyzing their representational properties, but by analyzing their actions, i.e., their spikes:

“[C]oding variables are observables tied to the temporality of experiments, whereas spikes are timed actions that mediate coupling in a distributed dynamical system.” (Romain Brette)

From this paper and from posts on his blog, it seems to me that he is trying to circumvent the natural and historical representational bias of systems neuroscience by investigating systems that are simple and peripheral enough to be amenable to descriptions based not on representations, but on actions and reactions. One area of his research, which I find particularly interesting, is the study of single-celled organisms.

The basic unit of life is the single cell

During my PhD, I gradually went from large-scale calcium imaging to single-cell whole-cell recordings. I was fascinated by the complexity of active conductances and the variability between neurons, and as I mentioned in last year’s write-up, I also got interested in complex signal processing in a single neuron.

Therefore, investigating unicellular organisms in order to understand neurons, and ultimately the brain, was not so far from my intuition anyway. And although neurons do not move their processes much and seem rather immobile and static at first glance, there are many examples of single-celled organisms that exhibit quite intriguing behaviors and remind us of what single cells are capable of. Just watch this movie of a teardrop-shaped ciliate that uses a single long process to search for food:

And it’s just a single cell! Interestingly, these ciliates not only have an elaborate behavioral repertoire, but also use spikes to trigger these behaviors. (Check out this short review on “Cell Learning” to get an idea about what single-cell organisms are able to learn.)

At the beginning of life, at least, the single cell was the basic unit of life, receiving information about the external world, using its own memory and acting upon the world. Of course, many cell types in humans are far more specialized and cannot really be considered a basic unit of life, since they serve a very specific task, like forming part of a muscle or sitting in the germline. However, information-processing cells, like cells of the immune system or neurons in the brain, which live at the interface between the inputs and outputs of an organism, are much more likely to be similar to these unicellular forms of life. They make most of their decisions on their own, based on local signals, without asking a higher entity what to do.

If we look at life through human eyes, our own mind and body suggest that we search for an understanding of neurons from the perspective of the whole brain (or the whole body). However, if we look at life from a greater distance, it shows its highest richness at the level of single cells, and it would therefore make sense to search for an understanding of neurons from the perspective of a single neuron.

The objective function of a single cell

What does it mean to understand a single cell? A typical biologist’s approach would be to compile a list, as complete as possible, of all its behaviors and states when exposed to certain conditions. For example, a hypothetical cell senses a higher concentration of nutrients at one of its distal processes and therefore moves in this direction. Conversely, if it senses a lower concentration in this direction, it moves in the opposite direction. If the nutrient level is overall high, it does not move at all. If it is overall low, it tries to move away in a random direction. This is a biologist’s list of behaviors.

A more compressed, and in my opinion therefore better, understanding would be to write down the “goals” of this single cell. Or, to abuse an expression used by mathematicians and machine learning people, to write down the objective function (or loss function) of the single cell. For the above-mentioned hypothetical cell, this would simply be: move to places with high nutrient concentrations.
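To make this concrete, here is a minimal sketch (with an invented one-dimensional nutrient landscape and arbitrary thresholds, purely for illustration) showing how the biologist’s list of four behaviors collapses into the single objective “move to places with high nutrient concentrations”:

```python
import random

def nutrient(x):
    # Hypothetical 1-D nutrient landscape with a single peak at x = 5
    return max(0.0, 10.0 - abs(x - 5.0))

def step(x, high=8.0, low=1.0):
    """One movement decision of the hypothetical cell.

    All four listed behaviors follow from one objective:
    climb the nutrient gradient.
    """
    here = nutrient(x)
    if here >= high:
        return x                               # nutrients plentiful: stay put
    if here <= low:
        return x + random.choice([-1.0, 1.0])  # starved: move in a random direction
    # Otherwise compare the concentration sensed at two opposite "processes"
    # and move toward the higher one.
    return x + (1.0 if nutrient(x + 0.5) > nutrient(x - 0.5) else -1.0)

x = 0.0                  # start away from the peak
for _ in range(20):
    x = step(x)
# The cell climbs to x = 3, where nutrient(x) = 8 and it stops moving.
```

The point of the sketch is that the four if-branches are not four separate rules to memorize; they are all consequences of one compressed objective.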

If we shift back from ciliates in a pond to neurons in the brain, this leads to a very obvious, but also surprisingly difficult-to-answer question: what is the objective function of a single neuron?

Of course, it is not clear whether it is reasonable to assume that an objective function or optimization goal is set at the single-cell level. But given that cognition is most likely a self-organized mess that is only coarsely orchestrated by reward and other feedback mechanisms, I would tend to say that it is not unreasonable. (But see below for more detailed criticism.) Therefore, the question is: what are the goals of a single neuron? What does it want?

Let’s assume that this question is a good one. How can it be answered? I see three possible approaches:

  1. To investigate the evolution of self-sustaining cells into neurons of the central nervous system. Since objective functions of unicellular organisms seem to be rather accessible, while objective functions of neurons seem difficult to guess, one could investigate intermediate stages and try to understand the evolutionary development.
  2. To understand the development of the brain and the rules that govern the migration and initial wiring of a single neuron. This would possibly allow one to compile a list of developmental behaviors for a certain neuronal type and to extract an objective function from it. This would be an objective function not of regular operation, but of development. That is exactly what developmental neurobiology is actually doing.
  3. To observe the full local environment of a neuron (for example, its electrical inputs) and to try to extract rules that govern changes in its behavior (for example, synaptic strength or output spikes).

The third approach is probably the most difficult one because it is currently technically impossible to observe a significant fraction of the inputs of a neuron together with its internal dynamics. There have been many studies on the plasticity of very few synapses in artificial conditions in slices using paired recordings, but it is currently impossible to perform these paired recordings in a behaving animal. Even in slices, these experiments are daunting given the myriad of different neuronal cell types and the expected variability in these experiments. Effects like long-term potentiation, long-term depression, short-term facilitation, short-term depression, spike-timing dependent plasticity, immediate early genes, receptor trafficking and neuromodulatory effects are just some of many processes that are essential to understand what is happening to a single cell.

A different approach that still resonates with the idea of single cells as actors has been put forward by Lansdell and Kording in 2018. Their idea, as I understand it, is that a neuron could try to estimate its own causal effect, that is, the effect of its actions. This is possible because its actions are a discontinuous function of its state, the membrane potential. If external conditions are almost identical in two situations but the neuron fires a spike in only one of them, the neuron could extract from the received input the effect of this single spike. Therefore, the neuron would measure the causal impact of its actions on itself. This idea is very close to that of a unicellular organism that immediately feels the effects of its own actions.
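The spike-discontinuity idea can be illustrated with a small toy simulation (all numbers and the linear feedback model below are my own assumptions, not taken from Lansdell and Kording): even when the feedback a neuron receives is confounded by its own membrane potential, comparing trials just above versus just below threshold isolates the causal effect of a spike, in the spirit of a regression discontinuity design.

```python
import random

random.seed(0)
THRESHOLD = 1.0      # membrane potential at which the toy neuron spikes
TRUE_EFFECT = 0.5    # assumed causal contribution of one spike to the feedback

def trial():
    """One trial: the potential hovers near threshold; the neuron spikes iff above.

    The feedback mixes a confound (the input that also drove the potential)
    with the causal effect of the neuron's own spike.
    """
    v = random.uniform(0.8, 1.2)
    spiked = v >= THRESHOLD
    feedback = 2.0 * v + (TRUE_EFFECT if spiked else 0.0) + random.gauss(0.0, 0.01)
    return v, spiked, feedback

# Regression-discontinuity estimate: compare mean feedback just above vs.
# just below threshold, where the confounding potential is nearly identical.
window = 0.02
above, below = [], []
for _ in range(200_000):
    v, spiked, fb = trial()
    if abs(v - THRESHOLD) < window:
        (above if spiked else below).append(fb)

# Close to TRUE_EFFECT; the small residual bias shrinks with the window size.
estimate = sum(above) / len(above) - sum(below) / len(below)
```

A naive comparison of all spiking versus all non-spiking trials would be badly biased by the confound; restricting the comparison to the near-threshold window is what makes the estimate causal.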

But what could be the objective function of such a neuron? For example, the goal of a neuron that receives recurrent feedback could be to find a regime with a certain level of recurrent excitation. Recurrence could be measured by estimating the effect of the neuron’s spikes on the feedback it receives, possibly also in a dendrite-specific manner. I could imagine that objective functions almost as simple as that could make a neuronal network work, once it is embedded in a closed-loop environment with sensory input and motor output. I also think that this way of thinking could be close to ideas put forward by, e.g., Sophie Denève about predictive processing in spike-based balanced recurrent networks (for example, check out this proposal), which heavily rely on the self-organizing properties of adaptive recurrent circuits with a couple of plasticity rules.

An additional strength of the idea of using spikes to measure a neuron’s causal impact is that it could explain why neurons fire stochastically: this way, they acquire information about their causal impact that would remain hidden in a regime of deterministic firing.


Clearly, these ideas need more refinement and restructuring, and it is obvious that some of the feasible experiments (spike-timing dependent plasticity, behavioral time scale plasticity) have been done already. But I still like the idea of reframing these experiments by analyzing a potential objective function of a single neuron.

Beyond that, it would be interesting to try to understand self-organized systems of artificial neurons that are governed not by global loss functions, but by neuron-intrinsic and possibly evolving objective functions. I’m really looking forward to seeing theoretical work and simulations that do not focus on the behavior and objectives of the whole network, but on the behavior and objectives of single units, of single cells.


Unicellular organisms and neurons are both single cells. Left: the unicellular ciliate Lacrymaria olor, adapted from Wikimedia Commons (CC 2.0). Right: two-photon image of hippocampal pyramidal neurons in mice.

Appendix: the devil’s advocate

Criticism 1: Representations and coding seem to be still the best way to think about neuronal activity

Although I find the ideas above conceptually appealing, it is clear from past research that most of our understanding of the brain comes from correlating observed brain activity with sensory or motor information, because this allows us to conclude when certain brain areas or neuronal types are active and what happens if they fail. This functional anatomy of the brain might not reveal all of its design principles, but it could be necessary for making an educated guess about those principles.

In addition, our human languages are based on concepts (“you”, “house”, “hope”, “cold”) rather than on dynamical processes. For example, linguistics, the science of language, has coined the term ‘signified’, which describes the content of such a concept as a central element of language. Given this convergence between the naive use and the scientific study of language, it seems likely to me that high- and low-level representations indeed exist and are accessible in the human mind, probably as an evolutionary byproduct of mental processes that were more directly connected to actions or sensations. It is still interesting that some sensory modalities are more vaguely represented in thought and language (olfaction) than others (vision).

Criticism 2: It is so far technically impossible to record the perspective of a single neuron

It seems to be relatively easy, although not trivial, to monitor a unicellular ciliate and its environment. However, the environment of a neuron consists of all of its synaptic partners, which can number in the thousands, plus any other source of electrical or chemical neuromodulation. A single neuron itself is spread over such a large volume that it is far from being amenable to any electrophysiological or imaging technique that provides sufficient resolution in time and space.

The best candidate technique, voltage imaging of both soma and dendrites, cannot be used over periods longer than minutes due to bleaching, suffers from a low signal-to-noise ratio as well as difficult calibration, and requires imaging speeds and photon yields that are impossible to achieve with any imaging technique and fluorescence sensor that I know of.

But researchers have proved before that imperfect methods can be used successfully. I’m curious about all technical developments and what they will bring.

Criticism 3: Regarding neurons as actors is a case of anthropomorphism and ignores the interactions in complex systems that can lead to emergence

If I scrutinize my own ideas, I realize that one reason why I like the idea of neurons as actors is that it is simple and connects to the human desire for an intuitive, empathic understanding of things. For example, we tend to personify even the smallest animals, like ants, bees or ciliates, in movies, children’s books or YouTube videos. We tend to personify items in our households, like a cup, a table, or an old house; or our plants in the garden; or the ocean, the wind, a dead tree. On the other hand, I feel that we are largely unable to feel the same empathy with distributed systems, like the mycelium of a mushroom that spreads over kilometers in a forest, the distributed power grid, a large fish swarm, or a deep artificial neural network. We can feel amazement, but no empathy. I can imagine being a single water molecule that vibrates, rotates and is tossed around by the Brownian motion of its environment and by the convection of the ocean, but I cannot imagine being the emergent phenomenon of water waves breaking at the shoreline, which form the complex environment of this single water molecule.

I think there is a tendency in human thinking to avoid complexity and to impose a simple, human-like personality onto all actors. Therefore, we should observe ourselves and try to resist this tendency to personify parts of complex systems when they actually cannot be disentangled from their environment. It is easy to tell a story about specific neurons as important actors, about what they want to do and how they feel the immediate feedback of their actions, like we do when seizing an apple; but it is difficult to tell a story about complexity, which goes over our heads. Yet maybe this, which we can hardly understand, is the truth, and that is good to keep in mind.
