Open PhD position in my research group

Are you a finishing Master’s student with a quantitative background and an interest in neuroscience? This is your opportunity.

Project: You will be supervised by Dr. Peter Rupprecht and Prof. Fritjof Helmchen at the Brain Research Institute, University of Zurich. You will study animal behavior in mice together with the dynamics of subcellular calcium signals in neurons and astrocytes, and how these signals are controlled by axonal projections from the brainstem. The activation of astrocytes is important for arousal and stress, and therefore also for stress-related disorders. You will study the biological principles underlying the subcellular activation of astrocytes, building upon previous work (Rupprecht et al., Nat Neuro, 2024). In a second branch of the project, you will study the role of astrocytes in memory consolidation and neuronal oscillations, in a joint project with an optics engineer in the lab. Of course, there will always be possibilities to adapt the project to your specific skills.

Scope: You will learn how to use high-end two-photon microscopes to image the activity of neurons and astrocytes in the brain of a living animal. You will analyse large datasets in Python and/or Matlab. You will build models of subcellular signal integration to explain your experiments. You will have the opportunity to get creative and develop new methods for data analysis or microscopy. You will receive direct guidance from an expert on new methods for calcium imaging and data analysis (Peter Rupprecht) and from an experienced and world-renowned professor of neuroscience (Fritjof Helmchen).

Requirements: You have a background in physics, biophysics, neural engineering, or another quantitative discipline, and you are interested in performing neurophysiological experiments as well as extensive analyses and modeling.

Application: Please send your application, including a CV and your transcript of records, to Peter Rupprecht (rupprecht [snailsymbol] hifo.uzh.ch). Please include a letter of motivation that covers the following aspects: What talents or previous experience of yours might be relevant to this project? If you have coding or experimental experience, what was your most challenging program or project so far?

Starting date: Anytime.


How to compute ΔF/F from calcium imaging data

Many neuroscientists use calcium imaging to record the activity of neurons (or other cells in the brain). The video below was recorded by Sian Duss and me from hippocampal CA1 pyramidal cells in mice.

To make calcium imaging traces comparable across cells and recordings, most researchers use a method called ΔF/F (“delta F over F”) to normalize the fluorescence traces. However, there are different ways to compute ΔF/F, and it is not always obvious how to choose the best method and how to interpret the result.

In a guest blog post for the Scientifica Resource Center, I summarized key suggestions and guidelines on how to compute ΔF/F and how to interpret the results, in particular for GCaMP calcium indicators: How to compute ΔF/F from calcium imaging data.
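
To make this concrete, here is a minimal sketch of one common variant, a rolling-percentile baseline; the function, window length and percentile are illustrative choices of mine, not prescriptions from the guest post:

```python
import numpy as np

def delta_f_over_f(f, frame_rate, percentile=10, window_s=60.0):
    """dF/F with a rolling-percentile baseline F0 (one common approach)."""
    half = int(window_s * frame_rate) // 2
    # baseline F0: a low percentile of the fluorescence in a sliding window
    f0 = np.array([np.percentile(f[max(0, t - half):t + half + 1], percentile)
                   for t in range(len(f))])
    return (f - f0) / f0

# usage: dff = delta_f_over_f(raw_trace, frame_rate=30.0)
```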

Let me know if you have any feedback on my suggestions and guidelines!


Online spike inference with GCaMP8

Calcium imaging is used to record the activity of neurons in living animals. Often, these activity patterns are analyzed after the experiments to investigate how the brain works. Alternatively, it is also possible to extract the activity patterns in real time, decode them and control a device or computer with them. Such brain-computer-interface (BCI) or closed-loop paradigms have one important limiting factor: the delay between the neuronal activity and the control that this activity exerts over the device or computer.

For calcium imaging, this delay comprises the time needed to record the calcium images from the brain, but it is also limited by the slowness of calcium indicators. How do calcium indicators limit such an online processing step in practice? And how is this limitation potentially mitigated by the family of GCaMP8 indicators, which have been shown to exhibit much faster rise times (<10 ms) than previous genetically encoded indicators?

We addressed this question using a supervised algorithm for spike inference (CASCADE), which extracts spiking information from calcium transients. We slightly redesigned the algorithm such that it only has access to a small fraction of the time after the time point of interest:

Spike inference using only a few frames after the time point of interest (“Time after AP”). From Rupprecht et al, 2025, under CC BY 4.0 license (Figure 5c).

This modification was straightforward: since CASCADE is a simple 1D convolutional network, we only had to shift its receptive field slightly in time. The time defined in the scheme, which we call “integration time”, determines the delay for closed-loop applications like BCI paradigms based on calcium imaging. To achieve very good performance in inferring spiking activity from calcium signals, an integration time of 30-40 ms was required for GCaMP8 and 50-150 ms for GCaMP6.
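
To make the idea concrete, here is a minimal sketch of how shifting the receptive field changes the input window of such a network; this is my own illustration, not the actual CASCADE code:

```python
import numpy as np

def extract_window(trace, t, window_size=64, frames_after=3):
    """Input window for time point t, using only `frames_after`
    frames from the future (edges are padded by clipping)."""
    start = t - (window_size - 1 - frames_after)
    idx = np.clip(np.arange(start, start + window_size), 0, len(trace) - 1)
    return trace[idx]

trace = np.random.randn(1000)
offline = extract_window(trace, t=500, frames_after=32)  # regular model: 32 future frames
online = extract_window(trace, t=500, frames_after=3)    # online model: 3 frames = 100 ms at 30 Hz
```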

Time after AP (see scheme above) to reach 90% of the maximal performance for spike inference. From Rupprecht et al, 2025, under CC BY 4.0 license (Figure 5d).

The CASCADE models trained for online spike inference are available on our GitHub repository. The model names start with “Online”, are listed as usual in this text file, and can be used like any other CASCADE models. For example, you can perform spike inference with your regular CASCADE model and then repeat it with an “online” model, which will give you an impression of how well the online model performs. The only difference is that the “online” model uses only a few time points from the future, while the regular CASCADE model typically uses 32.

What does “a few time points” mean? Let’s break down a model name, for example Online_model_30Hz_integration_100ms_smoothing_25ms_GCaMP8. This model is trained for calcium imaging data acquired at 30 Hz with GCaMP8 indicators (the ground truth consisted of all GCaMP8 variants), with a smoothing of 25 ms. The crucial parameter is the integration time, here indicated as 100ms: the model uses 100 ms from the future, which corresponds to 3 frames at 30 Hz.
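
In practice, running a regular and an online model side by side could look like the sketch below; the trace file is a placeholder, and the exact call signature should be double-checked against the CASCADE documentation on GitHub:

```python
import numpy as np
from cascade2p import cascade

# placeholder file: dF/F traces, shape (neurons, time points), acquired at 30 Hz
traces = np.load('dff_traces.npy')

for model in ['Global_EXC_30Hz_smoothing25ms',                               # regular model
              'Online_model_30Hz_integration_100ms_smoothing_25ms_GCaMP8']:  # online model
    cascade.download_model(model)                 # fetch the pretrained weights once
    spike_rates = cascade.predict(model, traces)  # inferred rates, same shape as traces
```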

If you are not sure which model to select for your application, or if you need another pretrained model for online spike inference, just drop me an email or open an issue on the GitHub repository.

For more details about the analysis, including how the best choice of integration time for online spike inference depends on the noise levels of the calcium imaging recordings and potentially also on other conditions such as calcium indicator induction methods or temperature, check out our recent preprint, where we also analyzed several other aspects of spike inference with a specific focus on GCaMP8: Spike inference from calcium imaging data acquired with GCaMP8 indicators.


Detecting single spikes from calcium imaging

There are two mutually exclusive holy grails of calcium imaging: First, recording from the highest number of neurons simultaneously. Second, detecting spike patterns with single-spike precision. This blog post focuses on the latter.

Many studies have claimed to demonstrate single-spike detection, but often only under specific conditions or for a subset of neurons. At the same time, nearly as many other studies have demonstrated that such single-spike detection is not possible under their respective conditions.

In our recent preprint, we’ve added systematic analyses based on ground truth recordings as our contribution to this debate. Specifically, we analyzed how single-spike detection depends on calcium indicators (GCaMP8s, GCaMP8m, GCaMP8f; GCaMP6f, GCaMP6m; XCaMP-Gf) and on the noise levels of the recordings.

What I particularly like about our approach is that it does not rely on arbitrary thresholds for false-positive vs. false-negative detections of action potentials. Instead, we trained a deep network (CASCADE) to predict spiking activity in general – optimizing for a mean squared error loss against ground truth spike rates. We then applied this network to individual single-spike-related calcium transients, allowing us to quantify single-spike detection across calcium indicators and noise levels.

Fraction of correctly detected single, isolated action potentials. From Rupprecht et al, 2025, under CC BY 4.0 license (Figure 4e).

Without giving away all the details, I’ll say that I was pleasantly surprised by the performance of GCaMP8s and GCaMP8m! For the full analyses and more context, check out our preprint: Spike inference from calcium imaging data acquired with GCaMP8 indicators.


Protocols to check the performance of your multiphoton microscope

In an exceptionally useful paper, Lees et al. provide a set of protocols for checking the performance of your multiphoton microscope: Standardized measurements for monitoring and comparing multiphoton microscope systems (link to preprint).

The paper covers the following procedures:

  • Measuring laser power at the sample
  • Measuring FOV size
  • Assessing FOV homogeneity
  • Measuring spatial resolution (see the sketch after this list)
  • Optimizing group delay dispersion
  • Measuring PMT performance
  • Estimating absolute magnitudes of fluorescence signals
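
For a flavor of what such a measurement involves, here is a minimal sketch, entirely my own and not code from the paper, of extracting an axial FWHM from a fluorescent-bead profile by fitting a Gaussian:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, amp, z0, sigma, offset):
    return amp * np.exp(-(z - z0)**2 / (2 * sigma**2)) + offset

# hypothetical axial intensity profile through a sub-resolution bead
z = np.arange(0, 20, 0.5)                        # axial position (um)
profile = gaussian(z, 1.0, 10.0, 2.1, 0.05)      # simulated measurement
profile += 0.02 * np.random.randn(len(z))        # measurement noise

popt, _ = curve_fit(gaussian, z, profile, p0=[1, 10, 2, 0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])  # FWHM = 2.355 * sigma
print(f"Axial FWHM: {fwhm:.2f} um")
```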

Check it out – it is definitely one of those reference papers that are great to have around in the lab!


Non-linearity of calcium indicators: history-dependence of spike reporting

Calcium indicators are used to report the calcium concentration inside single cells. In neurons, calcium imaging can be used as a readout of neuronal activity (action potentials). However, some calcium indicators like GCaMP transform the calcium concentration of a cell into a fluorescence trace in a non-linear manner, following a sigmoidal curve:

Nonlinear relationship between the calcium concentration and the fluorescence of the calcium indicator. From Rupprecht et al, 2025, under CC BY 4.0 license (Figure 3a). Scheme loosely inspired by Rose et al. (2014).

This means that a small change in calcium concentration (ΔC1) may increase the fluorescence (ΔF1) only slightly, while the same change at a higher starting calcium concentration (ΔC2) leads to a much more prominent increase (ΔF2).

What many people do not consider is that this property affects complex events with multiple bursts or action potentials in a sequence. The first spikes may only elicit small fluorescence changes, while the later spikes – with a large fraction of calcium ions still bound to the calcium indicators – result in much larger fluorescence changes. The consequence is a history-dependent bias that underestimates the early and overestimates the late phases of neuronal activity, as demonstrated with these simulated data:


Simulated fluorescence traces for non-linear sigmoidal (top, gray) and linear (bottom, black) transfer curves. From Rupprecht et al, 2025, under CC BY 4.0 license (Figure 3d).

It is striking to see that for the non-linear calcium indicator, the first spike is barely visible, while the later ones are disproportionately amplified compared to the linear fluorescence trace.
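
Here is a minimal toy version of such a simulation, with an arbitrary Hill-type sigmoid of my own choosing rather than the exact parameters from the preprint:

```python
import numpy as np

def calcium_trace(spike_times, t, tau=0.5):
    """Toy intracellular calcium: exponentially decaying unitary transients."""
    c = np.zeros_like(t)
    for ts in spike_times:
        c += np.where(t >= ts, np.exp(-(t - ts) / tau), 0.0)
    return c

def fluorescence(c, nonlinear=True, kd=2.0, n_hill=3.0):
    """Sigmoidal (Hill-type) vs. linear indicator transfer curve."""
    if nonlinear:
        return c**n_hill / (c**n_hill + kd**n_hill)
    return c / (c.max() + 1e-12)  # linear, scaled to a comparable range

t = np.arange(0, 5, 0.01)              # time in seconds
spikes = [1.0, 1.1, 1.2, 1.3, 1.4]     # a burst of 5 spikes
c = calcium_trace(spikes, t)
f_nonlin = fluorescence(c, nonlinear=True)   # first spike barely visible
f_lin = fluorescence(c, nonlinear=False)     # all spikes contribute equally
```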

Such history-dependent effects are also important to keep in mind when comparing neuronal activity dynamics across calcium indicators. GCaMP6f, for example, is highly non-linear, showing a strong history-dependent effect. GCaMP8m, on the other hand, seems to behave more linearly in cortical pyramidal neurons. Therefore, the dynamics of complex events cannot be compared across these calcium indicators without taking these effects into account. And spike inference (e.g., using CASCADE) must account for these history-dependent effects as well!

Read more about this (and several other related analyses!) in Figure 3 of our new preprint: Spike inference from calcium imaging data acquired with GCaMP8 indicators.


How to use Google Scholar as a neuroscientist

Google Scholar is a search engine for scientific publications. There are alternatives like PubMed (not a search engine but a database, often used in the medical field), Semantic Scholar (also a database, but with richer annotations), and Citation Gecko (which discovers networks of forward and backward citations, handy for checking for missed papers), all of which I use from time to time. There are also new tools based on large language models, like perplexity.ai or Litmaps, which still have to prove their value in the long run. Personally, I prefer Google Scholar. It casts the widest net among all these tools, covering not only journal publications but also conference proceedings, theses, and patents. This strategy also indexes crap from predatory journals, which is usually easy for a scientist to spot, but also some hidden gems that you would otherwise miss.

Google Scholar can be used not only as a search engine but is at the same time a tool to assemble the publications associated with a single researcher in a Google Scholar profile. Here are my pieces of advice on how to make the best use of your Google Scholar profile as a researcher in neuroscience (and related disciplines). Let me know if I missed something important!

1. Create a public Google Scholar profile

Once you have a Google account, it is pretty straightforward to create a Google Scholar profile. As soon as you have your own publications associated with this profile, it also makes sense to make the profile public – this makes you searchable on Google Scholar as a person with an ID. Since most people in the neurosciences use Google Scholar (with a sizable minority of medically oriented or traditional researchers still preferring PubMed), this small step makes you more visible as an identifiable researcher, without having to fall back on ORCID or other identifiers.

2. Curate your publications

From time to time, you will be informed by Google Scholar about new publications that are associated with your name. You can tell the algorithm to add new publications automatically (I’d choose this option only if you are very busy) or to let you decide each time (recommended). In the latter case, don’t forget to forward the updates from Google Scholar in case you are not using the associated Gmail account for your daily business.

Such manual curation is a good use of your time, as it makes your Google Scholar page more useful and readable for others. To do so, delete publications that are erroneously associated with your account. If publications that are listed separately are versions of the same study, merge them as shown below:

Sometimes, Google Scholar decides to include non-peer-reviewed scientific work in its search results, for example your thesis. I noticed that Google Scholar also includes posts from this blog if they are structured like a scientific article and include a list of references. For example, the following email showed up when I published a blog post reviewing papers on astrocytic physiology:

I believe it is nice that Google Scholar also picks up these instances of scientific output in its results, but I don’t include them in my Scholar profile, so as not to confuse the profile’s typical human visitor.

In theory, it is possible to add your own items to Google Scholar, based on GitHub repositories or other scientific output that is not found by Google Scholar. However, I would not recommend it, since it may appear as if you were trying to artificially inflate the number of publications in your profile.

3. Annotate shared first authorships

Like most tools for literature search, Google Scholar normally does not display “equally contributing” first authors. For many projects in experimental neuroscience, however, two or more authors contributed equally, without this being reflected in Google Scholar’s author list. To fix this issue and to make your Google Scholar profile a more accurate reflection of your contributions to publications, you can click on an article, hit the “Edit” button and add asterisks (*) to the equally contributing first authors. Note that this procedure will update the author list in your Google Scholar profile but not in the Google Scholar search results. For example, an item may then appear like this:

4. Set up personalized alerts to stay up to date

Apart from being a useful web display of your publications, Google Scholar can also be used to keep you updated about current research related to your own publications. To do so, go to your Google Scholar profile page and click on the “Follow” button in the top right corner. This window will pop up:

Hit the checkbox “New articles related to my research”, and you will be informed via email about publications related to your own (published) research. From my experience, >90% of these “related publications” are irrelevant and can be ignored, while the remaining <10% are useful. It’s certainly better than relying on Bluesky or other social media, or going through the tables of contents of journals. The updates are largely independent of journal names and not biased by how much social media a specific researcher uses. Therefore, it is also possible to spot relevant and well-done research published in smaller journals or by less prominent researchers, without losing track of “high profile” studies.

Of course, if you don’t yet have any publications, it does not make much sense to receive alerts about publications related to your work. In this case, go to the Google Scholar profile page of your supervisor, scientific hero, postdoc colleague or anybody who in your opinion does great research. Then, hit the “Follow” button on their Google Scholar profile page and select “New articles related to this author’s research”, and you will be updated about research close to your interests.

Of course, getting updates about new research can be quite stressful at times, and it is impossible to fully stay up to date with the literature. If you have the feeling that you cannot keep up with the “related” literature anymore, that’s okay. Just cancel your Google Scholar alerts, do some real science and come back to literature search later. Of course, you may miss some of the hottest developments. But following all the newest trends and hot topics can also be stressful, and we all should do our very best to purge any sources of unnecessary stress and distraction from our work life as researchers.

5. Conclusion

I’m a big fan of Google Scholar. But it is always important not to get sidetracked by pure citation counts. Don’t judge a person based on their profile with its citation counts per year and numbers of publications. Always pick one or two first-author publications that you can judge scientifically, and check out what is behind the title, in the abstract, the figures, the methods sections, or the acknowledgements. Citation metrics can be manipulated and gamed, and nothing replaces a deep dive into real science.

Google Scholar also comes with quite good documentation. Check it out! For further information about the background of Google Scholar, see this article on Wikipedia.


Annual report of my intuition about the brain (2024)

How does the brain work, and how can we understand it? To view this big question from a broad perspective, I’m reporting some ideas about the brain that struck me most during the past twelve months and that do not overlap with my own research focus. Enjoy the read! And check out previous year-end write-ups: 2018, 2019, 2020, 2021, 2022, 2023.

If you want to understand the brain, should you work as a neuroscientist in the lab? Or teach the subject to students? Or simply read textbooks and papers while having a normal job? In this blog post, I will share some thoughts on this topic.

1. Why do I do research?

There are three reasons why I’m doing research in neuroscience:

First, I like this job and what it entails: working with technology, building microscopes, working with animals, coding, analyzing data, and exploring new things every week. If you are familiar with this blog, you probably know how much I like these aspects of my work.

Second, with my research I want to contribute to the basic knowledge about the brain within the science community. I believe that this deepened basic understanding of the brain will ultimately have a positive impact on our society, on how we see ourselves and on how we treat our own and other minds in health and disease. In this line of thought, I see myself as a servant to the public.

Third, and this is the focus of today’s blog post, doing research in neuroscience also enables me to increase and improve my own understanding of the cellular events in the brain, of thought processes, and of life in general. This is what drew me to science in the beginning.

In contrast to this last point, work as a researcher embedded in the science machinery of the 21st century tends to focus on something else, for understandable reasons. Most of the daily work of scientists is focused on making discoveries, on having impact, and on being seen as successful. It almost seems natural to believe that making discoveries is the same as better understanding the brain. And although making discoveries might be the best way to increase humanity’s overall insight into the brain, it might not be the best way to increase your own understanding of it.

2. Understanding by reading from others (“passive” research)

One of the tasks that kept me most busy in early 2024 was the final preparations for our publication on centripetal propagation of calcium signals in astrocytes. I still believe that this is my most important contribution to science so far, and I do think that I learnt much during this research project. However, the final steps of publication consisted of rather tedious interactions with copy editors, formatting of figures and file types, preparing journal cover suggestions, as well as advertising the work through discussions, talks and public announcements. All of this may be helpful for my career and useful for the potential audience, and can be fun as well, but it certainly does not advance my own understanding of the brain. I felt as though I was treading water – being very busy and working hard, yet at the same time getting the impression that I had temporarily lost touch with the heart of research.

At the same time, I had to step in on short notice to give a lecture to medical students about the motor system, covering cortex, basal ganglia, cerebellum and brainstem. Since I didn’t find the existing lecture materials fully satisfying, I began researching the topic myself. Coming from physics, and having been interested in principles of neuroscience like the dynamics of recurrent networks or dendritic integration rather than in anatomical details, I had never had a firm grasp of the motor system. Now, I was forced to understand the connections between the motor cortex, cerebellum, and basal ganglia, and their effect on brainstem and spinal cord, in just a few days. What roles do the “direct” and “indirect” pathways play in the striatum, and how are they influenced by dopamine? How does current research view these established ideas? What role does the cerebellum play? How should we treat David Marr’s idea of the cerebellum? What is currently known about the plasticity of Purkinje cells in the cerebellum? My understanding of these topics had been superficial at best. The challenge of delivering a two-hour lecture on this topic – and being able to answer any question from students – forced me to think seriously about these topics.

After an intense weekend full of struggles, textbooks and review papers, not only had I prepared an acceptable lecture, but I had also made real progress in my personal understanding of how the brain works. It wasn’t directly relevant to my own research, but over the following months, I noticed that far more studies from the field of motor systems suddenly caught my attention – because now I understood better what they were about. Retrospectively, I could now also better understand the work on the brainstem done in the neighboring lab of Silvia Arber during my PhD.

Therefore, the most important progress in my understanding of the brain during this time in early 2024 didn’t come from my own active research or from a conference where I could catch up on the latest developments. Instead, it came from a weekend when I was forced to dive into a topic slightly outside of my comfort zone. Let’s call this “passive” research, as it did not involve my own lab activities but ‘only’ researching the conclusions of other scientists.

If I wanted to really understand the brain, wouldn’t it make sense to work this way more often? That is, instead of spending five years applying my time and expertise to a narrowly defined scientific question, shouldn’t I first aim to better comprehend the available knowledge?

3. The limitations of “passive” research for understanding the brain

A few years ago (around 2007-2009), I had already tried to understand the brain in such a “passive” way: by systematically absorbing and mapping out the existing knowledge, while not yet being an active researcher myself. At the time, I was in the middle of my physics studies. Physics itself wasn’t my main interest but rather a means to learn the methods and ways of thinking that are necessary to penetrate neuroscience and ultimately understand the brain. At the same time, however, I realized that I lacked some basic knowledge in neuroscience. For example, I had some notions about the different brain regions, but these notions were very vague and consisted of little more than names.

To remedy this lack of knowledge, I started a project that I called “The brain on A4” (A4 is a paper format similar to the US letter). The idea was to search the literature for a given brain region, summarize the findings, and condense them onto a single A4 page. My vision was to eventually wallpaper my room with these 100 A4 pages so that the summarized facts and essential insights about all relevant brain regions would always be present as anchor points for my thought processes. This way, they would gradually sink into my mind and provide the foundation for a deeper understanding through reflection and the integration of new thoughts.

For illustration, here are two pages of the “Brain on A4” project, written in German (with a few French textbook excerpts included, since I was also studying at a French university at the time).

In short, the idea didn’t work. Using this approach, I covered some brain regions, and when I read through these A4 drafts today, they don’t seem completely off base. But the concepts that are now familiar to me and connected to other topics had only vague meanings back then when I copied them from Wikipedia, Scholarpedia or review articles that took me several days to go through. I could recall the keywords and talk about them, but I couldn’t truly grasp them when I put myself to the test.

Why? Because knowledge must grow organically. It needs both contextual embedding and emotional anchorage. This embedding can be a discussion partner, or a project where this knowledge is applied or tested, or, at the minimum, it can be the exam at the end of the semester where the knowledge is finally “applied”.

In addition, I was also lacking the toolset to probe the knowledge. Unlike maths, physics or foreign languages, knowledge about the brain comes in vague and confusing sentences that are difficult to evaluate. That is, for most statements about the brain, it is difficult to say whether they are indeed true or what they mean. The equation for potential energy E = m·g·h can be fully understood through derivations or examples. The statement “The major input to the hippocampus is through the entorhinal cortex” (source), on the other hand, only makes sense (if at all) when the entorhinal cortex is well understood (which is not the case). In addition, neuroscientific publications are full of wrong conclusions and overinterpretations. For example, if I randomly take the latest article about neuroscience published in Nature, I get this News article about a research article by Chang et al. The main finding of the study is indeed interesting and worth reporting: recent memories are replayed and therefore consolidated during special sleep phases of mice in which the pupil is particularly small. The News article, however, stresses that this segregation of recent memories into distinct phases may prevent an effect called catastrophic forgetting, when existing memories are overwritten because they use the same synapses. This interpretation is quite misleading. Catastrophic forgetting and the finding of this study are vaguely connected but not closely linked, which becomes quite clear after reading the Wikipedia page on catastrophic forgetting. As a lay person, it is almost impossible to understand that this tiny part of the discussion, which featured prominently in the subheadings of the News article, is only a vague connection that does not really reflect the core experimental findings of the study.

Similarly, when I made the A4 summaries, I meticulously listed the inputs and outputs of brain regions. But what could I make of the fact that the basal ganglia receive input from cortex, substantia nigra, raphe and formatio reticularis (as in my notes above), if all of those brain areas were similarly defined as receivers of input from many other areas? Back then in 2008, I wrote about the cortico-thalamic-basal ganglia loops and how they enable gating (see screenshot above), but only when I worked through the details again with a broader knowledge base 16 years later did I manage to see the context in which I could embed the facts and make them stick in my memory. And it took me these many years of working in neuroscience to slowly grow this context.

4. The limitation of a systematic approach to understanding the brain

A second reason why this approach of mapping the brain onto A4 pages didn’t work may have been its systematic nature. A systematic approach is often useful, especially when coordinating with collaborators or orchestrating a larger project. However, I’ve come to believe that a more organic – or even chaotic – approach can be more effective, especially when it comes to understanding something. A systematic approach assumes that you can determine in advance what needs to be done, in order to faithfully follow this structure afterwards. For the A4 project, the systematic structure was the division into “brain regions.” Of course, the brain cannot be understood by just listing the properties of brain regions; many important concepts live on a different level, between brain regions, or on the cellular level. I noticed the limitations of my approach soon enough and added A4 pages on other topics that I deemed relevant, like “energy supply to the brain” and “comparison with information processing in computers.” The project lost its structure, for good reason. And soon after, before the structure became completely chaotic, I abandoned the project entirely.

One thing that I learnt from this failure is how a systematic approach can sometimes hinder understanding. In formal education, this truth is often hidden because curricula and instructors already provide the structuring of knowledge. But when it comes to acquiring new knowledge and insight, rather than merely processing and assembling pre-existing knowledge, this systematic approach must be continuously interrupted and re-invented to enable real progress. When I first heard about the hermeneutic circle, a concept from hermeneutics (the theory of understanding texts), I had the impression that it was an accurate description of the process of understanding. Following the hermeneutic circle, deeper understanding is approached not on a direct path but iteratively, by drawing circles around the object of understanding, gradually constructing a web of context and possibly an eventual path towards the goal. In this picture, the process of understanding is diffuse and unguided, and is corrected only occasionally by conscious deliberation and a more systematic mind. As a consequence, the object of interest can only be treated and laid down systematically once understanding has been reached, but not on the way to this point.

5. The limitation of a non-systematic approach to understanding the brain

However, the unsystematic approach to understanding the brain has a major drawback: you lose the sense of your own progress, and you lose the overview of the whole. Often, progress is incremental and, over years, so slow that you hardly notice you’ve learned something new – leading to a lack of satisfaction. And, even more importantly, you lose the big picture.

This may also be one of the greatest barriers to understanding the brain: the possible inability of our minds to comprehend the big picture of such a complex system, in which different scales are so intricately intertwined. A few years ago, I wrote a blog post about this topic (“Entanglement of temporal and spatial scales in the brain, but not in the mind”), which I still find relevant today. Can we, as humans with limited information-processing capacity and working memory, understand a complex system? Or more precisely: what level of understanding is possible with our own working tool, the brain, and what kind of understanding lies beyond our reach?

Recently, Mark Humphries wrote a blog post to address a similar question. He speculated that the synthesis and understanding of scientific findings may, in the future, no longer be carried out by humans but by machines – for example, by a large language model or a machine agent tasked with understanding the brain. Personally, I find this scenario plausible but not desirable. An understanding of the brain by an artificial agent that is beyond my own ability may be practically useful, but it doesn’t satisfy my scientific curiosity. Therefore, I believe that we should focus on how to support our own understanding in its chaotic nature and, perhaps retrospectively, wrest structure and meaning from this chaos. How? By writing for yourself.

6. The importance of writing things up for yourself

As I mentioned earlier, I believe that understanding the brain is a chaotic and iterative process that does not proceed systematically or along a predictable trajectory. Instead, it involves trying out different approaches and constantly adopting new perspectives. For me, these approaches include reading textbooks and preparing lectures; reading and summarizing current papers; and conducting my own “active” research in the lab and in silico.

During this process, I found that shifting one’s perspective can be particularly helpful in gaining a better understanding. To gain such new perspectives, I regularly read open reviews, which often present a picture different from the scientific articles themselves. Or, I like to explore new, ambitious formats that shake off the dust of the traditional publication system and attempt to take a more naive view of specific research topics. A venue that is still fresh in spirit and that I can recommend for this purpose is thetransmitter.org.

However, the best method to adopt knowledge and integrate it into one’s own world model is to process the knowledge in an active manner. The two methods I find most useful are mind maps and writing. Usually, I use mind maps when I’m completely confused, either about the direction of a project or about my approach to neuroscience in general. I just start with a single word circled in the center of a page and then add thoughts in an associative manner for 20-30 minutes. The result of this process is not necessarily useful for others. However, seeing the keywords laid out before me, I can often spot the missing links or identify things that should be grouped together, or grouped differently.

Below is an example of such a mind map. I drew this map in 2016, a bit more than two years into my PhD, at a stage when I was struggling to shape my PhD project. Unlike most of my mind maps, which are hand-drawn and therefore almost illegible to others, this one was drawn on a computer (in Adobe Illustrator, to play around with the program). I was brainstorming about the role of oscillations in the olfactory bulb of zebrafish (check out this paper if you’re interested). Although I did not follow up on this project, some of the ideas are clearly predecessors of analyses in my main PhD paper on the olfactory cortex homolog in zebrafish. The mind map is basically a loosely connected assembly of concepts and ideas that had been living in my thoughts, often inspired by reading computational neuroscience papers or by discussions with Rainer Friedrich, my PhD supervisor. I used this map to visualize, re-group and connect these ideas:

The second method is writing, and I believe that it is the only true method to really understand something. In contrast to reading, writing is an active process, and it is so much more powerful in embedding and anchoring knowledge in your own mind. You may have heard of the illusion of explanatory depth, the tendency of our brain to trick us into thinking we understand something simply because we’ve heard about it or can name it. Only when we attempt to explain or describe a concept do we realize how superficial our thoughts were and how shaky our mental models really are. Writing is a method for systematically destroying these ill-founded mental structures. (Expressing an idea in mathematical or algorithmic terms is even more precise and therefore even better for this purpose!) When we have destroyed such an idea, we shouldn’t mourn the loss of a seemingly brilliant concept but instead celebrate the progress we’ve made in refining our understanding.

In addition, writing has always been a form of storytelling. By putting our understanding of the brain into words – even if those words are initially fragmented, scattered, and contradictory – writing seeks to find meaning, identify patterns, and embed details into a larger whole. With a bit of practice, writing does all of this for you.

Importantly, I’m not talking about writing papers or grant proposals here. In those cases, you have a clear audience in mind (editors or reviewers) and eventually tailor your writing to meet their expectations. And you will be happy and satisfied when you produce something that meets the standards for publication. Instead, I’m talking about writing for oneself. This mode of writing confronts your own critical voice and follows ideas without regard for what the text would look like. And I believe that this way of writing, which is not directly rewarded by supervisors or reviewers, is the most useful in the long run.

I believe that many researchers in neuroscience (and maybe you as a reader) initially started to work as neuroscientists not because they wanted to be famous or successful or well-known but because they wanted to understand how the brain works. So if you want to take this seriously, write for yourself.


A resource paper for building two-photon microscopes

Building microscopes in the lab is a skill that is rarely taught at university. It is no coincidence that most people who have learnt to build microscopes have done so in the lab, from other researchers or engineers. Usually, one needs to be lucky to find pieces of knowledge in random papers, in discussions with experts, or when browsing blogs like the one you’re currently reading. Part of the problem is that papers describing the design and construction of novel microscopes are often challenging to translate into practice, even for experts, because they describe ideas and concepts but rarely provide precise assembly instructions together with the rationale behind them.

In their manuscript An open-source two-photon microscope for teaching and research, Schottdorf et al. take a different route. Instead of presenting a novel microscope design intended for future experiments, they describe the rationale and principles behind a microscope design that has already been used, tested and refined over many years in a very successful systems neuroscience lab.

Assembly of lens groups for tube lens and scan lens. From (Schottdorf et al, 2024), under CC BY 4.0 license (Supplementary Material, Figure 8).

The paper includes many interesting and useful pieces of knowledge. Among those, I would like to highlight only a few:

  • The assembly instructions and rationale for a custom scan lens and tube lens based on off-the-shelf components (see the Results section, but also the Discussion with comments on ideas from astronomy and photography).
  • A similar design suggestion and analysis for the detection path.
  • A discussion of why an axial FWHM of 5 μm is, in the authors’ view, a pragmatic compromise for in vivo imaging with movement artifacts.
  • The interesting side note that ±28 V instead of ±24 V power supply specifications are advantageous for galvo scanner performance.
  • The measurement of the dispersion of this specific two-photon system, which can be considered typical for two-photon microscopes (approx. −20,000 fs²; see the sketch after this list).
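
To get a feeling for why this number matters, here is a minimal sketch, my own illustration using the standard formula for Gaussian pulses rather than anything from the paper, of how much uncompensated GDD of that magnitude would stretch a typical femtosecond pulse:

```python
import numpy as np

def broadened_fwhm(tau_in_fs, gdd_fs2):
    """FWHM of a transform-limited Gaussian pulse after accumulating
    group delay dispersion (GDD); standard Gaussian-pulse formula."""
    factor = 4 * np.log(2) * gdd_fs2 / tau_in_fs**2
    return tau_in_fs * np.sqrt(1 + factor**2)

# e.g., a 140 fs pulse traversing -20000 fs^2 of uncompensated dispersion
print(broadened_fwhm(140, -20000))   # ~420 fs, i.e. about 3x longer
```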

In addition, the manuscript comes with very nice documentation on GitHub, including a detailed assembly protocol and a list of parts with prices (laser, objective, shutter, power meter, PMTs, etc.).

Altogether, a great resource that is definitely worth a look.


Spike inference with GCaMP8: new pretrained models available

Calcium imaging is only an indirect readout of neuronal activity via fluorescence signals. To estimate the true underlying firing rates of the recorded neurons, methods for “spike inference” have been developed. They are useful to denoise calcium imaging data and make them more interpretable. A few years ago, I developed CASCADE, a supervised method for spike inference based on deep networks. I have been updating and maintaining CASCADE ever since, and this maintenance work has been a starting point for several collaborations and friendly interactions over the last years (for example, check out this recent preprint on spike inference from spinal cord neurons).

Originally, the CASCADE algorithm was trained on a ground truth database that consisted primarily of recordings with GCaMP6. However, how will the algorithm perform on new indicators like GCaMP8, with its much faster rise times? To address this question, I now trained CASCADE models on GCaMP8 ground truth and evaluated whether these models performed better than previous models. The short answer is – the retrained models performed clearly better:

I’m currently in the process of dissecting this improvement: Is it due to differences in rise times, different non-linearities or other differing properties of the two indicator families? The results of these analyses turned out to be more fascinating than I expected and therefore take more time to understand and analyze, but I’m planning to have this analysis written up within the next 3-6 months.

In the meantime, however, feel free to use the new CASCADE models specifically trained with and for GCaMP8 – they really do make predictions better! (And please apply these models only to GCaMP8 data; the previous models are still better for anything with GCaMP6!)

You will find the new models for CASCADE as usual in the list of available pretrained models. For example, instead of the GCaMP6-trained model Global_EXC_30Hz_smoothing25ms, specify the GCaMP8-trained model GC8_EXC_30Hz_smoothing25ms_high_noise to infer spike rates with the predict() function of CASCADE.

A technical note: These models are pretrained on all available GCaMP8 ground truth, mixing together GCaMP8f, GCaMP8m and GCaMP8s. This procedure results in more robust models due to the larger ground truth database, but absolute inferred spike rates are slightly biased due to the different spike-evoked fluorescence amplitudes of the three indicators (-30% underestimate for GCaMP8f and GCaMP8m, +60% overestimate for GCaMP8s). In the near future, CASCADE will also include models specific to each of these indicators. These models will probably be slightly less robust but will provide less biased spike rates. However, if you are not specifically interested in very precise absolute spike rates, I would for now recommend the general GCaMP8 models that are already available. They are not only robust and very powerful but also provide a good rough estimate of absolute spike rates.
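
If precise absolute rates matter to you before the indicator-specific models arrive, and assuming the biases quoted above act as simple multiplicative factors (my own back-of-the-envelope reading, not a recommendation from the preprint), a rough rescaling could look like this:

```python
# Assumed multiplicative biases of the general GCaMP8 models, derived from the
# numbers quoted above: a -30% underestimate means inferred ~= 0.7 * true rate.
BIAS = {'GCaMP8f': 0.7, 'GCaMP8m': 0.7, 'GCaMP8s': 1.6}

def rough_rate_correction(inferred_rates, indicator):
    """Back-of-the-envelope rescaling of inferred absolute spike rates."""
    return inferred_rates / BIAS[indicator]
```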

Update 2024-09-19: Models pretrained on specific indicators (e.g., GC8s_EXC_30Hz_smoothing25ms_high_noise for GCaMP8s) are now available online. Check out the full list of models via the Cascade code or via this continuously updated list on GitHub.

Update 2025-03-11: A preprint describing the analyses, the pretrained models and their applications to GCaMP8 is now on bioRxiv: https://www.biorxiv.org/content/10.1101/2025.03.03.641129.

If you have questions, please reach out via comments on the blog, issues on Github or via email!
