Hodgkin-Huxley model in current clamp and voltage clamp

As a short modeling session for an electrophysiology course at the University of Zurich, I made a tutorial that lets students play around with the Hodgkin-Huxley equations in a Python Colab notebook, which does not require them to install Python. You’ll find the code in a small GitHub repository: https://github.com/PTRRupprecht/Hodgkin-Huxley-CC-VC

Using the code, the students can not only play around with the Hodgkin-Huxley equations, but also replicate in silico the experiments they have done when patching cells in slices (including voltage clamp experiments).

It is really rewarding to be able to reproduce current clamp experiments (recording the membrane potential) and voltage clamp experiments (recording the currents while clamping the voltage to a constant value), because this makes it possible to computationally replicate the experiments and plots that Hodgkin and Huxley generated experimentally.

Below, you see the output of part of the code, the result of a simulation of a Hodgkin-Huxley model. The top configuration was run in current clamp, with a current pulse injected between 40 and 80 ms, which triggered a single action potential. The lower configuration was run in voltage clamp, with the holding potential stepping from -70 mV to -30 mV between 40 and 80 ms. You can clearly see the active conductances (the inactivating sodium conductance in blue and the non-inactivating potassium conductance in orange):
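For readers who want to see the gears turning, the core of the current clamp simulation can be sketched in a few lines of Python. This is a minimal forward-Euler integration with the standard Hodgkin-Huxley parameters; the function and variable names are mine and do not necessarily match the notebook in the repository.

```python
import numpy as np

# Standard HH parameters (units: mV, ms, mS/cm^2, uA/cm^2, uF/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent rate constants for the gating variables m, h, n
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate_current_clamp(I_amp=10.0, t_on=40.0, t_off=80.0,
                           T=120.0, dt=0.01):
    """Forward-Euler integration of the HH equations in current clamp."""
    n_steps = int(T / dt)
    t = np.arange(n_steps) * dt
    V = np.empty(n_steps)
    V[0] = -65.0
    m, h, n = 0.05, 0.6, 0.32   # approximate steady-state values at rest
    for i in range(1, n_steps):
        v = V[i - 1]
        I_ext = I_amp if t_on <= t[i] < t_off else 0.0
        # Ionic currents: transient Na+, delayed-rectifier K+, leak
        I_Na = g_Na * m**3 * h * (v - E_Na)
        I_K = g_K * n**4 * (v - E_K)
        I_L = g_L * (v - E_L)
        V[i] = v + dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
    return t, V
```

A call like `t, V = simulate_current_clamp()` returns the time axis and the membrane potential; plotting `V` against `t` shows the resting potential near -65 mV and action potentials during the current pulse.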

Posted in electrophysiology, Neuronal activity

Interview with Bruno Pichler

Bruno Pichler studied medicine, obtained a PhD in neuroscience, worked in the labs of Arthur Konnerth, Tom Mrsic-Flogel and Troy Margrie, and was R&D manager at Scientifica, before founding his own company, INSS, “to provide the international neuroscience community with bespoke hard- and software solutions and other consulting services”. He is not only a highly experienced builder and designer of two-photon microscopes but also a very friendly and open human being. So I was very happy to have the opportunity to ask him a couple of questions.

The interview took place on September 8th 2020 in a virtual meeting and lasted around 1.5 hours. Afterwards, I transcribed, shortened and edited the recorded interview. For a brief orientation, here’s an ordered but non-exhaustive list of the topics discussed:

Why neuroscience?
How to get into optics and programming
Role models
Projects in academia
Why didn’t you become a PI?
At Scientifica
Founding one’s own company
Brexit
Performance checks for a 2P scope
Preamplifiers
How to clean optics
The detection path
Teaching people how to solve problems
Mesoscopes
mesoSPIM
AODs
Fixed-wavelength lasers
Multiplexed acquisition
Scanimage
Okinawa
Advice to young scientists
Bluegrass music

If you find any questionable statements, you should consider blaming my editing first.

And since the entire interview is quite long, take a cup of tea, and take your time to read it.

Peter Rupprecht: You studied medicine before you started your PhD. What was your motivation to do a PhD in neuroscience, and to continue in neuroscience afterwards?

Bruno Pichler: This was just something that happened: I really loved the first years of medical school, basic science, physics, biology, anatomy, physiology, and all that. But halfway through medical school when the clinical work started, I realized that it wasn’t for me. So I looked for new inspiration, and I stumbled upon the website of Arthur Konnerth’s lab. I called on a whim and half an hour later I was in his office, and he offered me a job. He said, “why don’t you take a year off from medical school and start working towards a PhD here?” So I worked full-time in the lab for a year, and then went back to finish medical school. But at that point I had no interest in a career in clinical medicine, I just wanted to complete the medical degree and then work in the lab and continue my PhD. – So, not much thought behind it, it’s just how things transpired.

(c) Bruno Pichler

PR: The medical studies did not really prepare you well for the more technical aspects of what you have been doing afterwards. How did you learn all of this, for example optics, or programming?

BP: Again, it was just something that happened: there were microscopes that needed troubleshooting, there were things I didn’t understand, and every time I didn’t understand something, I tried to find more information about it. Some of the information sticks with you – and that’s how you learn and how you get into optics and software.
For example, there was some custom-written image analysis software in the lab and I didn’t really understand what it did to the data, so I sat down with the guy who wrote it and asked about all the calculations it made and then I cautiously started to make some changes to it – and it just naturally emerged from there. I never consciously sat down and said, “oh, I want to learn programming!” I had a problem in front of me that I needed to solve, and so I solved it. And whatever I learned while solving it is now part of my knowledge.

“People who inspired me were often those in support roles, like the lab technicians who taught me how to pipette, or the engineers in the electronic and mechanical workshops.”

PR: So would you describe yourself as a problem-solver?

BP: I think that is probably an accurate description. My main driving force is that whenever I see a technical problem, I want to find an elegant solution for it. I can’t help it.

PR: Did you have a role model as a scientist, or somebody who inspired you to continue with neuroscience?

BP: I definitely have a few people who inspired me, but I wouldn’t say I had a ‘role model’. There’s obviously the intellectual giants like Richard Feynman or Horace Barlow. Then there are the scientists that I worked with, Arthur Konnerth, Tom Mrsic-Flogel, Troy Margrie, and of course all the colleagues in those labs. But, on a very practical level, people who inspired me were often those in support roles, like the lab technicians who taught me how to pipette, or the engineers in the electronic and mechanical workshops. For example, Werner Zeitz, the electronics guy in Arthur Konnerth’s lab, who is known for his famous Zeitz puller (https://www.zeitz-puller.com/). We were building a two-photon resonant scanner with Werner back in 2004/2005, and he built a 19” rack-mountable device box – no labels on it, just unlabeled pots and BNC connectors – which transformed the scanning data into a TV image and sent it to a frame grabber card. Nowadays, we do this in software but it was all done in hardware at the time. Same with the mechanical guy, Dietmar Beyer: He was such a skilled manual machinist, and he would just make whatever we needed without any CNC machining. Another guy that really inspired me back as a PhD student was Yury Kovalchuk. He was a senior postdoc at the time. He knew everything about two-photon microscopy, and he was building an AOD scanner back in 2004/2005. It was the way he understood these systems and explained everything to me whenever I had any questions – those kind of people inspired me.

PR: From your entire academic career, what was your most rewarding project, big or small?

BP: That’s so difficult to say, because everything is kind of rewarding. What I can certainly say is that I don’t believe in putting off reward for a long time, and the idea that ‘the more you suffer, the bigger the reward’. I like it when you have smaller rewards, but more frequently.

PR: I can definitely relate to this… but at least scientific publications usually do not come so frequently. If you have to choose, which scientific publication that you took part in would you like to highlight, and what was your contribution?

BP: There was a paper in 2012 by Kamilla Angelo in Troy Margrie’s lab (paper link). I came very late to the party, all the experiments had already been done, the first version of the manuscript had been completed, and Troy just asked me to read it and give some comments. I noticed something in the analysis where the manuscript didn’t actually show unambiguously one aspect of the claim in the paper. We tried to come up with some way to design new experiments to prove that unambiguously, but at some point it occurred to me that you could just do it with the existing data, just with a different type of analysis. And once the idea had come up of doing this pair-wise scrambling of all the data points and then calculating pair-wise differences, it was very quick and easy to write some code to analyze it. And it supported exactly what we thought it would support, but now unambiguously. That felt really rewarding, to be able to nail something that would have otherwise required more experiments with a bit of clever analysis, that was really cool.

PR: Sounds like it! Especially your PI was probably really happy about this, because it saved a lot of trouble.

BP: I guess so. The paper would have been highly publishable without my input, but it was just ever so slightly better with my input; and that’s good enough for me.

“I was always more of a Malcolm Young than an Angus Young.”

PR: Why did you not become a PI yourself?

Continue reading
Posted in Calcium Imaging, Data analysis, Imaging, Microscopy

Simultaneous calcium imaging and extracellular recording from the same neuron

Calcium imaging is a powerful method to record from many neurons simultaneously. But what do the recorded signals really mean?

This question can only be properly addressed by experiments which record both calcium signals and action potentials from the same neuron (ground truth recordings). These recordings are technically quite challenging. So we assembled several existing ground truth datasets, and in addition recorded ground truth datasets ourselves, totaling >200 neuronal recordings.

This blog post contains raw movies together with the recorded action potentials (black; turn on your speakers to hear the spikes!) and the recorded ΔF/F of the calcium recording (blue). These ground truth data are a very direct way for anybody working with calcium imaging to get an intuition about what is really going on. (Scroll down if you want to see recordings in zebrafish!)

Recording from a L2/3 neuron in visual cortex with GCaMP6f, tg(Emx1), from Huang et al., bioRxiv, 2019; a very beautiful recording. Replayed with 2x speed.

Recording from a L2/3 neuron in visual cortex with GCaMP6f, tg(Emx1), from Huang et al., bioRxiv, 2019. Stronger contamination from surrounding neuropil.

Recording from a L2/3 neuron in visual cortex with GCaMP6f, tg(Emx1), from Huang et al., bioRxiv, 2019. Note that single action potentials don’t seem to have any impact at all. – The negative transients in the calcium trace stem from center-surround neuropil decontamination (activity of the surround is subtracted).
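The center-surround neuropil decontamination mentioned above can be sketched as follows. This is a minimal illustration with a typical scaling factor of r = 0.7 and a percentile-based baseline; it is not the exact pipeline used for these recordings.

```python
import numpy as np

def dff_with_neuropil_subtraction(f_soma, f_surround, r=0.7):
    """Subtract the scaled surround (neuropil) signal from the somatic ROI
    signal, then compute dF/F relative to a percentile-based baseline."""
    f = f_soma - r * f_surround       # center-surround subtraction
    f0 = np.percentile(f, 20)         # crude baseline estimate (assumed > 0)
    return (f - f0) / f0
```

Events that occur only in the surround are subtracted away and can show up as negative transients in the corrected trace, exactly as in the recording above.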

Recording from a L2/3 neuron in visual cortex with GCaMP6s, tg(Emx1), from Huang et al., bioRxiv, 2019.

Recording from a L2/3 neuron in visual cortex with GCaMP6s, tg(Emx1), from Huang et al., bioRxiv, 2019.

Recording from a L2/3 neuron in visual cortex with GCaMP6f, virally induced, from Chen et al., Nature, 2013. From the left, you can see the shadow of the patch pipette used for recording of extracellular signals.

Something completely different: recording from a pyramidal neuron in CA3 with R-CaMP1.07, virally induced, recorded by Stefano Carta, from Rupprecht et al., bioRxiv, 2020. What appears as single events are actually bursts of 5-15 action potentials with inter-spike intervals of <6 ms.

A recording that I performed myself in adult zebrafish, in a subpart of the homolog of olfactory cortex (aDp) with GCaMP6f, tg(neuroD), in Rupprecht et al., bioRxiv, 2020. Around second 20, you can see that even a single action potential is visible in the calcium signal. However, this was not always the case in other neurons that I recorded from the same brain region.

Again a recording that I did in adult zebrafish, in the dorsal part of the dorsal telencephalon with GCaMP6f, tg(neuroD), in Rupprecht et al., bioRxiv, 2020.

What can you do if you want to detect single isolated action potentials with calcium imaging? GCaMP, due to its sigmoid non-linearity, is often a bad choice and will be strongly biased towards bursts. Synthetic indicators, however, are very linear in the low-calcium regime. – This is a recording that I did myself in adult zebrafish, in a subpart of the homolog of olfactory cortex (pDp) with the injected synthetic indicator OGB-1 in Rupprecht et al., bioRxiv, 2020. Although the temporal resolution of the calcium recording is rather low, the indicator clearly responds to single action potentials. As another asset, the indicator fills the entire cell body rather than only the cytoplasm in a ring-like shape, which makes neuropil contamination much less of an issue compared to GCaMPs.
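The effect of the sigmoid non-linearity can be illustrated with a simple Hill-type response curve; the parameters below are purely illustrative and not fitted to any real indicator.

```python
import numpy as np

def hill_response(ca, kd=1.0, n=2.5):
    """Sigmoid (Hill-type) indicator response as a function of calcium,
    normalized between 0 and 1."""
    return ca**n / (ca**n + kd**n)

# Illustrative calcium transients (in units of the dissociation constant Kd)
ca_single_spike = 0.2   # small transient of a single action potential
ca_burst = 1.0          # summed transient of a burst

# Ratio of burst signal to single-spike signal
gain_sigmoid = hill_response(ca_burst) / hill_response(ca_single_spike)
gain_linear = ca_burst / ca_single_spike   # a perfectly linear indicator
```

For these illustrative numbers, the sigmoid indicator reports the burst roughly 28 times more strongly than the single spike, while a linear indicator would report it only 5 times more strongly – which is why GCaMP recordings are biased towards bursts.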

Another recording that I performed in adult zebrafish, in a subpart of the homolog of olfactory cortex (pDp) with the injected synthetic indicator Cal-520 in Rupprecht et al., bioRxiv, 2020. This indicator is much more sensitive than OGB-1, but also diffuses less well after bolus injection. – These two minutes of recording only contain 4 spikes (this brain region really is into low firing rates in general), but you can clearly see all of them. If this were a GCaMP recording, you would probably see only a flat line throughout the entire recording.

For more information, including all 20 datasets with >200 neurons (rather than these excerpts from 11 neurons), check out the following resources:

Posted in Calcium Imaging, Data analysis, electrophysiology, machine learning, Microscopy, Neuronal activity, zebrafish

Discrepancies between calcium imaging and extracellular ephys recordings

To record the activity from a population of neurons, calcium imaging and extracellular recordings with small electrodes are the two most widely used methods that are still able to disentangle the contributions from single units. Here, I would like to briefly mention two papers that try to connect these two approaches by comparing them more or less directly.

  1. Wei et al., A comparison of neuronal population dynamics measured with calcium imaging and electrophysiology, bioRxiv, 2019
    [Update, 2020-09-15: The paper came out in PLoS Computational Biology just one day after this blog post!]
  2. Siegle, Ledochowitsch et al., Reconciling functional differences in populations of neurons recorded with two-photon imaging and electrophysiology, bioRxiv, 2020

Both papers compare calcium imaging datasets and extracellular ephys datasets, try to connect the results and point out the difficulties in reconciling the approaches.

Wei et al. use datasets recorded in mouse anterolateral motor cortex (ALM). They first focus on approaches to reconstruct spike rates from calcium imaging data (deconvolution) and find several limitations of this approach. On the other hand, they find that a forward model that transforms spiking activity into calcium fluorescence data can reconcile most of these differences.
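A minimal version of such a forward model is a convolution of the spike train with an indicator kernel – here a single-exponential decay. The actual model in Wei et al. is more elaborate (rise time, saturation, noise), so take this as a sketch of the principle only.

```python
import numpy as np

def forward_model(spikes, dt=0.01, tau_decay=1.0, amplitude=1.0):
    """Convolve a binned spike train with an exponential indicator kernel
    to obtain a synthetic calcium fluorescence trace.

    spikes: array of spike counts per time bin; dt and tau_decay in seconds.
    """
    t_kernel = np.arange(0, 5 * tau_decay, dt)
    kernel = amplitude * np.exp(-t_kernel / tau_decay)
    # Causal convolution, truncated to the length of the input trace
    return np.convolve(spikes, kernel)[:len(spikes)]
```

Even this toy version shows the key point: the fluorescence transient of a single spike decays over a timescale much longer than the spike itself, so temporally close events blur together in the imaging data.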

The authors also provide a user-friendly website which can be used to explore the transformations between ephys and imaging data (also including datasets with simultaneous ephys-imaging): http://im-phys.org. (Understanding the figures of the paper is however quite useful before exploring the website.)

From Wei et al., bioRxiv (2019), used under CC BY 4.0 license (excerpt from Fig. 7).

A large part of the paper focuses on high-level analyses (principal component analysis, decoding of behavior and decision). I would take it as an educational cautionary tale which highlights wrong conclusions that could be drawn from standard analyses. For example, the slow and variable decay times of calcium imaging data can lead to a dispersion of peak activity that was absent in the ephys data. This dispersion can smear out the clearly timed activation of a neuronal population into something more similar to a sequence (see Figure 7, of which I have pasted an excerpt above).

Siegle, Ledochowitsch et al. from the Allen Institute, rather than investigating the effects on higher-level population analyses, focus their attention on the effects seen in single neurons. When comparing a calcium imaging and an ephys dataset recorded in the same brain region (visual cortex V1) in mice that do the same standardized tasks, what differences can be seen in the firing properties of single neurons?

Due to the high standardization requirements at the Allen Institute, their datasets are probably uniquely qualified to be the basis for such a comparison. Interestingly, they find a couple of clear differences. For example, extracellular ephys data suggest typical firing rates of around 3 Hz (see Figure 2A), which is almost an order of magnitude higher than what has been recorded and estimated from calcium imaging data (see also Figure 7 in our preprint, which estimates spike rates of the same dataset).

The authors go to great lengths to use forward transformations (similar to Wei et al.) in order to reconcile differences seen for various response metrics (responsiveness, tuning preference, selectivity). However, their conclusion seemed to me quite a bit less optimistic than that of the Wei et al. paper. The authors go into more detail when discussing the potential reasons for the discrepancies, and focus on the recording methods themselves rather than on methods to transform between them. Their analysis of inter-spike-interval (ISI) violations in ephys recordings (which indicate that spikes from other neurons contaminate the recording of the neuron of interest) was, in my opinion, particularly interesting and convincing. I also really recommend the last paragraph of their discussion to anybody, from which I only want to cite their note of caution about extracellular ephys recordings:
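The logic of the ISI-violation analysis can be sketched as follows: a well-isolated single unit should show essentially no inter-spike intervals shorter than the refractory period, so the fraction of such intervals is a simple proxy for contamination. The refractory threshold below is a typical value, not necessarily the one used by Siegle, Ledochowitsch et al.

```python
import numpy as np

def isi_violation_fraction(spike_times, refractory=0.0015):
    """Fraction of inter-spike intervals shorter than the refractory
    period (in seconds) -- a simple proxy for contamination of a unit
    by spikes from other neurons."""
    isis = np.diff(np.sort(spike_times))
    return np.mean(isis < refractory)
```

A perfectly regular spike train yields a violation fraction of zero; mixing in spikes from a second neuron immediately produces sub-refractory intervals.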

From this study, we have learned that extracellular electrophysiology overestimates the fraction of neurons that elevate their activity in response to visual stimuli, in a manner that is consistent with the effects of selection bias and contamination. – Siegle, Ledochowitsch et al.

One of the reasons why I am writing about these two studies is that I have been working at the interface of calcium imaging and ephys myself, addressing the question, How much information about spike rates can we get out of calcium imaging data? Wei et al. and Siegle, Ledochowitsch et al. take a slightly broader perspective. And, in some way, they show how hard it is to reconcile two (methodological) perspectives on the same phenomenon. (I noticed this in my PhD lab as well, when it came to reconciling results from EM reconstructions of neuronal anatomy and calcium measurements of the same neurons.) Since almost any method in systems neuroscience is technically challenging, we often have in a single lab only a single perspective of a phenomenon, and I think it’s important to always be aware that the conclusions drawn from this perspective might be strongly biased.

In general, calcium imaging and extracellular ephys are extremely valuable tools to observe the living brain, and we had better do everything we can to understand the properties and limitations of these tools. Such studies might sometimes feel a bit like negative results and therefore not very attractive to those who want to advance neuroscience, and I therefore understand why not many are willing to undertake these projects. So I am glad to be able to highlight these two publications here.

Posted in Calcium Imaging, Data analysis, electrophysiology, Imaging, Network analysis, Neuronal activity, Reviews

Alignment tools

This blog post covers some tools and techniques that I typically use to align two-photon microscopes. If you’re an expert, you will probably find nothing new, but if you haven’t been doing this for years, it might offer you some pieces of inspiration.

Aligning a microscope is the process of optimizing the parts to produce better images than before. A useful overview of basic alignment procedures for two-photon microscopes has been put together in this #LabHacks blog post by Scientifica. It also includes safety advice (which I will not repeat here; keep in mind that lasers, especially pulsed IR lasers, are really dangerous, and all safety instructions of your institute and lab must always be obeyed!)

Microscope alignment still seems to be a secret art that most microscope users are afraid of. It is not difficult to learn in principle, but it is a practical skill and requires both patience and a mentor who shows you where to buy stuff and how to handle the mechanical and optical elements. Many people working with microscopes never learn it because they are afraid to touch anything. This blog post is intended to help lower the fear of optical alignment – by showing that the tools used for alignment can be very simple.

1. Adjust the beam to a given height above the optical breadboard

After the beam comes out of the laser, you often want to keep the beam in one single plane parallel to the table, in order to keep things simple. In other words, the distance of the beam from the optical table should be the same everywhere. One way to achieve this is to use mounted pinholes (e.g., these ones from Thorlabs). However, it is sometimes difficult to properly see where the beam hits the pinhole, which results in imprecise alignment and unnecessary uncertainty. When I worked with Robert Prevedel in 2013/2014, he showed me a simple trick which makes it very easy to adjust all beam positions to the same height. He clamped a small hex key horizontally onto an inverted post, using a screw and two washers, as shown below. The surfaces of the hex key lead to a very nice horizontal alignment, and the precise height indicated by the hex key can be used (1) to adjust the beam itself or (2) to consistently adjust the height of a set of pinholes. It is useful to have and very simple to make.

2. Printed-out resolution targets

Is the beam centered in a given aperture, for example the cage system of the microscope’s tube lens? Normally, I would use a threaded or cage-mounted iris (e.g., this one), but sometimes spatial constraints do not allow this, or the beam can only be viewed from an angle, and it can be difficult to judge whether the beam scattering from the iris is centered or not.

However, if the laser power is low enough not to burn paper, I simply draw resolution targets in PowerPoint or Inkscape and print them out on a sheet of paper. When I’m in a hurry, I simply draw one with a pen.

For example, if I want to check whether a beam is collimated (that is, the beam does not change its diameter much as it propagates), I use these alignment targets as a reference and as a guide for the eye.

Or I use some Scotch tape to fix one into a given aperture, allowing me to check whether the beam is centered or not. Here, this is illustrated for the aperture which is approximately located at the back focal plane of the objective. It does not require much work but is quite helpful.


3. Using mirrors to look around corners

One more practical problem in the above case is the viewing angle. Ideally, I would like to look at the alignment target from the top, but this would at the same time block the beam. To solve this (and many similar problems), you can simply use a mirror shard. The photo below (left) shows a hand-held piece of mirror which allows one to look at the alignment target in a relaxed way. It is difficult to see this from a single picture, but mirrors like this one (either hand-held or fixed in the setup) often make life much easier.

For more convenience, dental mirrors (photo below, right) are designed for looking around corners and are of great use for viewing pinholes from angles that are difficult to reach without mirrors.

However, be very careful with hand-held and any other moving mirrors! By chance, you might reflect the laser beam into your own eyes! Always be careful and think three times whether there is any chance that any reflection might hit your eyes.

4. Retro-reflecting mirrors and lenses

Usually, the laser beam should hit a lens at its center and at a 90° angle. To ensure that the beam is centered, one can use a) a cage system with a pinhole, or b) a threaded pinhole that can be screwed onto the lens directly, or c) a printed-out removable resolution target (see above). To make sure that, in addition, the laser beam hits the lens at a 90° angle, it is helpful to use back-reflections of the beam. Since a small fraction of the beam is back-reflected by the lens surface, this reflection should ideally coincide with the incoming beam. This can be checked with an IR detection card or, for a visible beam, a piece of paper held close to (but not blocking) the incoming beam path.

Sometimes it is necessary to align a caged element or something similar without lenses that provide back-reflecting surfaces. In this case, you can simply use a mirror that reflects the entire beam back. This mirror can be screwed into a thread of a cage system. More often, it suffices to put a small, flat mirror shard at the flat back of the cage system. This does not provide the highest-precision alignment, but is usually good enough for most purposes.

Be aware that back-reflections that go directly back into the laser can make the laser unstable. If a pulsed laser stops pulsing, first check whether any back-reflections could be the reason for it.

5. Retro-reflecting gratings

The main problem with back-reflections from lenses and mirrors is that the reflected beam is often small and coincides with the incoming beam, making it difficult to identify properly.

In my PhD lab, I inherited a really cool tool that was used for alignment of a Sutter MOM scope and which I was, unfortunately, unable to find afterwards on the internet. It is basically a mirror, but with a sort of grating scratched into the mirror surface. Due to interference, the back-reflections were not simply a single beam, but a symmetric diffraction pattern that extended over several centimeters and could be conveniently used for alignment – much more useful and easier to use than the back-reflections of a lens or mirror. I guess that any flat reflective grating (maybe even compact discs? I haven’t tried that) could be used for the same purpose.

6. Wedge plates

At several points of the beam path (before entering the back of the objective; after exiting a beam expander) the laser beam should ideally be collimated. The standard method that I used for years was to print out a resolution target made of paper (described above). I used it to check whether the beam changes its diameter when propagating freely over several meters. To this end, it is often necessary to deflect the beam with a mirror that must be temporarily inserted into the beam path.

Fabian Voigt from my current lab in Zürich also showed me the more professional way to check for collimation, using a wedge plate. Wedge plates can be used as shearing interferometers (Thorlabs product, EO product). They generate an interference pattern which can be used to check the collimation of the beam very precisely.

Fabian also kindly pointed me to a paper (Tsai et al., 2015) which mentions that the pattern seen from a shearing interferometer can also be used to analyze less obvious properties of the beam, like coma and astigmatism aberrations (Okuda et al., 2000). It would be cool to have a look-up table of typical interferograms and the corresponding wavefront shapes and aberrations!

7. Alignment lasers

Another tool that Fabian showed to me was an alignment laser.

This alignment laser is basically a continuous-wave visible-light laser that goes through an optical fiber and afterwards enters a beam coupler. I used a shearing interferometer (described above) to make sure that the outgoing beam was collimated, and then used the collimated beam for backward alignment.

Backward alignment means inserting the collimated beam of the alignment laser at the location of the microscope’s back focal plane and then aligning the microscope’s components in a backward manner (tube lens -> scan lens -> scanners -> etc.; instead of forward alignment, which starts at the pulsed laser and works forward until it ends up at the objective). This is helpful, for example, when two or more separate incoming beams are combined. A second advantage of the alignment laser is that its beam is, unlike the near-IR pulsed laser, visible to the human eye.

8. Continuous wave (cw) mode for a pulsed Ti:Sa laser

Standard two-photon microscopes are based on pulsed lasers that have a center wavelength adjusted between 800 and 1000 nm. The light is therefore invisible to the human eye (except for some faint spectral components when the center wavelength is adjusted to 800 nm) and comes with high average power (>1 W, or often much more) and in ultra-short high-energy pulses, making it a very dangerous thing to deal with. IR viewer cards and IR viewers, or less expensive solutions based on simple cameras/webcams, can make the beam visible to our human eye, but I always found this very exhausting to work with. And being exhausted is not a good pre-condition for optical alignment, which requires the ability to stay focused and a sharp mind. Therefore, it would be great if it was possible to make the laser less dangerous: by preventing it from pulsing, by lowering the overall power, and by making it visible…

Fortunately, all of this is possible for the tunable-wavelength Ti:Sa lasers (although not for the fixed 1040 nm lasers). For the Spectra Physics MaiTai lasers, the slightly old-fashioned user interface (screenshot below) allows one to manually lower the pump power (also called the “green power”). Change the wavelength to something visible (between 700 and 750 nm), lower the green power until the pulsing (indicated by the green light in the main control window) stops, and continue lowering the green power until the output IR power reaches something like 30-40 mW. This is still much more than the average laser pointer and still dangerous, but less so than a pulsed invisible beam.

It is however important to keep in mind that the control panel is not always 100% reliable. It switches between pulsing and non-pulsing as if these were binary states. But the transition between the two is sometimes continuous, and sometimes the laser also becomes a bit unstable and oscillates between states – so better watch the laser for a while before you start working with the beam.

I once talked to a laser engineer from Spectra Physics who told me that it is not ideal to leave the laser in this non-pulsing state for too long (>>1 hour). Probably this has something to do with inefficient energy conversion and the heat generated in this process. But I have to admit that I follow this recommendation simply because I don’t know any better. Lasers are complex things, and I mostly treat them in a pragmatic way, like a black box.

9. Improvised test samples

One of the most common mistakes that I keep seeing is to set up a microscope and then try to test it with a real sample, like the living brain of a transgenic mouse. That’s really not the best way to proceed. Instead, always test your microscope with a simple, bright and dead sample. (And, ideally, in addition also with sub-diffraction beads.)

For example, you can use colored plastic items (for example plastic slides), perfect for testing how homogeneous the FOV is. Or pollen grains from flowers outdoors during the pollen season. Or dead, small insects that you caught on your desk. Their surface will often be super bright under a two-photon microscope, especially if you remove the emission filters. Or simply a single strand of your hair (especially if it still contains some pigment).

In general, most solid things are somehow autofluorescent, and it can be really fun to look at random things under the two-photon microscope. Take care not to kill the PMT detector (start with a low PMT gain), since the autofluorescence of natural things can be very bright.

10. More resources on (two-photon or other) optics alignment

Posted in Imaging, Microscopy | 4 Comments

Matlab magic spells

Most neuroscientists who analyze their data themselves use either Matlab or Python or both – the use of R is much less common than in other fields of biology. I’ve been working with Matlab on a daily basis for more than 10 years, and I started using Python regularly, although less frequently, about 5 years ago. I’d like to encourage everybody to learn Python, because the language is much more flexible, more beautiful and more powerful. But I still use Matlab often to get a first impression of data, because it is simply perfect for browsing intermediately sized (MB to GB) datasets and for doing simple things in general, also because I find its plotting interfaces easier to use for data exploration than matplotlib/seaborn in Python.

Here, I would like to share a collection of useful Matlab commands which are not complicated or fancy but still nice to know. Don’t expect too much, and don’t read this if you have spent a couple of years with Matlab already. To be entirely honest, I’m writing this also for my own reference, because I keep forgetting the exact wording of some of the commands, and it’s nice to have everything in a single place. Without further ado, here is my list of favorite Matlab spells:

1. Leading zeros with sprintf()
2. Automatization with dir()
3. Exclude file patterns based on regular expressions
4. Callback functions to speed up image inspection
5. Regionprops for image segmentation
6. The curve fitting toolbox
7. Manually modifying colormap
8. Turning the background of a figure white
9. Make ticks point outwards instead of inwards
10. Rotate x-tick labels
11. Reverse y-axis direction
12. Save large figures to vectorized format (eps)
13. Get raw data from figures
14. Position the figure window at a given screen location
15. Profiling code with tic/toc

Continue reading

Posted in Data analysis | Leave a comment

Annual report of my intuition about the brain (2019)

How does the brain work, and how can we understand it? I want to make it a habit to report, at the end of each year, some of the thoughts about the brain that left the deepest mark on me during the past twelve months – with the hope of advancing and structuring the part of my understanding of the brain that is not immediately reflected in journal publications. Enjoy the read! And check out the other year-end write-ups: 2018, 2020, 2021.

The purpose of neuronal spikes is not to represent …

It is a common goal of many neuroscientists to “decode” the activity patterns of the brain. But can the brain simply be understood by finding observables like visual objects or hand movements that are correlated with these activity patterns? Nobody would deny that neuronal activity is correlated with sensory inputs, motor outputs or internal states. But what does looking at these correlations contribute? In other words, are these representations of the sensory, motor or inner world the key to understanding the brain? In the following, I will argue against this “representational” view of the brain.

First, what do I mean by “representational”? In the following, a neuron represents, e.g., an external stimulus if its activity pattern is correlated with the occurrence of this external stimulus (for a stricter definition, see below). For example, a neuron in visual cortex fires when a drifting grating with a 55° orientation is presented to the animal. One could say that “the neuron codes for 55° drifting gratings” or “the activity of the neuron represents 55° drifting gratings”.
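To make this definition concrete, here is a toy sketch (all numbers invented) of a “representation” in the sense used above: a simulated neuron with a Gaussian tuning curve whose Poisson spiking is correlated with the occurrence of 55° gratings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the 'representational' definition above (all numbers
# invented): a simulated V1-like neuron with a Gaussian tuning curve. Its
# firing is correlated with the occurrence of 55-degree gratings, so in the
# representational view it 'codes for' 55-degree drifting gratings.
orientations = rng.choice([0, 55, 90, 135], size=2000)  # presented stimuli

def tuning(theta, preferred=55.0, width=20.0, peak=20.0):
    """Mean firing rate (Hz) as a Gaussian function of grating orientation."""
    return peak * np.exp(-((theta - preferred) ** 2) / (2 * width ** 2))

spikes = rng.poisson(tuning(orientations) + 1.0)  # Poisson counts + baseline

# Correlation between 'the stimulus occurred' and the neuron's activity
is_55 = (orientations == 55).astype(float)
corr = np.corrcoef(is_55, spikes)[0, 1]
print(corr)  # strongly positive
```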

To give another example, a neuron in hippocampal CA1 fires reliably when the animal is at a certain location (“the neuron codes for this and that place” or “the neuron represents this and that place”). This representational view is probably the most intuitive approach when it comes to characterizing the activity of the brain. However, simply representing the internal or external world is not enough. What else does a brain do?

Before trying to answer this question, let’s speculate why the “representational” view is so deeply engrained in systems neuroscience. First, many thinkers, including neuroscientists, have been biased towards a view of the brain as a passive receiving device, perhaps because they shaped their thoughts while sitting passively at a desk in front of an empty sheet of paper. Second, the representational view is very intuitive: finding something observable, like a visual stimulus, reflected directly in neuronal activity seems to explain why this neuronal activity happened. Third, animals or humans subject to brain experiments are often made to hold still while waiting for a stimulus or cue; such an experimental design biases the experience of the subject towards passive receiving, and therefore makes it more likely to find “receptive fields”, the classic form of representations in the brain. Torsten Wiesel and David Hubel, who pursued this line of thinking in the most compelling way in the early era of systems neuroscience, are probably the best examples of this approach.

Apart from that, artificial neuronal networks, like the attractor networks put forward by John Hopfield and, more recently, deep convolutional networks, are typically trained on classification tasks, e.g. identifying handwritten numbers on an envelope, or correctly categorizing the object seen in an image.

For a device that is supposed to perform such a task, it is clearly sufficient to represent the inputs and the possible outputs. But this is not the main task of a brain.

… but to act upon something

The brain, unlike a pattern classifier, is in a closed loop with its environment, being moved by it and acting upon it at the same time. It is not a passive observer, but an involved device, an organism. The brain is not designed to represent its environment. Instead, it has evolved to act upon its environment.

How could a model of the brain account for this fact? One approach to circumvent the limitation of the “representational” view of the brain is to focus on its generative abilities. The brain is clearly capable of forming mental models of the external world, of other minds, of physical laws, and also of abstract concepts. “Predictive processing” is a term covering a variety of such models, some of them applied down to the level of single neurons. In these models, predictions and prediction errors are not purely representational; instead, information is processed as a bi-directional interaction between sensory (bottom-up) and generative (top-down) content. However, these models consist of hierarchical networks that, again, perform classification tasks or other well-known toy problems from the machine learning community. Therefore, these models often appear to be representational models in disguise, rather than something completely different.

A researcher who inspired me to think in a slightly different direction is Romain Brette, who published an opinion article in 2018 called “Is coding a relevant metaphor for the brain?” (In this article, Brette also defines more precisely the meaning of “representations” and its relationship to “coding” and “causality”; but since I do not think there is semantic agreement among neuroscientists, I decided not to use this more precise terminology here.) Brette’s article is first and foremost an argument against the idea of “coding” and “representations” as useful concepts for understanding the brain. Naturally, it is a bit less clear what alternative he is actually suggesting and how this alternative could be implemented or understood. (In a comment on Brette’s opinion piece, Baltieri and Buckley convincingly argue that a set of predictive processing models that use active inference is an interesting alternative to the representational view and is close to the “subjective physics” suggested by Brette.) Although some concepts in the opinion article remain a bit vague, Brette also touches upon the idea of understanding neurons not by analyzing their representational properties, but by analyzing their actions, i.e., their spikes:

“[C]oding variables are observables tied to the temporality of experiments, whereas spikes are timed actions that mediate coupling in a distributed dynamical system.” (Romain Brette)

From this paper and from posts on his blog, it seems to me that he is trying to circumvent the natural and historical representational bias of systems neuroscience by investigating systems that are simple and peripheral enough to be amenable to descriptions based not on representations, but on actions and reactions. One area of his research, which I find particularly interesting, is the study of single-celled organisms.

The basic unit of life is the single cell

During my PhD, I gradually went from large-scale calcium imaging to single-cell whole-cell recordings. I was fascinated by the complexity of active conductances and the variability between neurons, and as I mentioned in last year’s write-up, I also got interested in complex signal processing in a single neuron.

Therefore, investigating unicellular organisms in order to understand neurons and ultimately the brain was not so far from my intuition anyway. And although neurons do not move their processes a lot and seem rather immobile and static at first glance, there are many examples of single-cell organisms that exhibit quite intriguing behaviors and remind us of what single cells are capable of. Just watch this movie of a teardrop-shaped ciliate that uses a single long process to search for food:

And it’s just a single cell! Interestingly, these ciliates not only have an elaborate behavioral repertoire, but also use spikes to trigger these behaviors. (Check out this short review on “Cell Learning” to get an idea about what single-cell organisms are able to learn.)

At least in the beginning of life, the single cell was the basic unit of life, receiving information about the external world, using its own memory and acting upon the world. Of course, many cell types in humans are far more specialized and cannot really be considered a basic unit of life, since they serve a very specific task like forming a part of a muscle or sitting in the germline. However, information-processing cells like cells in the immune system or neurons in the brain, which live at the interface between inputs and outputs of an organism, are much more likely to be similar to these unicellular forms of life. They make most of their decisions on their own, based on local signals, without asking a higher entity what to do.

If we look at life with human eyes, our mind and body tell us to search for an understanding of neurons from the perspective of the whole brain (or the whole body). However, if we look at life from a greater distance, life shows its greatest richness at the level of single cells, and it would therefore make sense to seek an understanding of neurons from the perspective of a single neuron.

The objective function of a single cell

What does it mean to understand a single cell? A typical biologist’s approach would be to compile a list, as complete as possible, of all its behaviors and states when exposed to certain conditions. For example, a hypothetical cell senses a higher concentration of nutrients at one of its distal processes and therefore moves in this direction. Conversely, if it senses a lower concentration in this direction, it moves in the opposite direction. If the nutrient level is overall high, it does not move at all. If it is overall low, it tries to move away in a random direction. This is a biologist’s list of behaviors.

A more compressed, and therefore in my opinion better, understanding would be to write down the “goals” of this single cell. Or, to borrow an expression from mathematicians and machine learning people, to write down the objective function (or loss function) of the single cell. For the above-mentioned hypothetical cell, this would simply be: move to places with high nutrient concentrations.
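For illustration, this objective function can be written down in a few lines of Python – a minimal sketch with entirely invented names and parameter values, in which the whole behavior list above collapses into “climb the nutrient gradient, wander otherwise”:

```python
import numpy as np

rng = np.random.default_rng(0)

def nutrient(pos):
    # Hypothetical nutrient landscape: a single Gaussian food patch at (5, 5)
    return np.exp(-np.sum((pos - np.array([5.0, 5.0])) ** 2) / 10.0)

def step(pos, sensor_distance=0.5, step_size=0.4):
    """One behavioral step: sense along a random direction, move up-gradient.

    This implements the 'objective function' view: the entire behavior list
    (move up-gradient, random walk when no gradient is detected) follows
    from maximizing the local nutrient concentration.
    """
    direction = rng.standard_normal(2)
    direction /= np.linalg.norm(direction)
    # Compare nutrient concentration at two opposite 'distal processes'
    c_ahead = nutrient(pos + sensor_distance * direction)
    c_behind = nutrient(pos - sensor_distance * direction)
    if abs(c_ahead - c_behind) < 1e-6:   # no gradient detected: random walk
        return pos + step_size * direction
    sign = 1.0 if c_ahead > c_behind else -1.0
    return pos + sign * step_size * direction

pos = np.zeros(2)
for _ in range(200):
    pos = step(pos)
print(nutrient(pos))  # much higher than at the starting position
```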

If we shift back from ciliates in a pond to neurons in the brain, this leads to a very obvious, but also surprisingly difficult-to-answer question: what is the objective function of a single neuron?

Of course it is not clear whether it is reasonable to assume that the objective function or optimization goal is set at the single-cell level. But given that cognition is most likely a self-organized mess that is only coarsely orchestrated by reward and other feedback mechanisms, I would tend to say: no, it’s not unreasonable. (But see below for more detailed criticism.) Therefore, the question is: what are the goals of a single neuron? What does it want?

Let’s assume that this question is a good one. How can it be answered? I see three possible approaches:

  1. To investigate the evolution of self-sustaining cells into neurons of the central nervous system. Since the objective functions of unicellular organisms seem to be rather accessible, while the objective functions of neurons seem difficult to guess, one could investigate intermediate stages and try to understand the evolutionary development.
  2. To understand the development of the brain and the rules that govern the migration and initial wiring of a single neuron. This would possibly allow one to compile a list of developmental behaviors for a certain neuronal type and extract an objective function from it. This would be an objective function not of regular operation, but of development – which is exactly what developmental neurobiology is actually doing.
  3. To observe the full local environment of a neuron (for example, its electrical inputs) and to try to extract the rules that govern changes in its behavior (for example, its synaptic strengths or output spikes).

The third approach is probably the most difficult one, because it is currently technically impossible to observe a significant fraction of a neuron’s inputs together with its internal dynamics. There have been many studies on the plasticity of very few synapses under artificial conditions in slices using paired recordings, but it is currently impossible to perform these paired recordings in a behaving animal. Even in slices, these experiments are daunting, given the myriad of different neuronal cell types and the expected variability of the results. Long-term potentiation, long-term depression, short-term facilitation, short-term depression, spike-timing-dependent plasticity, immediate early genes, receptor trafficking and neuromodulatory effects are just some of the many processes that are essential to understand what is happening to a single cell.

A different approach that still resonates with the idea of single cells as actors has been put forward by Lansdell and Kording in 2018. Their idea, as I understand it, is that a neuron could try to estimate its own causal effect, that is, the effect of its actions. This is possible because its actions are a discontinuous function of its state, the membrane potential. If external conditions are almost identical in two situations but the neuron fires a spike in only one of them, the neuron could extract from the subsequently received input the effect of this single spike. The neuron would thereby measure the causal impact of its actions on itself. This idea is very close to that of a unicellular organism that immediately feels the effects of its own actions.
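To make this concrete, here is a toy sketch of the idea (my own construction, not the authors’ code, with all numbers invented) as a regression-discontinuity estimate: trials where the drive is just above versus just below the spike threshold are otherwise comparable, so the difference in received feedback isolates the causal effect of a spike, which a naive correlation gets badly wrong:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: a spike is a discontinuous function of the summed drive, so
# trials with drive just above vs. just below threshold are otherwise
# comparable, and their outcome difference estimates the spike's causal
# effect (a regression-discontinuity design).
n_trials = 100_000
threshold = 1.0
true_effect = 0.5          # causal contribution of one spike to the feedback

drive = rng.normal(1.0, 0.5, n_trials)        # confounded input drive
spiked = drive > threshold
# Feedback depends on the drive (a confounder) AND on whether it spiked
feedback = 2.0 * drive + true_effect * spiked + rng.normal(0.0, 0.1, n_trials)

# Naive correlation-based estimate is badly confounded by the drive
naive = feedback[spiked].mean() - feedback[~spiked].mean()

# Discontinuity estimate: restrict to trials with drive near threshold
window = 0.02
near = np.abs(drive - threshold) < window
rd = feedback[near & spiked].mean() - feedback[near & ~spiked].mean()

print(naive)  # far from the true effect of 0.5
print(rd)     # close to 0.5, up to a small residual confounder bias
```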

But what could be the objective function of such a neuron? For example, the goal of a neuron that receives recurrent feedback could be to find a regime with a certain level of recurrent excitation. Recurrence could be measured by estimating the effect of the neuron’s spikes on the received feedback, possibly also in a dendrite-specific manner. I could imagine that objective functions almost as simple as that could make a neuronal network work, once it is embedded in a closed-loop environment with sensory input and motor output. I also think that this way of thinking could be close to ideas put forward by e.g. Sophie Denève about predictive processing in spike-based balanced recurrent networks (for example, check out this proposal), which rely heavily on the self-organizing properties of adaptive recurrent circuits with a couple of plasticity rules.

An additional strength of the idea of using spikes to measure a neuron’s causal impact is that it could explain why neurons fire stochastically: this way, they acquire information about their causal impact that would remain hidden in a regime of deterministic firing.

Conclusion

Clearly, these ideas need more refinement and restructuring, and it is obvious that some of the feasible experiments (spike-timing-dependent plasticity, behavioral time scale plasticity) have already been done. But I still like the idea of reframing these experiments by analyzing a potential objective function of a single neuron.

Beyond that, it would be interesting to try to understand self-organized systems of artificial neurons that are governed not by global loss functions, but by neuron-intrinsic and possibly evolving objective functions. I’m really looking forward to seeing theoretical work and simulations that focus not on the behavior and objectives of the whole network, but on the behavior and objectives of single units, of single cells.


Unicellular organisms and neurons are both single cells. Left: the unicellular ciliate Lacrymaria olor, adapted from Wikicommons (CC 2.0). Right: two-photon image of hippocampal pyramidal neurons in mice.

Appendix: the devil’s advocate

Criticism 1: Representations and coding seem to be still the best way to think about neuronal activity

Although I find the ideas above conceptually appealing, it is clear from past research that most of our understanding of the brain comes from correlating observed brain activity with sensory or motor information, because this allows us to conclude when certain brain areas or neuronal types are active and what happens if they fail. This functional anatomy of the brain might not reveal all of the brain’s design principles, but it could be necessary for making an educated guess about those principles.

In addition, our human languages are based on concepts (“you”, “house”, “hope”, “cold”) rather than on dynamical processes. For example, linguistics, the science of language, has coined the term ‘signified’, which describes the content of such a concept as a central element of language. From this converging view from both naive usage and the scientific study of language, it seems likely to me that high- and low-level representations indeed exist and are accessible in the human mind – probably as an evolutionary byproduct of mental processes that were more directly connected to actions or sensations. It is still interesting that some sensory modalities are more vaguely represented in thought and language (olfaction) than others (vision).

Criticism 2: It is so far technically impossible to record the perspective of a single neuron

It seems to be relatively easy, although not trivial, to monitor a unicellular ciliate and its environment. However, the environment of a neuron consists of all of its synaptic partners, which can number in the thousands, plus any other source of electrical or chemical neuromodulation. A single neuron itself is spread out over such a large volume that it is very far from being amenable to any electrophysiological or imaging technique that provides sufficient resolution in time and space.

The best candidate technique, voltage imaging of both soma and dendrites, cannot be used over periods longer than minutes due to bleaching, suffers from low signal-to-noise ratios and difficult calibration, and requires imaging speeds and photon yields that are impossible to achieve with any imaging technique and fluorescence sensor that I know of.

But researchers have shown before that imperfect methods can be used successfully. I’m curious about upcoming technical developments and what they will bring.

Criticism 3: Regarding neurons as actors is a case of anthropomorphism and ignores the interactions in complex systems that can lead to emergence

If I scrutinize my own ideas, I realize that one reason why I like the idea of neurons as actors is that it is simple and connects to the human desire for an intuitive, empathic understanding of things. For example, we tend to personify even the smallest animals like ants, bees or ciliates in movies, children’s books or youtube videos. We tend to personify items of our households like a cup, a table or an old house; or our plants in the garden; or the ocean, the wind, a dead tree. On the other hand, I feel that we are largely unable to feel the same empathy with distributed systems like the mycelium of a fungus that spreads over kilometers in a forest, or the distributed power grid, or a large fish swarm, or a deep artificial neuronal network. We can feel amazement, but no empathy. I can imagine being a single water molecule that vibrates, rotates and is tossed around by the Brownian motion of its environment and by the convection of the ocean, but I cannot imagine being the emergent phenomenon of water waves breaking at the shoreline, which is the complex environment of this single water particle.

I think that there is a tendency in human thinking to avoid complexity and to impose a simple, human-like personality onto all actors. Therefore, we should observe ourselves and try to resist the tendency to personify parts of complex systems when they actually cannot be disentangled from their environment. It is easy to tell a story about specific neurons as important actors, about what they want to do and how they feel the immediate feedback of their actions, as we do when seizing an apple; but it is difficult to tell a story about complexity, which goes over our heads. Maybe, however, the truth lies in exactly that which we can hardly understand, and this is good to keep in mind.

Posted in Calcium Imaging, Data analysis, electrophysiology, machine learning, Network analysis, Neuronal activity, Review | 8 Comments

A practical guide for adaptive optics

There is no standard curriculum to learn practical procedures about microscopy: how to align a setup, how to identify misalignments, how to identify broken parts, where to buy components, how to check their performance, and much more. How to learn all of this?

Often, research papers contain close to zero information about these topics, and even when they do, the information is scattered across the methods sections or supplementary information. This is not a surprise, since papers are mostly expected to be “research” and not engineering manuals or step-by-step protocols. This makes some papers quite impressive, but close to useless from a practical perspective.

Therefore, the most useful practical information about microscope construction and maintenance can be found in sources other than papers. Some examples:

  • On labrigger and other blogs (including this one), useful but sometimes somewhat anecdotal information is distributed over many pages, where it can at least be found via search engines.
  • A nice tutorial on alignment by Rainer Heintzmann can be found on the web – it’s an appendix to a book, but clearly more widely shared than the book itself.
  • Some projects, like the mesoSPIM project (by my labmate Fabian Voigt), provide a lot of useful practical information via dedicated websites or Github repositories.
  • Andrew York took advantage of his independence to publish less formal work-in-progress projects via Github, which allowed him to focus on practical and useful information instead of trying to prove novelty and impact with beautiful figures.

These webpages are more flexible than PDF papers: they can easily be updated and enhanced, and they allow for the simple integration of animations or interactive elements. Since a webpage does not need to convince editors and reviewers with polished writing and impressive claims, more practical information can slip in.

Recently, I discovered another project going in a similar direction, initiated by the lab of Martin Booth and devoted to adaptive optics. As the authors describe it themselves:

“The documents posted here will include tutorials, experimental protocols, and software. This will range from simple hints and tips through to extensive documentation of procedures. We intend to post material whenever it is ready, and documents will be updated with newer versions when we have them. We chose to set up the site in this way, to be free from the constraints of the traditional publishing process, which is ill-suited to the dissemination of this type of material, particularly when content could be frequently updated as our own approaches develop.” – aomicroscopy.org/about

I really enjoyed the protocols uploaded so far. For example, despite quite some experience with microscopes, I had never aligned a confocal pinhole, and I found it interesting to read the instructions on how to do this, even though it’s a very simple procedure. I’m really looking forward to seeing this form of resource become more widely used and also appreciated, not only by other researchers but also by funding agencies.

Of course, all of this does not replace the best source for practical training in microscopy: doing it yourself, spending a lot of time solving problems – and having somebody in the lab who knows everything.

Posted in Imaging, Microscopy | 1 Comment

Review: An artificial ground truth for calcium imaging

Selected paper: Charles, Song, Tank et al., Neural Anatomy and Optical Microscopy Simulation (NAOMi) for evaluating calcium imaging methods, bioRxiv (2019).

What is the paper about? Calcium imaging is a central method to observe neuronal activity in the brain of animal models. Many labs use rather complicated algorithms to extract meaningful information from these imaging data, but the results of those algorithms are often hard to judge: is this an artifact of the algorithm or the imaging method, or is it a biologically meaningful signal of a single neuron? The paper by Charles et al. addresses this question by simulating both neuronal activity in anatomically realistic neurons and the imaging process that makes these activity patterns visible. By knowing both the true simulated biological signals and the simulated imaging data, the procedure can benchmark analysis algorithms against a known ground truth. In addition, the method can be used to evaluate imaging modalities and their impact on resolution, movement artifacts and other factors for a specific use case.

More details: Neurons are simulated as realistic 3D structures with dendrites and axons, based on 3D EM and light microscopy data of neuronal tissue. Their activity is simulated as spikes. The spikes are transformed into slower calcium events, which in turn are filtered by the binding kinetics of the calcium indicator. Binding to the calcium indicator gives rise to a change in fluorescence, based on the Hill model of cooperative binding. Then, everything is brought together by simulating the excitation laser beam and the light emitted by the fluorescent calcium sensors, assuming a typical signal-to-noise level and the shot noise induced by the limited number of photons seen by the microscope. Altogether, this is a quite impressive simulation pipeline, covering anatomy, calcium sensors and binding kinetics as well as light propagation and the relevant noise induced by instruments and other variables (for example, residual artifacts induced by the motion of the animal).
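For intuition, the core of such a forward model can be sketched in a few lines – a deliberately minimal toy version (not the NAOMi code; all parameter values are illustrative, not taken from the paper): spikes drive a decaying calcium variable, the Hill equation models cooperative indicator binding, and photon shot noise is added at the end.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of the forward model described above (not the NAOMi code;
# all parameter values are illustrative): spikes -> calcium -> indicator
# binding via the Hill equation -> fluorescence with photon shot noise.
dt = 0.01                                   # time step in s (100 Hz sampling)
t = np.arange(0, 10, dt)
spikes = rng.random(t.size) < 1.0 * dt      # ~1 Hz Poisson spike train

# Each spike adds calcium, which decays back to the resting level
ca_rest, ca_jump, tau = 0.05, 0.2, 0.5      # uM, uM per spike, s
ca = np.full(t.size, ca_rest)
for i in range(1, t.size):
    ca[i] = ca[i-1] + dt * (ca_rest - ca[i-1]) / tau + ca_jump * spikes[i]

# Hill model of cooperative calcium binding to the indicator
# (GCaMP-like kd and Hill coefficient assumed here)
kd, n_hill = 0.3, 2.7                       # uM, dimensionless
bound = ca**n_hill / (kd**n_hill + ca**n_hill)

# Fluorescence: baseline plus indicator signal, corrupted by shot noise
photons_per_sample = 200.0
fluo = rng.poisson(photons_per_sample * (0.2 + 0.8 * bound))

print(fluo.mean())
```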

Evaluating demixing algorithms: In my opinion, the most interesting part of the paper is section 2.3 (“Evaluation of automated segmentation”). Here, the authors use their simulations to benchmark three commonly used algorithms for automated source extraction. These algorithms automatically extract regions of interest (ROIs) from an imaging dataset and are often used as a repeatable and hopefully more objective and reliable replacement for the manual drawing of ROIs by a human. Unfortunately, the authors do not also benchmark these methods against a human expert who manually selects ROIs.
Anyway, for the three state-of-the-art algorithms, the evaluation using the simulated calcium imaging data reveals a rather limited precision of all investigated methods. First, the different algorithms find rather different sets of neuronal activity units, with relatively little overlap between the units found by each algorithm. In addition, the absolute number of units found by each algorithm is quite variable (for a specific simulation of L2/3 imaging in visual cortex, the number of found components varies between 265 and 1091 across algorithms). More details can be found in the relevant section of the paper.
These are interesting findings. They do not disqualify demixing algorithms, but they should lower our trust that the large majority of extracted components is based on the activity signals of single neurons.
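As a side note, the overlap between the ROI sets of two algorithms can be quantified with simple intersection-over-union matching; here is a toy sketch with hypothetical square ROIs (not necessarily the matching criterion used in the paper):

```python
import numpy as np

# Sketch of how the overlap between two algorithms' ROI sets can be
# quantified (hypothetical toy masks, not real segmentation output).
def iou(a, b):
    """Intersection over union of two boolean ROI masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def matched_fraction(rois_a, rois_b, thresh=0.5):
    """Fraction of ROIs in rois_a with an IoU >= thresh partner in rois_b."""
    hits = 0
    for a in rois_a:
        if any(iou(a, b) >= thresh for b in rois_b):
            hits += 1
    return hits / len(rois_a)

# Two toy 'algorithms' that found slightly shifted square ROIs in a 64x64 FOV
def square(x, y, size=8, fov=64):
    m = np.zeros((fov, fov), dtype=bool)
    m[y:y+size, x:x+size] = True
    return m

algo1 = [square(10, 10), square(30, 30), square(50, 12)]
algo2 = [square(11, 10), square(31, 31), square(20, 45)]   # 2 of 3 overlap

print(matched_fraction(algo1, algo2))  # 2/3
```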

The devil’s advocate: The strength of the simulation in this paper (its realism) is also its weakness. The authors themselves state:

“Simulation-based approaches, however, often suffer from being either too simple or too complex.”

How can we be sure that they have found exactly the right level of detail? We can’t, or at least not easily. There is an endless list of details that could have been omitted: for example, the calcium indicator concentration was set to 10 μM for the simulations, but this value might differ between neuronal types and depend on other factors as well. To give another example, the simulation assumes that the calcium concentration is constant across the dendritic tree, thereby omitting the known effect of localized calcium dynamics in dendritic spines.
And there are many more approximations (optics, physiology, anatomy) made by the authors that are difficult to judge. For example, I would have liked to see a simulated excitation PSF at a certain tissue depth alongside a measured PSF at the same depth (e.g., using beads injected into cortex), to judge whether the scattering simulation, which is too complex to assess without investing a lot of time, is realistic. However, these are details, and for most of the analyses performed in the paper (like the discussion of demixing algorithms), this level of detail of the simulation is probably not important – and I’m really looking forward to having this simulation tool publicly available. If it is user-friendly and easy to adapt, it could become a standard tool to check in silico what to expect from in vivo experiments.

Conclusion: Very interesting and possibly useful work, although it is difficult to understand the limitations and details of the simulation.

Further reading: A paper from the same lab on a related topic: Gauthier, J. L., Tank, D. W., Pillow, J. W. & Charles, A. S. Detecting and Correcting False Transients in Calcium Time-trace Inference.

Other paper reviews on this blog: on precise balance in the hippocampus [1]; on L5 apical dendrites [2].

Posted in Calcium Imaging, Data analysis, Imaging, Microscopy, Neuronal activity, Reviews | 2 Comments

The cell-attached soundtrack of calcium imaging

Old-school electrophysiologists like to listen to the ephys signal during experiments. For example, this allows one to hear precisely when the patch pipette approaches a target neuron. The technique is discussed in the Axon Guide: “Audio Monitor: Friend or Foe?”.

The following is something very similar: calcium imaging of neuronal activity (GCaMP6f), with an audio track derived from a simultaneously performed cell-attached recording of the same neuron (detected action potentials are convolved with a particular sound event – two pieces of metal hitting each other, an anvil impact, or gunshots from the “firing” neuron).
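The construction of such a soundtrack is simple enough to sketch: place a short sound snippet at each detected spike time, which is equivalent to convolving the spike train with the snippet. The toy sketch below uses a synthetic metallic “clink” and invented spike times instead of the real cell-attached recording and the sound samples linked below:

```python
import numpy as np
import wave

rng = np.random.default_rng(3)

# Sketch of the soundtrack idea: superpose a short sound snippet at each
# detected spike time (equivalent to convolving the spike train with the
# snippet). The snippet here is a synthetic decaying high-pitched tone;
# spike times are invented for this example.
fs = 44100                                            # audio sample rate, Hz
duration = 5.0                                        # s of recording
spike_times = np.sort(rng.uniform(0, duration, 12))   # detected spikes, s

snip_t = np.arange(0, 0.15, 1 / fs)
snippet = np.sin(2 * np.pi * 2500 * snip_t) * np.exp(-snip_t / 0.03)

audio = np.zeros(int(duration * fs) + snippet.size)
for ts in spike_times:
    i = int(ts * fs)
    audio[i:i + snippet.size] += snippet              # one clink per spike

audio /= np.abs(audio).max()                          # normalize to [-1, 1]
with wave.open("firing_neuron.wav", "wb") as w:       # 16-bit mono PCM
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes((audio * 32767).astype(np.int16).tobytes())
```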

Three times the same calcium recording, but with different soundtracks:

CC-BY 3.0, http://soundbible.com/1750-Hitting-Metal.html

CC-BY 3.0, http://soundbible.com/1742-Anvil-Impact-1x.html

CC-BY 3.0, http://soundbible.com/2123-40-Smith-Wesson-8x.html

Posted in Calcium Imaging, electrophysiology, Imaging, Neuronal activity, zebrafish | Leave a comment