Large-scale calcium imaging & noise levels

Calcium imaging based on two-photon scanning microscopy is a standard method to record the activity of neurons in the living brain. Due to the point-scanning approach, sampling speed is limited, and the dwell time on a single neuron decreases with the number of recorded neurons. One therefore has to trade off the number of quasi-simultaneously imaged neurons against the shot noise level of these recordings.

To give a simplified example, one can distribute the laser power in space and time over 100 neurons at 30 Hz, or over 1000 neurons at 3 Hz. Due to the lower sampling rate, the signal-to-noise ratio (SNR) of the 1000-neuron recording will be lower as well.
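How much lower? As a back-of-the-envelope sketch (assuming shot-noise-limited recordings and a fixed total photon budget): the dwell time per neuron and frame is identical in both configurations, so the noise per frame stays the same, but the 3 Hz recording collects ten times fewer samples per unit time. Averaged over a fixed time window, the effective SNR therefore drops by a factor of

\sqrt{30\,\mathrm{Hz} \,/\, 3\,\mathrm{Hz}} = \sqrt{10} \approx 3.2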

A standardized noise level

To compare the shot noise levels across recordings, in our recent paper (Rupprecht et al., 2021) we took advantage of the fact that the slow calcium signal is typically very similar between adjacent frames. Therefore, the noise level can be estimated by

\nu = \frac{\mathrm{median}_t \left| \Delta F/F_{t+1} - \Delta F/F_t \right|}{\sqrt{f_r}}

The median excludes outliers that stem from the fast onset dynamics of calcium signals. The normalization by the square root of the frame rate f_r makes the metric comparable across datasets recorded at different frame rates.

Why the square root? Because shot noise decreases with the square root of the number of sampling points. The only downside of this measure is that its units seem a bit arbitrary (% for dF/F, divided by the square root of seconds), but this does not make it less useful. To compute it on a raw dF/F trace, simply use this one-liner in Matlab:

noise_level = median(abs(diff(dFF_trace)))/sqrt(framerate)

Or in Python:

import numpy as np
noise_level = np.median(np.abs(np.diff(dFF_trace)))/np.sqrt(framerate)
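For a whole dataset, the same line generalizes to many neurons at once. Here is a minimal sketch (assuming dFF_traces is a 2D NumPy array of shape neurons x timepoints, in % dF/F):

import numpy as np

# noise level per neuron: median absolute frame-to-frame difference,
# normalized by the square root of the frame rate
noise_levels = np.median(np.abs(np.diff(dFF_traces, axis=1)), axis=1) / np.sqrt(framerate)

# single summary value for the whole dataset
dataset_noise = np.median(noise_levels)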

If you want to know more about this metric, check out the Methods section of our paper for more details (bioRxiv / Nature Neuroscience, subsection “Computation of noise levels”).

The metric \nu comes in handy if you want to compare shot noise levels between calcium imaging datasets and understand whether a given noise level is relatively high or low. So, what is a “high” noise level?

Comparison of noise levels and neuron numbers across datasets

I collected a couple of publicly available datasets (links and descriptions in the appendix of this blog post) and extracted both the number of simultaneously recorded neurons and the shot noise level \nu. Each data point stands for one animal, except for the MICrONS dataset, where each data point stands for a separate session in the same animal.

As a reference, I used the Allen Brain Institute Visual Coding dataset. For excitatory neurons, typically 100-200 neurons were recorded at a standardized noise level of 1 (units omitted for simplicity). If you distribute the photons across an increasing number of neurons, the shot noise level should increase with the square root of this multiple (indicated by the black line). Datasets with inhibitory neurons (de Vries et al., red) contain by experimental design fewer neurons and therefore lie above the line.

A dataset that I recorded in zebrafish, with typically 800-1500 neurons per recording, lies pretty much on this line. The same holds for the MICrONS dataset, where a mesoscope was used to record from several thousand cortical neurons simultaneously, at the cost of a lower frame rate and therefore higher noise levels, and for the dataset by Sofroniew et al., which covers ca. 3000 neurons, all recorded from a single plane in a large FOV.

Two datasets acquired by Pachitariu and colleagues stand out a bit by pushing the number of simultaneously recorded neurons. In 2018, this came at the expense of increased noise levels (pink). In 2019 (a single mouse; grey), the noise level was impressively low despite ca. 20,000 simultaneously recorded neurons.

In regular experiments, noise levels should not be minimized at the cost of physiological damage, such as laser-induced photodamage or problems due to overexpression of indicators. For example, the mouse from the MICrONS dataset was later used for dense EM reconstruction; any sort of damage to the tissue, which might be invisible at first glance, could complicate the subsequent diffusive penetration with heavy metals or the cutting of nanometer-thick slices. As a bottom line, there are often good reasons not to go for the highest signal yield.

Spike inference for high noise levels

To give an idea of what such a noise level means, here is an example from the MICrONS dataset. Due to the noisiness of the recording (noise level of ca. 8-9), only large transients can be reliably detected. I used spike inference with CASCADE to de-noise the recording. It is also clear from this example that CASCADE extracts useful information but cannot recover anything close to single-spike precision at such a noise level.

Above, the smooth inferred spike rates (orange) are shown together with the discrete inferred spikes (black). The discrete spikes (black) are nice to look at, but due to the high noise level, the discretization into binary spikes mostly overfits the noise and should be avoided for real analyses. For analyses, I would use the inferred spike rates (orange).
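For those who want to try this on their own data, here is a minimal sketch of how spike inference with CASCADE looks in Python (assuming the cascade2p package is installed; the model name below is a placeholder and should be chosen to match the frame rate and noise level of the data):

import numpy as np
from cascade2p import cascade

# dFF_traces: 2D array (neurons x timepoints) of raw dF/F values

# download a pretrained model (placeholder name; pick one that matches
# your frame rate from the list of available models)
model_name = 'Global_EXC_30Hz_smoothing100ms'
cascade.download_model(model_name)

# inferred spike rates, same shape as the input traces
spike_rates = cascade.predict(model_name, dFF_traces)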

Conclusion

The standardized noise level \nu can be used to quantitatively compare shot noise across recordings. I hope that this metric will also be useful to others for their own work.

As a note of caution, \nu should never be the sole criterion for data quality. Other factors like neuropil contamination, spatial resolution, movement artifacts, potential downsides of overexpression, etc. also play important roles. A low shot noise level is not a guarantee for anything; a high shot noise level, however, is always undesirable.


Appendix: Details about the data shown in the scatter plot

de Vries et al. (2020; red and black) describes the Allen Visual Coding Observatory dataset. It includes recordings from more than 100 mice with different transgenic backgrounds in different layers of visual-related cortices. Red dots are datasets from mice that expressed calcium indicators only in interneurons, while black dots are datasets with cortical principal neurons of different layers. The datasets are highly standardized and show low shot noise levels (standardized level of ca. 1.0), with relatively few neurons per dataset (100-200).

Rupprecht et al. (unpublished; green) is a small dataset in transgenic Thy-1 mice in hippocampal CA1 that I recorded as a small pilot earlier this year. The number of manually selected neurons is around 400-500, at a standardized noise level of 2.0-3.0. With virally induced expression and higher laser power (here, I used only 20 mW), lower noise levels and higher cell counts could easily be achieved in CA1.

Rupprecht et al. (2021; violet) is a dataset using the small-molecule dye indicator OGB-1 injected in the homolog of olfactory cortex in adult zebrafish. At low laser powers of ca. 30 mW, 800-1500 neurons were recorded simultaneously at a standardized noise level of 2.0-4.0.

Sofroniew et al. (2016; light green) recorded a bit more than 3000 neurons simultaneously at a relatively low imaging rate (1.96 Hz). Unlike all other datasets with >1000 neurons shown in the plot, they recorded from a single but very large field of view. All neuronal ROIs had been drawn manually, which I really appreciate.

Pachitariu et al. (2018; pink) is a dataset recorded at a relatively low imaging rate (2.5 Hz), covering ca. 10,000 neurons simultaneously. The standardized noise level seems to be rather high according to my calculations.

Pachitariu et al. (2019; black) is a similar dataset that contains ca. 20,000 neurons, but at a much lower standardized noise level (4.0-5.0). The improvement compared to the 2018 dataset was later explained by Marius Pachitariu in this tweet.

MICrONS et al. (2021; red) is a dataset from a single mouse, each dot representing a different session. 8 imaging planes were recorded simultaneously at laser powers that would not damage the tissue, in order to preserve the brain for later slicing, with the ultimate goal of imaging the ultrastructure with electron microscopes. The number of simultaneously imaged neurons comes close to 10,000, resulting in a relatively high standardized noise level of 7.0-10.0.

The black line indicates how the noise level scales with the number of neurons. For n_1 = 150 neurons (Allen dataset, de Vries et al.), a standardized noise level of \nu_1 = 1.0 can be assumed. For higher numbers of neurons n_2, the noise level \nu_2 scales as \nu_2 = \nu_1 \cdot \sqrt{n_2/n_1}. Deviations from the line indicate where recording conditions were better or worse compared to these “typical” conditions.
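To re-plot this reference line, here is a minimal sketch (using the reference values n_1 = 150 and \nu_1 = 1.0 from the Allen dataset as described above):

import numpy as np

# reference point: Allen Visual Coding dataset
n_ref = 150    # number of neurons
nu_ref = 1.0   # standardized noise level

# expected noise level when the photon budget is spread over n neurons
def expected_noise_level(n):
    return nu_ref * np.sqrt(n / n_ref)

# example: ca. 10,000 neurons (MICrONS-like) -> ca. 8.2
print(expected_noise_level(10000))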
