The excitation PSF in 2P point scanning

For quite some time, I was unsure why images degrade when imaging deeper layers with 2P point scanning. Even after doing the estimates presented below, this has remained largely unclear to me, but at least I am now getting a feeling for it.

The first difficulty lies in estimating whether the degradation comes from a degradation of the excitation PSF, or from low signal due to fluorescent photons that are scattered and therefore never reach the detector. The mean free path is around 200 µm for the IR excitation light, and around 50 µm for the visible emission light that has to be detected. Continue reading
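
To get a rough feeling for what these two length scales mean at depth, here is a minimal back-of-the-envelope sketch, assuming simple exponential (Beer-Lambert) attenuation of the ballistic light; the depths and the simple model are my own assumptions, and in practice scattered emission photons can still be collected, so the detection-side number is pessimistic:

    import numpy as np

    l_ex = 200.0   # mean free path of the IR excitation light, in um
    l_em = 50.0    # mean free path of the visible emission light, in um

    for z in [100, 200, 300, 400]:                  # imaging depth in um
        # the 2P signal from ballistic excitation scales with the intensity squared
        excitation_2p = np.exp(-2 * z / l_ex)
        # fraction of emitted photons that leave the tissue without being scattered
        ballistic_emission = np.exp(-z / l_em)
        print(f"z = {z:3d} um:  2P excitation ~ {excitation_2p:.3f},  "
              f"ballistic emission ~ {ballistic_emission:.4f}")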

Posted in Uncategorized | Leave a comment

Odd opinions in neurobiology

I have the impression that neurobiology is the one field in biology where many people with sometimes rather selective knowledge of the field have strong opinions about how it should work (i.e. how the brain should work). I don't want to say that these opinions are worthless. On the contrary, I enjoy them as inspiring reading that questions some of the beliefs that have trickled down to me over the last years through habituation. As a common feature, these theories present their new approach as a game-changer that offers a hitherto overlooked point of view on the brain. Nobody should be surprised by this: despite all efforts, an understanding of what specific information processing in the brain could look like remains almost completely in the dark, and so the argument goes that the most likely explanation for this lack of progress is a generally misguided paradigm in neurobiology, and that the solution would be a paradigm change.

One example of a strong opinion that I came across roughly two years ago was the blog neuroelectrodynamics.blogspot.ch. (The author of the blog is apparently a professional neuroscientist.) On his blog, he highlights the importance of spike shape and propagation direction, and the possibility that these details of ion currents may enable computations in neurons that are far more sophisticated than what simple firing rate models allow. It's an interesting read because it questions commonplaces in neuroscience, although some of the positions it attacks (firing rate models) are not necessarily part of how most neuroscientists think about information processing anymore.

[Update 2018.] One example I came across recently is the blog mythsofvisionscience.wordpress.com, which questions the assumptions underlying the field of vision neuroscience (especially V1 research). It does not come up with its own crackpot theory of how the visual system should work, but instead points out many small and big weaknesses and flaws of current research in vision neuroscience. The blog is run by Lydia Maniatis.

Another (very different) example I found recently on the web is this homepage written in German: www.straktur.de. It's also available in English, but I have the feeling that it has been translated using Google Translate …
It stems from a mathematician who emphasizes the role of glia cells; more precisely, he hypothesizes that glia cells, responsible for nourishing neurons, evaluate the performance of neurons and support them accordingly. A dysfunctional neuron would therefore simply be discarded by cutting off the support coming from the glia cells. In this way, optimized information processing would arise naturally from a simple energy constraint (and this is where you can see the handwriting of a mathematician) – a basically interesting idea.
The author doesn't really explain how the glia cells might be able to evaluate the performance of the surrounding neurons, but for simple circuits one can imagine that this is possible.

At least for me, these kinds of opinions are sometimes much more inspiring than a typical Nature/Neuron/Cell paper, because the opinion presented is strong, wants to convince the reader, questions the authorities in the field and often presents its theory as the holy grail, reminding you that there is something bigger out there (the holy grail) and that there is still a lack of understanding when it comes to specific information processing in neurons (or glia).

Posted in Uncategorized | Leave a comment

Genetically encoded voltage sensors

Genetically encoded calcium indicators (GECIs) are nowadays commonly used to report the activity of many cells in transgenic animals; similarly, injected dyes like Rhod-2 can act as optical calcium reporters. The main shortcoming of this approach is that it measures neuronal activity only indirectly (via the calcium concentration) and, partly as a consequence, at low temporal resolution (rise and decay times between 20 and 100 ms, or even more).

How nice would it be to have a genetically encoded fluorescent protein that changes its fluorescence not in response to the calcium concentration, but to the electrophysiologically more relevant observable, the transmembrane voltage, better known as the membrane potential. Indeed, such proteins exist. A number of recent papers made me aware of this fact; this week I gave a short journal club about these indicators, and here I want to briefly summarize what is, to the best of my knowledge, the state of the art.

Continue reading

Posted in Uncategorized | Leave a comment

Neuroengineering blogs

Here are some neuroscience blogs that do not focus on biological questions (here's a list of circuit neuroscience blogs), but rather on the technical hurdles of lab work and how to overcome them. The blogs below are to some extent incarnations of a DIY spirit and of custom-built solutions, with an inclination towards openness and a willingness to share:

Labrigger is the most comprehensive and helpful blog on technical problems and solutions around neuroscience and neuroengineering, with a focus on two-photon microscopy. It is run by Spencer Smith from UCSB: http://labrigger.com/blog/

This blog, run by Dario Ringach from UCLA, covers technical aspects of resonant scanning 2P microscopy and is affiliated with Neurolabware, a 2P microscope vendor: https://scanbox.wordpress.com/

[Update 2017:] A blog by neuroscientist and “lab hacker” Bill Connelly from the Australian National University: http://www.billconnelly.net/

[Update 2018:] A blog by neuroscientist Jakob Voigts, a lab head at Janelia since 2022 and a co-founder of Open Ephys: http://jvoigts.scripts.mit.edu/blog/

[Update 2019:] A blog highlighting cheap and often open solutions for technical problems in the lab: http://www.labonthecheap.com/

[Update 2020:] The Neurowire blog, run by the company Scientifica, offers detailed and competent advice on everything related to imaging and electrophysiology in neuroscience. Especially the #labhacks posts are very informative.

[Discontinued since March 2017:] Microscopy development summaries and some toy project descriptions, written by Kurt Thorn, who was running the Nikon microscopy facility at UCSF – he’s not a neuroscientist, but imaging methods are always relevant: http://nic.ucsf.edu/blog/

Posted in Imaging | 2 Comments

PhD at the FMI

In April, I started my PhD in neuroscience in the Friedrich lab at the FMI. The topic will be the investigation of brain areas involved in higher olfactory processing in the zebrafish, and I'll be working with different physiological methods.

Posted in Uncategorized | Leave a comment

Beyond correlation analysis: Dynamic causal modeling (DCM)

I was surprised to find a method like DCM in Olav Stetter's list (link) of methods for neural network analysis (even as a so-called 'standard method'), because it differs from the methods I discussed before. I will now describe why this method is not my method of choice for analyzing activity data and for understanding neuronal networks.

Continue reading

Posted in Data analysis, Network analysis | Leave a comment

Beyond correlation analysis: Transfer entropy

When reading through the first informative web pages on transfer entropy, it quickly becomes clear how closely the concept is related to mutual information, and even more closely to incremental mutual information; and although it is based on a totally different approach, it tries to construct a measure of time-shifted influence similar to Granger causality. The main difference: Granger causality is based on simple linear fit prediction, whereas transfer entropy is based on information theory.

I haven't found anything on the web that explains transfer entropy in simple pictures for the layman – quite a shame, considering the attention transfer entropy has recently gained in neuroscience. So I will refer to a highly cited article by Thomas Schreiber, which is freely available on the arXiv (link). On the first two pages, almost everything that is needed is explained. I suppose, however, that Schreiber's background is theoretical physics.
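
As a simple picture in code rather than in words, here is a minimal plug-in estimator of the transfer entropy T(Y → X) with history length 1 (naive binning, no bias correction; the function and variable names are my own and this is only a sketch of the definition, not a serious estimator):

    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y, bins=2):
        # Naive plug-in estimate of T(Y -> X) in bits, history length 1:
        #   T = sum p(x_t+1, x_t, y_t) * log2[ p(x_t+1 | x_t, y_t) / p(x_t+1 | x_t) ]
        x = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
        y = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
        triples = list(zip(x[1:], x[:-1], y[:-1]))        # (x_t+1, x_t, y_t)
        n = len(triples)
        c_xyz = Counter(triples)                          # counts of (x_t+1, x_t, y_t)
        c_yz = Counter((b, c) for _, b, c in triples)     # counts of (x_t, y_t)
        c_xz = Counter((a, b) for a, b, _ in triples)     # counts of (x_t+1, x_t)
        c_z = Counter(b for _, b, _ in triples)           # counts of (x_t)
        te = 0.0
        for (a, b, c), n_abc in c_xyz.items():
            te += (n_abc / n) * np.log2((n_abc / c_yz[(b, c)]) / (c_xz[(a, b)] / c_z[b]))
        return te

    # Toy example: y drives x with a one-step delay, so T(y->x) should clearly exceed T(x->y)
    rng = np.random.default_rng(0)
    y = rng.normal(size=5000)
    x = np.roll(y, 1) + 0.5 * rng.normal(size=5000)
    print("T(y->x) =", transfer_entropy(x, y), " T(x->y) =", transfer_entropy(y, x))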

It’s instructive to compare mutual information with transfer entropy. Continue reading

Posted in Data analysis, Network analysis | 1 Comment

Beyond correlation analysis: Granger causality

Granger causality is named after the econometrician Clive Granger and has been adopted over the last 10-15 years as a time-series analysis tool in neuroscience. The best account of this topic that I have found is, again, on Scholarpedia (link). The idea is quite simple: you have a time series (e.g. an activity trace) X and a time series Y. You want to know whether the past of time series Y can, in addition to the past of time series X itself, help to predict the future of time series X. Prediction here is nothing but linear regression, in effect a mixture of auto- and cross-regression (copied from Scholarpedia):

X_1(t) = \sum_{j=1}^{p} A_{11,j} X_1(t-j) + \sum_{j=1}^{p} A_{12,j} X_2(t-j) + E_1(t)
X_2(t) = \sum_{j=1}^{p} A_{21,j} X_1(t-j) + \sum_{j=1}^{p} A_{22,j} X_2(t-j) + E_2(t)
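
In practice this amounts to comparing the residuals of a restricted model (only the past of X_1) with those of the full model (past of X_1 and X_2). A minimal least-squares sketch of this comparison (no significance testing; names and model order are my own choices):

    import numpy as np

    def granger_causality(x1, x2, p=2):
        # Does the past of x2 help to predict x1 beyond x1's own past?
        # Returns log(var(restricted residuals) / var(full residuals)),
        # which is > 0 if x2 'Granger-causes' x1.
        T = len(x1)
        target = x1[p:]                                                         # X_1(t), t = p .. T-1
        lags_x1 = np.column_stack([x1[p - 1 - j:T - 1 - j] for j in range(p)])  # X_1(t-1-j)
        lags_x2 = np.column_stack([x2[p - 1 - j:T - 1 - j] for j in range(p)])  # X_2(t-1-j)

        def residual_variance(design):
            coef, *_ = np.linalg.lstsq(design, target, rcond=None)
            return np.var(target - design @ coef)

        var_restricted = residual_variance(lags_x1)                      # past of x1 only
        var_full = residual_variance(np.hstack([lags_x1, lags_x2]))      # past of x1 and x2
        return np.log(var_restricted / var_full)

    # Toy example: x2 drives x1 with a one-sample delay
    rng = np.random.default_rng(1)
    x2 = rng.normal(size=3000)
    x1 = 0.8 * np.roll(x2, 1) + rng.normal(size=3000)
    print(granger_causality(x1, x2), granger_causality(x2, x1))  # first value should be clearly larger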

Continue reading

Posted in Uncategorized | 1 Comment

Some interesting 2-photon microscopy papers

In the last few months, I built a special kind of 2P microscope. In the meantime, I came across some papers on microscopy techniques that I found interesting and worth a side note.

  • Using AODs instead of galvo scanners for point scanning: High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. In contrast to resonant galvo scanners, you can still define arbitrary scanning paths. In a more recent paper which I don't find right now, it is shown that by defining the scan for the AODs not by a continuous path but only by its endpoints, you can effectively go as fast as you want. (This is not shown for population calcium imaging, but for imaging of dendrites/spines.) The prospect of, in principle, not being limited by scanning is quite promising.
  • Using several beams to scan several z-layers at once: Simultaneous two-photon calcium imaging at different depths with spatiotemporal multiplexing. The idea is quite simple; it is based on the fact that the typical fluorophore lifetime (1-3 ns) is shorter than the time window between two laser pulses (typically 12.5 ns), so that pulses with different z-foci can each be delayed by a few nanoseconds and the fluorescence can be gated in time (a minimal sketch of this demultiplexing idea follows after this list).
  • Instead of using a moving objective, they used a moving mirror to do the z-scan: Aberration-free three-dimensional multiphoton imaging of neuronal activity at kHz rates. I very much liked the idea of mounting the mirror on two galvanometers instead of using a fast piezo, as I would have done at first. Piezos at the objective holder are the standard way to change the z-focus and are quite fast by now (settling times of 2-4 ms at most), but they induce vibrations in the setup and are only that fast if they carry a light load and have a limited travel range (ca. 100 µm).
  • Temporal focusing is a method to provide z-sectioning in a widefield setup. This was the reason why I came to Vienna in 2013: Brain-wide 3D imaging of neuronal activity in Caenorhabditis elegans with sculpted light. I spent five months improving this setup.
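To illustrate the time-gating behind spatiotemporal multiplexing, here is a minimal sketch of how detected photons could be assigned back to their beam (and thus z-plane); the repetition rate, number of beams and names are my own illustrative assumptions, not details from the paper:

    import numpy as np

    pulse_period_ns = 12.5                   # 80 MHz laser repetition rate
    n_beams = 4                              # number of delayed beams / z-planes
    beam_delays = np.arange(n_beams) * pulse_period_ns / n_beams   # 0, 3.125, 6.25, 9.375 ns

    def assign_beam(photon_times_ns):
        # Fold absolute photon arrival times back onto one pulse period and
        # assign each photon to the gating window of the beam it came from.
        phase = photon_times_ns % pulse_period_ns
        return np.digitize(phase, beam_delays) - 1

    # Example: photons arriving shortly after the pulses of beams 0, 2 and 3
    print(assign_beam(np.array([1.0, 7.0, 72.0])))   # -> [0 2 3]
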
Posted in Uncategorized | Leave a comment

Beyond correlation analysis: incremental mutual information

Incremental Mutual Information: A New Method for Characterizing the Strength and Dynamics of Connections in Neuronal Circuits is a 2010 paper by A. Singh and N. Lesica in PLoS Computational Biology that describes a method which can be used as an alternative to correlation analysis for some cases.*

What does Incremental Mutual Information (IMI) promise compared to correlation analysis and correlation functions? First, like mutual information, which I have discussed before, it also captures non-linear dependencies between neuronal activities. Second, “it has the potential to disambiguate statistical dependencies that reflect the connection between neurons from those caused by other sources (e.g. shared inputs or intrinsic cellular or network mechanisms) provided that the dependencies have appropriate timescales” (taken from the abstract). This sounds interesting, but we will have to come back to these 'appropriate timescales' in more detail later.
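
The core ingredient behind this promise is conditioning: how much does neuron Y still tell us about neuron X once the 'other sources' are already accounted for? As a generic building block (not the paper's exact estimator; the names and the toy data are my own), here is a minimal plug-in sketch of a conditional mutual information I(X; Y | Z) for discrete sequences:

    import numpy as np
    from collections import Counter

    def conditional_mutual_information(x, y, z):
        # Plug-in histogram estimate of I(X; Y | Z) in bits:
        # how much does Y still tell us about X once Z is already known?
        n = len(x)
        c_xyz = Counter(zip(x, y, z))
        c_xz = Counter(zip(x, z))
        c_yz = Counter(zip(y, z))
        c_z = Counter(z)
        cmi = 0.0
        for (a, b, c), n_abc in c_xyz.items():
            # p(x,y|z) / (p(x|z) p(y|z)) = n_xyz * n_z / (n_xz * n_yz)
            cmi += (n_abc / n) * np.log2(n_abc * c_z[c] / (c_xz[(a, c)] * c_yz[(b, c)]))
        return cmi

    # Toy example: two 'neurons' x and y that are both driven by a shared input z;
    # they are clearly correlated, but the dependence vanishes once z is conditioned on.
    rng = np.random.default_rng(2)
    z = rng.integers(0, 2, 20000)
    x = np.where(rng.random(20000) < 0.8, z, 1 - z)   # noisy copy of z
    y = np.where(rng.random(20000) < 0.8, z, 1 - z)   # another noisy copy of z
    print(conditional_mutual_information(x, y, z))    # close to 0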

Continue reading

Posted in Uncategorized | Leave a comment