Entanglement of temporal and spatial scales in the brain, but not in the mind

In physics, many problems can be solved by a separation of scales and thereby become tractable. For example, let’s have a look at surface waves on water: they are rather easy to understand when the wavelength of the waves is much larger or much smaller than the depth of the water, but not if both scales are similar (Wikipedia).

To give another example, light scattered by small particles (like fat droplets in milk, or water drops in a cloud) can be described more easily if the wavelength of the light is much larger (Rayleigh scattering) or much smaller than the particles, but not if it is of the same order of magnitude (Mie scattering). Separation of scales is often key to making a problem tractable by mathematics.

What physicists like even more than the separation of spatial scales is the separation of different temporal scales. For example, consider two variables A(t) and B(t) that influence each other:

\tau_1 \frac{dA}{dt} = f(A,B) \\
\tau_2 \frac{dB}{dt} = g(A,B)

If the timescales separate, for example \tau_1 \gg \tau_2, the fast variable B(t) sees the slow variable A(t) as essentially constant, while A(t) in turn only ever sees the quasi-steady state of B(t). In this case, the variables can be decoupled, and the problem is often solvable. (Sidenote: In very simple and idealized systems without separation of scales, for example during certain phase transitions, mathematical physics can still come to the rescue and provide some clean solutions. But in most systems, this is not the case.)
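To make this concrete, here is a minimal numerical sketch (a toy example of my own, with arbitrary linear choices for f and g), showing that for \tau_1 \gg \tau_2 the fast variable B hugs its quasi-steady state for the current value of A, so the slow dynamics effectively reduce to a one-dimensional equation for A alone:

```python
# Toy example of timescale separation: tau1 >> tau2.
tau1, tau2 = 100.0, 1.0
f = lambda A, B: -A + B        # dynamics of the slow variable A
g = lambda A, B: -B + 0.5 * A  # the fast variable B relaxes toward 0.5 * A

A, B, dt = 1.0, 0.0, 0.01
for step in range(200_000):    # forward Euler, 2000 time units
    A += dt / tau1 * f(A, B)
    B += dt / tau2 * g(A, B)
    if step % 50_000 == 0:
        # B stays pinned near its quasi-steady state g(A, B) = 0, i.e. B ≈ 0.5 * A,
        # so A effectively obeys dA/dt = f(A, 0.5 * A) / tau1 on its own.
        print(f"t = {step * dt:7.1f}   A = {A:.4f}   B = {B:.4f}   0.5*A = {0.5 * A:.4f}")
```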

I am convinced that it is not only in physics and mathematics that problems become easier through a separation of scales. I think this applies even more to our intuition and our own understanding of the world. We automatically try to disentangle systems by using hierarchies and separations of length- and timescales, and if we are unable to do so, our intuition fails, just as the physical analysis does.

What about the brain? In my opinion, the brain is one of those systems that will defy human attempts to understand it by separating temporal processes or spatial modules. The brain comprises an enormous number of processes at different temporal and spatial scales that, however, overlap with each other and cannot be easily segregated. For example, on the timescale of a few hundred milliseconds, many different processes are non-stationary and therefore relevant at the same time: neuromodulation of many kinds; spike frequency adaptation and presynaptic adaptation and facilitation; diffusion of proteins across spines, or of ions across dendrites; calcium spikes; NMDA currents; et cetera. At a timescale of 1000 ms or 10 ms, a different but overlapping set of processes is non-stationary. In short, it seems likely to me that the brain consists of a temporal and spatial continuum of processes, rather than a hierarchy.

Why would this be so? Because, as far as I can see, there is no incentive for nature to prevent the entanglement of the temporal and spatial scales of all those processes. On the contrary, such interactions may offer advantages that emerge randomly through evolution, at the cost of higher complexity. Nature, which does not need to understand itself, probably does not care much about an increase in complexity, unlike the biologists working to disentangle the chaos.
It is perhaps misleading to personify ‘nature’ and to speak of an ‘incentive’. It is probably more accurate to derive these processes from ‘entropic forces’, which make any ordered system, including the organic and cellular systems invented by evolution, less ordered and therefore more chaotic over time. Even if there was order once (think of a glass of water that is strictly colored green in its left half and blue in its right half), random changes, which are the driving force of evolution, will undo this order (nothing can prevent the green and blue water from mixing over time through the random motion of its molecules, that is, diffusion).

In addition to the deficiencies of our mind and of our mathematical tools when it comes to entangled scales, I suspect, based on personal experience, that humans are to some extent unable to bring together knowledge from different hierarchies. In neuroscience, most researchers stick to one small level of observation and the related processes, and in most cases it is very difficult to bridge the gaps between levels. For example, “autism” can be addressed by a neurologist who thinks about case studies and very specific behavioral observations of her patients; by a geneticist looking for combinations of genes that make a certain autistic feature in humans more likely; or by a neurophysiologist studying neurons in animals or in vitro models of autism, trying to dissect the contributions of neuronal connectivity or ion channel expression. Many people believe (or hope) that with sufficient knowledge and understanding, these different levels of observation will fuse together, resulting in a complete understanding that pervades all levels. I would argue – and I’d like to be disproven – that a more pessimistic view is probably more realistic: humans may never achieve an understanding of neuronal circuits and the brain that is deep enough to bridge the gaps between the levels.

The limitations of both our mathematical tools and our mind when it comes to complex systems are obvious when we think of deep learning. For this field of machine learning, unlike for the brain, we know all the basic principles (because we have defined them ourselves): back-propagation of errors, gradient descent algorithms for optimization, weight-sharing in convolutional networks, rectified linear units (or maybe LSTM units), and a few more. Compared with the brain, the system is not very complex, and we can observe everything throughout the process without interfering with its operation. Still, although the process is 100% transparent, people struggle and fail to understand what is happening and why. There does not seem to be a simple answer to the question of how it works. “What I cannot create, I do not understand”, Feynman famously wrote. But the act of creation does not automatically come with understanding.

Experimental neuroscience might face similar, but probably even more complex problems. The kind of “understanding” of a neuronal process that is accepted by most researchers is a (mathematical or non-mathematical) model that can both reproduce and predict experimental results. However, if biology indeed consists of many processes and components that are entangled in space and time, then any such model must itself be entangled across several temporal and spatial scales. This can be done – no problem. However, such a model will again resist attempts by mathematics or human intuition to understand it, similar to our current lack of understanding of the less complex deep networks. The machine (the model, the computer program) will therefore still be able to deal with the complexity and “understand” the brain, but I am not sure that human intuition will be able to follow.

I don’t want to deny the many pieces of progress that have been made towards a better understanding of the brain. I rather want to point out the limitations of the human mind when it comes to putting the pieces together.


Blue light-induced artifacts in glass pipette-based recording electrodes

Recently, I was carrying out whole-cell voltage-clamp and LFP recordings with simultaneous optogenetic activation of a channelrhodopsin using blue light. Whole-cell voltage-clamp techniques can record the input currents seen by a neuron (previously on this blog [1], [2]); an LFP recording picks up the very small synaptic currents in bulk brain tissue (nicely reviewed by Oscar Herreras); and optogenetics with genetically encoded rhodopsins can make neurons fire using light pulses.

For the LFP recordings, I used the same glass pipette that I had used before for the whole-cell recording of a nearby neuron. In the LFP, I saw a light-evoked response that I first took for a rhodopsin-evoked synaptic current. However, it turned out that I could make the same observation when positioning the pipette tip in the bath instead of in the tissue, which meant that this was clearly not a synaptic current, but an artifact. When I changed the pipette resistance by gently breaking the pipette tip, the light-evoked voltage remained the same, whereas the evoked currents scaled inversely with the pipette resistance Rp – or, more generally, with the resistance between the two electrodes – as expected from Ohm’s law for a constant voltage:

[Figure: light-evoked voltage and current traces for intact and broken pipette tips; the voltage artifact is unchanged, while the current artifact scales with the resistance between the electrodes]

I found out that this sort of artifact was described in the context of tetrode recordings several years ago by Han et al. (2009; supplementary figure 1) and has been tentatively explained by the Becquerel effect (here [update 2022: linked website deleted, therefore the link points to an archived version]), which is better known as the photovoltaic effect. According to Han et al., the effect is stronger for blue light and affects the recorded currents on a slow timescale, such that the highpass-filtering of the recorded signal that is used to detect spikes in tetrode recordings gets rid of this artifact.

In addition, Han et al. state:

We have not seen the artifact with pulled glass micropipettes (such as previously used in Boyden et al., 2005 and Han and Boyden, 2007, or in the mouse recordings described below). Thus, for recordings of local field potentials and other slow signals of importance for neuroscience, hollow glass electrodes may prove useful.

Contrary to this suggestion, my measurements above indicate that using a glass electrode does not always get rid of the artifact. To better understand it, I checked whether it was mediated by the silver chloride electrode in the glass pipette or rather by the ground electrode, and found that both contributed more or less equally in this experiment. Covering an electrode to shield it from the light reduced the magnitude of the artifact.

What does this mean for whole-cell or LFP recordings using a glass pipette? For whole-cell recordings, the resistance between the two electrodes is much larger than for the two traces shown in the plots above, typically between 50 and 2000 MΩ. This reduces the artifact-induced current recorded in voltage-clamp to less than 5 pA for cells at 50 MΩ, and to much less for neurons with higher membrane resistance. In most cases, this is negligible.
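As a quick sanity check with the numbers from above (taking a light-evoked artifact voltage of 250 μV, a representative value within the ‘few hundred μV’ range measured here):

I = ΔV / Rp = 250 μV / 50 MΩ = 5 pA

and correspondingly less for higher resistances.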

For glass pipette-based LFP recordings or loose-seal cell-attached recordings, however, the light-induced voltage change (a few hundred μV, as shown above) is of the same magnitude as a strong LFP signal (see for example figure 1 in Friedrich et al., 2004). Therefore, in order to measure an LFP signal in response to blue light-activated rhodopsins, one needs to account for the artifacts induced by the photovoltaic effect. This can, for example, be done by measuring the light-evoked voltage change with the glass pipette both in the tissue and in the bath and subtracting the latter measurement from the former on a pipette-by-pipette basis.
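A minimal sketch of such a pipette-by-pipette correction, assuming the tissue and bath recordings are available as NumPy arrays of shape (n_trials, n_samples), aligned to light onset (names and array layout are my own, not from any particular acquisition software):

```python
import numpy as np

def subtract_light_artifact(lfp_tissue, lfp_bath):
    """Remove the photovoltaic light artifact from an LFP recording.

    lfp_tissue, lfp_bath: arrays of shape (n_trials, n_samples), recorded
    with the same pipette and the same light stimulus, aligned to light onset.
    """
    artifact = lfp_bath.mean(axis=0)  # average artifact template from the bath
    return lfp_tissue - artifact      # broadcast the template over all trials
```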

I would also be curious about other reports (if there are any) on light-induced artifacts with recording electrodes, and about the circumstances under which they might play a non-negligible role.


Open access 3D electron microscopy datasets of brains

One of the coolest technical developments in neuroscience during the last decade has been driven by 3D electron microscopy (3D EM). It has allowed researchers to cut large chunks of small brains (or small chunks of big brains) into 8-50 nm thick slices, which are then imaged at nanometer resolution, resulting in 3D stacks of imaged tissue. Here, I want to highlight some of those datasets that are easily accessible on the internet but, at least from my impression, under-used by other researchers.

The technical concepts and breakthroughs underlying 3D EM are also very interesting. The three main approaches, serial block-face electron microscopy (SBEM), serial section transmission or scanning electron microscopy (ssTEM or ssSEM) and focused ion beam SEM (FIB-SEM), have been very nicely reviewed by a colleague of mine, Benjamin Titze, including some very beautiful and instructive figures (special recommendation for Fig. 4). Of course, imaging is only part of the challenge: first, the brain tissue must be stained with heavy metals to be visible to electrons; second, after data acquisition, human annotators or algorithms have to extract neuronal morphologies or synapse distributions from the huge datasets.

However, I find the raw 3D EM data very interesting in itself. Such datasets are still rare, but many people do not know that some of them are easily accessible to anyone with an internet connection. And it is a true pleasure to have the full screen filled with the overwhelming clutter of neuronal dendrites and to follow them in 3D just by scrolling with the mouse.

Neurodata.io is probably the best place to start. After a simple registration, one can directly access some of those EM datasets in the browser: ndwebtools.neurodata.io/coll_list, or through other tools. Not all of the datasets are of the highest quality (and it is not always easy for a lay person to judge data quality), but most of them offer highly interesting views into the complexity of the brain (scroll wheel for going through the slices, ctrl + scroll wheel for zooming). Here I want to highlight a few of them. They can be accessed by clicking on the neurodata/ndwebtools link above.
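As a side note for programmatic rather than browser-based access: volumes served in Neuroglancer’s ‘precomputed’ format can be read with the cloud-volume Python package (whether a given dataset is hosted that way varies). A sketch with a hypothetical dataset path, to be replaced with the path listed for the dataset of interest:

```python
from cloudvolume import CloudVolume  # pip install cloud-volume

# Hypothetical path; replace with the precomputed URL of an actual dataset.
vol = CloudVolume("precomputed://gs://example-bucket/example_em_volume",
                  mip=0, use_https=True)

# Download a small 512 x 512 x 16 voxel cutout as a NumPy-like array.
cutout = vol[1024:1536, 1024:1536, 100:116]
print(cutout.shape)  # (512, 512, 16, 1): x, y, z, channel
```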

The following excerpt from the Lee et al. (2016) dataset shows a small zoom-in into mouse somatosensory cortex. A thick dendrite (between the red arrows) passes vertically through the image. In this ssSEM dataset, synapses look really nice (yellow arrow, with a beautiful vesicle cloud below), but they look even nicer in 3D, so you should have a look at the 3D data yourself.

[Figure: EM excerpt from Lee et al. (2016), mouse somatosensory cortex; red arrows: a thick dendrite, yellow arrow: a synapse with a vesicle cloud]

The following picture, from a dataset by the Cardona lab, shows a small zoom-in of the Drosophila brain. (I assume that the scale bar generated for this dataset is a bit off; the 100 nm shown here probably correspond to 500 nm.) The red arrow highlights a filament of the cytoskeleton, probably a microtubule in charge of transport along the dendrite. The pink arrow indicates one of the many mitochondria with its cristae. The yellow arrow indicates a local darkening at the contact site between two neurites, and I have no idea what this is. A gap junction? A strange synapse? A precipitate, i.e., an artifact of the staining procedure?

[Figure: EM excerpt from a Cardona lab dataset, Drosophila brain; red arrow: microtubule, pink arrow: mitochondrion with cristae, yellow arrow: unidentified dark contact site]

Things look very similar in hippocampal CA1, in an ssTEM dataset used by Bloss et al. (2018). This study focuses on the clustering of synapses from single axons. Axons can easily be recognized by their dark, thick myelin sheaths (red arrows). If you have a lot of time, you can scroll through the dataset and try to find a node of Ranvier. As in almost every 3D EM dataset, there are planes or entire regions with low-quality staining, low signal-to-noise imaging, or something else that went wrong. Sometimes this is very local, just a blurring of boundaries (yellow arrows) that is difficult to interpret.

[Figure: ssTEM excerpt from the Bloss et al. (2018) CA1 dataset; red arrows: myelinated axons, yellow arrows: locally blurred boundaries]

And here is a zoomed-out view of a single plane of a dataset by Wanner et al. (2016) of the olfactory bulb of larval zebrafish. Here, the large roundish shapes are not cross-sections of dendrites, but neuronal somata:

[Figure: single plane from the Wanner et al. (2016) dataset of the larval zebrafish olfactory bulb; the large roundish shapes are neuronal somata]

I just want to encourage people to browse through these datasets. Browsing in 3D is much more interesting than looking at these still images. Or, if you are teaching students about neuroscience, why not send them a link so that they can discover neurons themselves by scrolling and zooming through the brains? I haven’t seen many people who were not fascinated when first encountering 3D EM data, or who were not overwhelmed by the sheer amount of dendritic arborization! (And this is a bit funny if we keep in mind that electron microscopy does not see much of the even more complex level of the cell, the crowded microenvironment: a chaos of competing, interacting, diffusing little protein machines.)

As an alternative to neurodata.io that is accessible even without any registration, a couple of test datasets are available with neuroglancer, rendering software developed by Google. Check out the dataset from Takemura et al. (2015) by following this link. It is an isotropically resolved dataset (8 nm in x, y and z). You can use the scroll wheel and ctrl to browse through the stack or to zoom in and out. The software includes three EM viewports and an additional rendering of a number of selected neurons.

Another way to explore 3D EM data is to go to eyewire.org, where one can discover 3D EM data of neurons (retina, based on Briggman et al., 2011) within the framework of a game – which is fun. Over the last couple of years, the user interface has become very pleasant. The downside compared to the other options is that one cannot explore a big dataset freely; plus, there is no labeling of the inner organelles or vesicles of the neurons, which is part of the fun of the other datasets.

To understand more details of these EM images, I found it helpful to go through the first chapter of the book Dendrites (“Dendritic structure”), which can be accessed almost in its full extent via Google Books.

Full disclosure: my current host lab is working on 3D EM data in zebrafish. My own projects do not involve electron microscopy directly.


How well do CNNs for spike detection generalize to unseen datasets?

Some time ago, Stephan Gerhard and I used a convolutional neural network (CNN) to detect neuronal spikes from calcium imaging data. (I have mentioned this before, here, here, and on GitHub.)

This method is covered by the recently published spikefinder paper (Berens et al., 2018), which was based on a competition that featured ground truth for a training and a test dataset. This was a great and useful competition. But there are some important caveats (which are mentioned in the discussion of the main paper). Here, I will discuss one of these caveats.



A list of cognitive biases

There are a handful of cognitive biases that are well-known to most scientists: confirmation bias, the Dunning-Kruger effect, the hindsight bias, the recency effect, the planning fallacy, loss aversion, etc. They should not be taken as universal laws (for example, there was recently some criticism of generalizations of the loss aversion concept), but it is still important to understand which – probably unconscious – biases might shape our behavior, both as humans and as scientists.

I think that some of those biases are worth thinking through if one wants to become better at planning (for example, of scientific experiments) or better at understanding data and one’s own (biased) interpretation of it. An unusual resource on cognitive biases that I can recommend is the first third of HPMOR, a Harry Potter fanfiction that discusses cognitive biases in the context of an entertaining narrative. And Wikipedia offers a more or less comprehensive list of such biases: List of cognitive biases.

I found it very interesting to read through this list. Although some of it is kind of common sense, having a name for a phenomenon can make a difference – similarly, if I know the names of all the trees and plants, wandering through a forest is different from before, because I start to see things not only with my eyes.


Springtime for two-photon microscopy

Today, the fields and forests around Basel are full of flowers trying to disseminate their pollen. Fixed pollen are, apart from sub-diffraction beads and the Convallaria rhizome, one of the most commonly used test/reference samples for fluorescence microscopy. This is due both to their fine, spiky structures and to their strong autofluorescence. The scientific study of pollen (and other small things), palynology, provides us with elaborate protocols on how to collect, clean, stain and fix pollen (example 1, example 2) with glycerol jelly between two glass slides.

For two-photon microscopy, these protocols are not ideal, since the objectives typically have no correction for glass cover slips between the sample and the objective. Therefore I tested whether it would be possible to look at pollen using a much simpler protocol with a two-photon microscope.


Layer-wise decorrelation in deep-layered artificial neuronal networks

The most commonly used deep networks are purely feed-forward nets. The input is passed to layers 1, 2, 3, and at some point to the final layer (which can be 10, 100 or even 1000 layers away from the input). Each of the layers contains neurons that are activated differently by different inputs. Whereas activation patterns in earlier layers might reflect the similarity of the inputs, activation patterns in later layers mirror the similarity of the outputs. For example, a picture of an orange and a picture of a yellowish desert are similar in the input space, but very different with respect to the output of the network. But I want to know what happens in between. What does the transition look like? And how can it be quantified?

To answer this question, I performed a very simple analysis by comparing the activation patterns of each layer for a large set of different inputs. To compare the activations, I simply used the correlation coefficient between the activation patterns for each pair of inputs.
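A minimal sketch of this analysis, assuming the activations have already been extracted into one array of shape (n_inputs, n_units) per layer (the function and variable names are my own):

```python
import numpy as np

def layerwise_input_correlations(activations_by_layer):
    """Correlate the activation patterns of all input pairs, layer by layer.

    activations_by_layer: dict mapping layer name -> array (n_inputs, n_units).
    Returns a dict mapping layer name -> (n_inputs, n_inputs) correlation matrix.
    """
    return {
        layer: np.corrcoef(act)  # rows are inputs, so this correlates input pairs
        for layer, act in activations_by_layer.items()
    }
```

Tracking how these correlation matrices change from layer to layer quantifies the transition from input-driven to output-driven similarity.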


Understanding style transfer

‘Style transfer’ is a method based on deep networks that extracts the style of a painting or picture in order to transfer it to a second picture. For example, the style of a butterfly image (left) is transferred to the picture of a forest (middle; pictures by myself, style transfer with deepart.io):

[Figure: style transfer example – butterfly image (style), forest picture (content), and the combined result]

Early on, I was intrigued by these results: how is it possible to cleanly separate ‘style’ and ‘content’ and to mix them together as if they were independent channels? The seminal paper by Gatys et al., 2015 (link) referred to a mathematically defined optimization loss which was, however, not really self-explanatory. In this blog post, I will try to convey the intuitive step-by-step understanding that I was missing in the paper myself.
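As a first pointer, here is a minimal sketch of the core idea of Gatys et al.: the ‘style’ of a layer is captured by the Gram matrix of its feature maps, i.e., which feature channels co-activate, with the spatial positions summed out (normalization conventions vary between implementations):

```python
import numpy as np

def gram_matrix(features):
    """Style representation of one layer, following Gatys et al. (2015).

    features: array (n_channels, height, width) of feature-map activations.
    Returns an (n_channels, n_channels) matrix of channel co-activations;
    summing out the spatial positions is what discards the 'content'.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(features_generated, features_style):
    """Mean squared difference between the two Gram matrices of one layer."""
    g_gen, g_sty = gram_matrix(features_generated), gram_matrix(features_style)
    return np.mean((g_gen - g_sty) ** 2)
```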


Can two-photon scanning be too fast?

The following back-of-the-envelope calculations do not lead to any useful result, but you might be interested in reading through them if you want to get a better understanding of what happens during two-photon excitation microscopy.

The basic idea of two-photon microscopy is to direct so many photons onto a single confined location in the sample that two photons interact with a fluorophore roughly at the same time, leading to fluorescence. The confinement in time seems to be given by the duration of the laser pulse (ca. 50-500 fs). The confinement in space is in the best case given by the resolution limit (let’s say ca. 0.3 μm in xy and 1 μm in z).

However, since the laser beam is moving around, I wondered whether this might influence the excitation efficiency (spoiler: not really). I thought that this would be the case if the scanning speed in the sample were so high that the fs-pulse is stretched out over a distance greater than the lateral beam size (0.3 μm FWHM).

For normal 8 kHz resonant scanning, the maximum speed (at the center of the FOV) times the temporal pulse width is, assuming a large FOV (1 mm) and a laser pulse that is strongly dispersed through optics and tissue (FWHM = 500 fs):

Δx1 = vmax × Δt = 1 mm × π × 8 kHz × 500 fs = 0.01 nm

This is clearly below any critical limit. Is there anything faster? AOD scanning can run at 100 kHz (reference), although it cannot really scan a 1 mm FOV. TAG lenses are used as scanning devices for two-photon point scanning (reference) and for two-photon light-sheet microscopes (reference). They run sinusoidally at up to 1000 kHz. This is performed in the low-resolution direction (z) and usually covers only a few hundred microns, but even if it were to cover 1 mm, the spatial spread of the laser pulse would be

Δx1 = 1 mm × π × 1000 kHz × 500 fs ≈ 1.6 nm

This is already in the range of the size of a typical genetically expressed fluorophore (ca. 2 nm or a bit more for GFP), but clearly less than the resolution limit.

However, even if the infrared pulse were smeared over a couple of micrometers, the excitation efficiency would still not be decreased in reality. Why is this so? It can be explained by the requirement that the two photons arriving at the fluorophore have to be absorbed almost ‘simultaneously’. I was unable to find much data on ‘how simultaneous’ this must be, but the interaction window in time seems to be something like Δt < 1 fs (reference). What does this mean? It reduces the true Δx to a fraction of the above results:

Δx2 = 1 mm × π × 1000 kHz × 1 fs = 0.003 nm

Therefore, the smearing of the physical laser pulses (Δx1) does not really matter. What matters is the smearing of the temporal interaction window Δt over a spatial distance larger than the resolution limit (Δx2). This, however, would require a line scanning frequency in the GHz range – which will never, ever happen: the scan rate must always stay well below the repetition rate of the pulsed excitation, and the repetition rate in turn is limited to <500 MHz by fluorescence lifetimes of >1-3 ns. Case closed.
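For reference, a quick numeric sketch of these back-of-the-envelope numbers (for a sinusoidal scan across a field of view of width A, the peak speed is vmax = π × f × A):

```python
import math

def pulse_smear_nm(fov_m, line_rate_hz, window_s):
    """Spatial smear of a time window at the peak speed of a sinusoidal scan."""
    v_max = math.pi * line_rate_hz * fov_m  # peak scan speed in m/s
    return v_max * window_s * 1e9           # smear in nm

print(pulse_smear_nm(1e-3, 8e3, 500e-15))  # resonant scanner, pulse FWHM: ~0.01 nm
print(pulse_smear_nm(1e-3, 1e6, 500e-15))  # TAG lens, pulse FWHM: ~1.6 nm
print(pulse_smear_nm(1e-3, 1e6, 1e-15))    # TAG lens, interaction window: ~0.003 nm
```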


The basis of feature spaces in deep networks

In a new article on Distill, Olah et al. provide a very readable and useful summary of methods to look into the black box of deep networks by feature visualization. I had already spent some time with this topic before (link), but this review pointed me to a couple of interesting aspects that I had not noticed before. In the following, I will discuss one aspect of the article in more depth: whether a deep network encodes features on a single-neuron basis, or rather on a distributed, network basis.
