Point spread functions

One way to characterize the quality of a microscope is to measure its point spread function (PSF), that is, the image created by a point source (for example a fluorescent bead smaller than the expected size of the PSF, embedded in agarose). I recently spent quite some time aligning my multiphoton microscope, and for various reasons it took me not just a few hours, but several days (and nights). In the end, the PSF was again symmetric, small, sharp and nice, but the way there was paved with all varieties of bad, strange and extremely ugly PSFs, sometimes at points during the alignment where I did not expect them.

In scientific publications, one only gets to see the nicest, symmetric, ‘typical’ PSFs, so I want to put some really bad PSFs here. Not all of these PSFs were as bad as they look, because I adjusted the contrast of the gifs to better show the shape of the PSF. All gifs are simple z-stacks acquired with variable z-spacing on the same day on the very same microscope, with only minor and rarely predictable changes to the beam path.
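
As an aside, once a z-stack of a single, isolated bead has been acquired, the axial extent of the PSF can be quantified by fitting a Gaussian to the intensity profile along z. Here is a minimal Matlab sketch of this step (my own illustration, not part of the original measurement; it assumes a background-subtracted stack variable ‘stack’ containing one bead, and it uses fit() from the Curve Fitting Toolbox):

    % Minimal sketch: axial FWHM of a single bead from a z-stack
    % 'stack' is assumed to be an [x y z] array with one isolated,
    % background-subtracted bead; dz is the assumed z-spacing in micrometers
    dz = 0.5;
    profile = squeeze(max(max(stack, [], 1), [], 2));    % brightest pixel per z-plane
    z = (0:numel(profile)-1)' * dz;
    gaussEqn = 'a*exp(-((x-b)^2)/(2*c^2))';               % 1D Gaussian model
    f = fit(z, double(profile(:)), gaussEqn, ...
            'StartPoint', [double(max(profile)), z(end)/2, 1]);
    fwhm = 2*sqrt(2*log(2)) * f.c;                        % convert sigma to FWHM
    fprintf('Axial FWHM: %.2f um\n', fwhm);

The same fit along x and y of the central plane gives the lateral FWHM.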

Here comes the classical ‘stone in the water’ PSF, thrown in from the lower right:

[gif: theCounterDrifter]

This one is thrown from the top right:

[gif: theSaturnWithManyRings]

This is the ‘bathing in a sea of other beads’ PSF. Or rather swimming than bathing, because there is a clearly visible upwards direction. This happens when there are too many beads in your agarose:

[gif: theSwimmerInASeaOfBeads]

This astigmatic PSF, on the other hand, is rather indecisive, first going to the right and then upwards.

[gif: theUndeciciseWanderer2]

Here is an even more advanced, indecisive right-then-up PSF:

[gif: theUndeciciseWanderer]

This one is so undecided that it almost splits into two halves. Let’s call it the bifurcating PSF:

[gif: theAsymmetricHalo]

This is the banana-PSF, coming from the left and going back again:

[gif: theHalfBanana]

And finally, you would not guess this to be a PSF: here comes the flying-eagle PSF. I created it simply by over-tightening one of the screws that hold the dichroic beam splitter; the back aperture of the objective still seemed to be properly filled:

[gif: theFlyingEagle_pressureOnDichroic]

Amazing!

Sidenote: in one or two planes of some of these pictures, one can see a diagonal striping pattern. This comes from the pulsed laser, which was unstable during these days, with its output modulated on the microsecond timescale. Fortunately, nowadays I have a nice PSF and a stable laser again …


Aphantasia

A couple of days ago, during a hiking tour close to Luzern, I met a medical doctor from Israel who decided to join me for the rest of the day. After some time, when I made fun of her for taking so many pictures, she pointed out that she had a rare neurological condition that prevented her from recalling images, and that she did not see a picture of anything in her mind when she closed her eyes. Continue reading


EODs for kHz imaging

J. Schneider, S. Hell and colleagues recently published a paper on STED microscopy that uses EODs (electro-optical deflectors) to scan 512 x 512 pixels at frame rates of 1000 Hz. Compared to AODs, EODs offer the customer-friendly advantage of not dispersing the spectral components of the laser beam. Their main weakness is a small deflection angle. In her thesis from 2012, Jale Schneider gives an overview of EOD manufacturers:

Company                              Deflection angle per kV [mrad]   Aperture [mm]   Capacitance [pF]
Conoptics Inc., USA                  7.8                              2.5             180
NTT Photonics Laboratories, Japan    150                              0.5             1000-2000
Leysop Ltd., GB                      1.5                              3               50
Leysop Ltd., GB                      3                                2               50
Leysop Ltd., GB                      5                                1               50
Quantum Technologies, USA            3.5                              3               100
AdvR Inc., USA                       24                               < 0.8           < 100

For comparison, resonant scanners have an aperture of ~5 mm and a deflection angle of 90-260 mrad (depending on the resonant frequency).

The capacitance of the crystal listed in the table comes into play indirectly: a smaller capacitance demands less from the high-voltage driver system of the EOD. In the appendix of her thesis, Jale Schneider also gives an overview of commercial high-voltage driver systems, although for her work she custom-built one.
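
As a rough illustration of why the capacitance matters (my own numbers, not from the thesis): slewing a crystal of capacitance C through a voltage step ΔV within a time Δt requires a drive current of roughly I = C·ΔV/Δt. For the 180 pF Conoptics crystal and a 1 kV step within 1 μs, that is already I ≈ 180 pF × 1 kV / 1 μs ≈ 0.18 A; the 1000-2000 pF NTT crystal would need about five to ten times more.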

Maybe at some point this will become interesting for voltage imaging in dendrites. Assume one could drive the Conoptics EOD with 5 kV, and let’s further assume a 60x objective (focal length of ca. 3 mm) and a scan lens/tube lens system with a magnification of 3x. Then one has a 7.5 mm diameter beam at the back focal plane, and a FOV in the sample of ca. (3 mm)/3 * tan(5 * 0.0078 rad) ≈ 40 μm. Such a system would be an alternative to the AOD-based approach, which was used, for instance, by Arthur Konnerth’s lab for kHz calcium imaging of dendritic spines (250 × 80 pixels, field of view 28 × 9 μm, 40x objective).
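
For transparency, here is the same back-of-the-envelope estimate as a small Matlab snippet (the drive voltage, objective focal length and relay magnification are my assumptions from the paragraph above, not values from the paper):

    % Back-of-the-envelope FOV estimate for an EOD-scanned two-photon system
    theta_per_kV = 7.8e-3;   % Conoptics deflection angle [rad per kV], from the table
    U     = 5;               % assumed drive voltage [kV]
    f_obj = 3e-3;            % assumed focal length of a 60x objective [m]
    M     = 3;               % assumed scan lens / tube lens magnification
    % the 3x relay expands the beam to 7.5 mm and reduces the scan angle by the same factor
    fov = f_obj/M * tan(U * theta_per_kV);            % small-angle estimate, as in the text
    fprintf('Estimated FOV: %.0f um\n', fov * 1e6);   % prints ca. 40 um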

[Update] It seems that this estimate is only approximately correct. As described in the thesis cited above, the Conoptics EOD can use the full 2.5 mm aperture only for a non-deflected beam; even for a ±7.2 mrad deflection, the maximum unclipped beam diameter is only 1.5 mm.


A Pockels cell with a broken crystal

In a 2P microscope, Pockels cells are employed for fast control of the laser beam intensity. I use mine to switch off the laser beam during the turnarounds of the resonant scanner and between two frames that do not immediately follow each other, and to adjust the beam intensity when scanning in z for multi-plane imaging. In short, the Pockels cell is quite essential for me. Alternatively, people use mechanical shields to blank the beam during the turnaround, or slow motorized rotating λ/2-plates to adjust the laser intensity on a timescale of seconds.

Recently, I found out what a defective Pockels cell can look like. For comparison, the first video shows a properly working Pockels cell, although the level of the refractive index-matching liquid inside might be a little low; the air-liquid interface is clearly visible at some points.

In the second video, the crystal inside the Pockels cell is broken and therefore visible. This was immediately obvious when looking at the laser beam, which was strongly diffracted after passing through the Pockels cell.

This defect most likely occurred when the cell driver remained switched on for an extended period of time with the offset voltage set to a rather high value. In other words, it was caused by the permanent voltage applied to the crystal, not by the pulsed laser intensity.


Finding the engram

In their Nature Reviews Neuroscience article ‘Finding the engram’, Sheena Josselyn, Stefan Köhler and Paul Frankland discuss the recent developments, mainly in mouse circuit neuroscience, that contributed to localizing memories at the cellular level, the so-called engram. They accumulate the evidence from past years, mostly from studies using fear or reward learning, showing that one can identify, modify, disturb and also cross-link cellular ensembles that are necessary and sufficient for the recall of memories. For example, it is fascinating that one can express an optogenetic actuator such as channelrhodopsin specifically in neurons that were activated during a defined memory task.

One caveat the authors mention is that this has only been studied thoroughly for avoidance and reward learning, i.e. with binary behavioral tasks. This might mask imprecisions and problems that would become obvious when trying to write or recall more difficult memory tasks.

Looking at this corpus of research, one point of view could be that the memory problem is solved: not only can the memory-forming cells be found, but they can also be manipulated to demonstrate their causal involvement. Like every solved problem (given that it is really solved), it immediately becomes boring; or at least I am tempted to look at the weak points and missing links, in order to be able to say: ok, this problem is not solved at all.

The main weak points that I can see:

  1. The temporal sequence in which those ‘engram neurons’ are activated is typically lost when labels or stimulators such as opsins are expressed in the respective neurons. Recall therefore results in a tattered regeneration of a once temporally ordered pattern, and it is unlikely that nothing is lost in such a recall.
  2. The molecular mechanisms of memory also remain to be elucidated.
  3. It is not understood in detail how memory recall and associated processes like pattern completion work; this is what happens on a timescale of maybe 100-500 ms. The ‘engram’ is sometimes treated as something static and stable, like a binary pattern on a hard drive. But in reality, memory recall is a process, and this process is still not understood and is indeed difficult to observe, because it is not known how many neurons and brain regions would have to be observed, and at which timescale.

But despite these remaining open questions, it has to be acknowledged that the current state of research has already answered some interesting questions. Maybe the authors share this view. The title of their review, ‘Finding the engram’, reminded me of the last volume of ‘In Search of Lost Time’, called ‘Finding Time Again’, in which the protagonist of the novel indeed finds a way to access and work with his precious memories, about whose loss he has written this huge tome.


A simple non-graphical user interface in Matlab: keyboard callback functions

I’m not the first person to be annoyed by Matlab’s GUIDE (a tool used to generate GUIs that, unfortunately, are difficult to understand and painful to modify afterwards). Some months ago, I was looking for a way to implement a lightweight user interface for analyzing big data sets, in particular for marking ROIs in calcium imaging movies. I found a simple approach that does not use buttons or any other graphical user interface elements, but relies only on keyboard callback functions. Most likely, this programming style is useful for other tasks as well.
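
To give the flavor of the approach, here is a minimal sketch (my own illustration, not the code from this post): the figure stores its state with setappdata, and a single KeyPressFcn callback dispatches on the pressed key; roipoly from the Image Processing Toolbox stands in for whatever ROI-drawing routine one prefers.

    % Minimal sketch of a keyboard-only user interface (save as keyboardUiDemo.m)
    function keyboardUiDemo()
        img = rand(200);                           % placeholder for an imaging frame
        fig = figure('Name', 'ROI tool (keyboard only)');
        imagesc(img); axis image; colormap gray;
        setappdata(fig, 'roiList', {});            % state lives in the figure
        set(fig, 'KeyPressFcn', @onKeyPress);      % all interaction via the keyboard
    end

    function onKeyPress(fig, evt)
        rois = getappdata(fig, 'roiList');
        switch evt.Key
            case 'r'                               % 'r': draw a new polygonal ROI
                rois{end+1} = roipoly;
            case 'd'                               % 'd': delete the last ROI
                if ~isempty(rois), rois(end) = []; end
            case 'q'                               % 'q': close the tool
                close(fig); return;
        end
        setappdata(fig, 'roiList', rois);
    end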

Continue reading


A network of pointers


Clouds, seen from above. © The author, 2015.

When thinking about the way we think, it makes sense to begin at the point that our thoughts themselves use as a starting point: sensory experience. To give an example, it is easy to imagine how a visual experience is categorized and analyzed across several cortical stations into more and more complex and fine-grained representations, up to the representation of, e.g., a human person in the infero-temporal (IT) cortex of the ventral stream of visual processing. This high-level representation could be carried by a single neuron, although this seems unlikely, so let’s rather say that the representation consists of a cluster of neurons.

But hold on. I can imagine that this [cluster of] neuron[s] becomes active for the task of passive sensory categorization of this person. But how could the reverse be accomplished, going from the bare activation of this neuron to the vivid internal imagination of the person? When I think of my brother (i.e. activate the assumed neuron that represents my brother), sensory-like impressions and memories quickly come to my mind, which I can follow and deepen to a rather arbitrary extent. Where does this come from?

I would naively assume that the bare activity of a neuron in my inferotemporal cortex has nothing sensory about it, no more than a charging and discharging capacitor in an electrical circuit (given that neurons use a code as simple as most people suppose). To me, the most obvious way sensory quality could enter the game is the same way it got there in the first place, when it helped to generate the concept of this person; only that this path would have to be walked in the opposite direction through the hierarchy of the cortical architecture of the visual system, thereby reconstructing the individual components of the visual percept, which might be located in V1. If we pursue this line of thought, it means that processing paths which were once developed and strengthened by sensory experience would at the same time have to build in a way back, so that the concept can later travel from the high-level representation back to the low-level sensory components. Two problems are evident at this point. First, from biological intuition, one would not expect exact reciprocal connections to form, since neurons are by construction processing units that operate unidirectionally. Second, this concept would lead to a circular way of signal processing: it is like having a lot of pointers pointing to other pointers, without anything real to which they point in the end. Still, a pointer to V1, an unconsciously existing neuronal layer close to perception, seems rather better equipped with sensory quality than a pointer to any other meaningless, empty neuron in IT. I’m sure that people in 200 years will laugh about these tentative attempts to think about representations in the brain, but I would like to meet the woman or man who is in a position to do so now …

In the space of subjective consciousness, this process is much easier to imagine and to describe. From an abstract term that a stranger tells me, like the name of my brother, or maybe the expression ‘cumulo-nimbus clouds’, a vivid image of this cloud formation arises automatically; but I could also reject and suppress this imagination; or I could deliberately pursue this picture, searching my memories, rendering them more concrete and sensory with every second I spend thinking about them, maybe even taking a piece of paper to draw a physical picture of the image in my head. In any case, this supposed reciprocal connection would therefore not be a purely self-acting cascade back to the original sensory representation, triggered by the activation of a higher-level neuronal representation, but a loose option, a subthreshold activation, which I can pursue further with the reinforcing focus of my attention, but which I cannot follow to any imaginable level of detail: at the latest when trying to draw the cloud, I would realize that I do not exactly know what these clouds (or clouds in general) look like in reality, and that I can recall nothing but a vague impression. Translated into the idea of reciprocal connections for the recall of sensory experience, these reciprocal connections would be asymmetric, and the forward mapping would certainly not be invertible, or only in the vague approximation that I also experience when recalling the shape of a cumulo-nimbus cloud.

Of course, even if this concept were true, it would only be part of what happens in the brain. Besides the sensory recall, the emotional valence also comes into play, which is unlikely to be coded by sensory pathways. In addition, other abstract associations and memories will occur, like the recall of the cloud picture at the top of this post at the moment I mentioned cumulo-nimbus clouds further down in the text. Each of those associations, however, might in turn cast a vague sensory shadow in the earlier sensory layers …

What I like about this concept is the use of pointers. There is a slight but maybe interesting difference between thinking of the brain as an associative network and thinking of it as a network of pointers. In an associative network, objects or representations are linked to each other; in a network of pointers, single objects have no meaning by themselves, and only the location or address to which they point makes them meaningful. The second description, unlike the ‘associative network’ description, immediately raises the question: where, during recall, does the accompanying sensory experience come from?


The unsolved problems of neuroscience

The Neuroskeptic blog recently mentioned a viewpoint paper which includes a list of the solved and unsolved problems of neuroscience. I’m probably not yet as deep into neuroscience as the author of the paper, but I find it tempting to sharpen my own mind by commenting on his list. He categorizes the problems into those that are solved or soon will be (A), those that we should be able to solve in the next 50 years (B), those that can be solved in principle “but who knows when” (C), those that we might never solve (D), and meta-questions (E). I’ll pick only some of the list items. Here are three items which are actually one topic:

How can we image a live brain of 100,000 neurons at cellular and millisecond resolution? (A=solved)
How can we image a live mouse brain at cellular and millisecond resolution? (B=solved in 50 years)
How can we image a live human brain at cellular and millisecond resolution? (C=can be solved in principle)

Continue reading


Gain and photons per pixel

In 2010, Labrigger wrote about how to measure the gain of an imaging system. As mentioned there in the comments, this was discussed in more detail by James Pawley on the confocal microscopy mailing list quite recently (following a question I had asked on the list), and I was motivated to try this out myself for my (2P point-scanning) system.

Briefly, as described in more detail by Labrigger, you take the mean and the variance of the image and divide the variance by the mean to get the gain (i.e. the number of pixel-value counts that one detected photon translates into). This value should not be lower than 1. The gain then allows one to calculate the number of photons that arrive at a single pixel.

Instead of using a homogeneous frame as proposed by Labrigger, I used a simple rhod-2-injected brain sample, and I calculated mean and variance not spatially over the image, but temporally for each pixel, thus having a) a realistic sample and b) more statistics. Plotting variance against mean allows one to fit a straight line, the slope of which is the gain.
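
In Matlab, this boils down to only a few lines. A minimal sketch (my own illustration; it assumes the movie is loaded as a [512 x 512 x nFrames] array mov of raw pixel values with the digitizer offset already subtracted):

    % Estimate the gain from the temporal statistics of each pixel
    mov = double(mov);           % make sure the data are floating point
    m = mean(mov, 3);            % temporal mean of each pixel
    v = var(mov, 0, 3);          % temporal variance of each pixel
    % Poisson assumption: variance = gain * mean, so fit a line through the origin
    gain = m(:) \ v(:);          % least-squares slope
    photonsPerPixel = m / gain;  % estimated number of photons per pixel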


Fitting variance against mean for all pixels of a 512×512 image, with a resulting gain of ~45. System: a standard Hamamatsu PMT, a Femto DHPCA preamplifier, and an Alazar ATS-9440 DAQ board with the input range set to 0.2 V (0.2 V translating into 13 bit).

The calculated gain (ca. 45) is fine … although the real gain might be lower due to multiplicative noise, which undermines the assumption of pure Poisson noise (for a discussion, see the above-mentioned post by James Pawley). Additionally, this is a living sample, with moving fluorophores adding variability over time. Still, I like the idea of using a simple movie of a sample as I use it all day.

The gain then allows one to calculate the number of photons per pixel (see again Labrigger’s post for more details). Here it is:


Estimate of numbers of photons per pixel; excerpt of the 512×512 image series used for generating the above plot. The bright (red) spots are blood vessels and not of interest.

It looks like I’m getting around 2-6 photons per pixel for cytoplasm (rhod-2 staining). I was sampling at 80 MHz and binning pixels 8x (4096 sampling points per line mapped to 512 pixels). For real experiments, I want to go to larger images with the binning reduced to 2x, which at the same time reduces the number of photons per pixel to less than 3. But there would still be room left to increase the laser power and therefore the photon yield. Additionally, as mentioned above, I may have overestimated the gain by a factor that might be as large as 2, which would increase the real number of detected photons per pixel by the same factor. In the end, this does not really matter if the real data look bad/nice anyway; but it’s nice to count in physical numbers.


Colormaps (without colorspace theory)

The Labrigger blog keeps posting links to all kinds of colormaps, so I tried out some of them. Being partially colorblind, I do not like the default colormaps of, e.g., Matlab. Here are some noisy data, with two different scalings for each colormap.


A: Matlab default until recently (jet). B: The default variant of CubeHelix. C: Colorbrewer ‘diverging’. D: Colorbrewer ‘sequential’ 2. E: Grayscale.
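
For reference, a comparison like this can be generated with a few lines of Matlab (a rough sketch with made-up data; only built-in colormaps are shown here, since CubeHelix and the Colorbrewer maps are not part of Matlab and would have to be downloaded, e.g. from the File Exchange):

    % Compare two built-in colormaps on the same smoothed noisy test data
    data = conv2(randn(100), ones(5)/25, 'same');    % made-up noisy data
    figure;
    subplot(1,2,1); imagesc(data); colormap(gca, jet);  colorbar; title('jet');
    subplot(1,2,2); imagesc(data); colormap(gca, gray); colorbar; title('gray');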

Actually, I like none of them. I find data much more accessible when they are presented as 1D plots, unless they are smooth and beautiful. Since I’m partially colorblind, I especially do not like A and C, which have strange transitions in the middle of the colorscale. I never liked the green and turquoise part of “jet”. B looks like tree bark to me, to be honest. D and E are both fine for me, but I’d prefer D because of its more beautiful colors.
