How does the brain work and how can we understand it? I want to make it a habit to report, at the end of each year, some of the thoughts about the brain that marked me most during the past twelve months – in the hope of advancing and structuring the progress in that part of my understanding of the brain that is not immediately reflected in journal publications. Enjoy the read! And check out previous year-end write-ups: 2018, 2019, 2020, 2021, 2022.
The purpose of neuronal spikes is not to represent …
It is a common goal of many neuroscientists to “decode” the activity patterns of the brain. But can the brain simply be understood by finding observables like visual objects or hand movements that are correlated with these activity patterns? Nobody would deny that neuronal activity is correlated with sensory inputs, motor outputs or internal states. But what do we gain by looking at these correlations? In other words, are these representations of the sensory, motor or inner world key to understanding the brain? In the following, I will argue against this “representational” view of the brain.
First, what do I mean by “representational”? In the following, a neuron represents, e.g., an external stimulus if its activity pattern is correlated with the occurrence of this stimulus (for a stricter definition, see below). For example, a neuron in visual cortex fires when a drifting grating with a 55° orientation is presented to the animal. One could say that “the neuron codes for 55° drifting gratings” or “the activity of the neuron represents 55° drifting gratings”.
To give another example, a neuron in hippocampal CA1 fires reliably when the animal is at a certain location (“the neuron codes for this and that place” or “the neuron represents this and that place”). This representational view is probably the most intuitive approach when it comes to characterizing the activity of the brain. However, simply representing the internal or external world is not enough. What else does a brain do?
Before trying to answer this question, let’s speculate why the “representational” view is so deeply ingrained in systems neuroscience. First, many thinkers and neuroscientists may have been biased towards a view of the brain as a passive receiving device because they shaped their thoughts while sitting passively at a desk in front of an empty sheet of paper. Second, the representational view is very intuitive: finding something observable, like a visual stimulus, directly reflected in neuronal activity seems to explain why this neuronal activity happened. Third, animals or humans in brain experiments are often told to hold still while waiting for a stimulus or cue; such an experimental design biases the experience of the subject towards passive receiving and therefore makes it more likely to find “receptive fields”, the classic form of representation in the brain. Torsten Wiesel and David Hubel, who pursued this line of thinking in the most compelling way in the early era of neuroscience, are probably the best examples of this approach:
Apart from that, artificial neural networks, like the attractor networks put forward by John Hopfield and, more recently, deep convolutional networks, are typically built around classification tasks, e.g. identifying handwritten numbers on an envelope or correctly categorizing the object seen in an image.
For a device that is supposed to perform such a task, it is clearly sufficient to represent the inputs and the possible outputs. But this is not the main task of a brain.
… but to act upon something
The brain, unlike a pattern classifier, is in a closed loop with its environment, being moved by it and acting upon it at the same time. It is not a passive observer, but an involved device, an organism. The brain is not designed to represent its environment. Instead, it has evolved to act upon its environment.
How could a model of the brain account for this fact? One approach to circumvent the limitation of the “representational” view of the brain is to focus on its generative abilities. The brain is clearly capable of forming mental models of the external world, of other minds, of physical laws, and also of abstract concepts. “Predictive processing” is an umbrella term for a variety of models, some of them applied down to the level of single neurons. In these models, predictions and prediction errors are not purely representational; instead, information is processed as a bi-directional interaction between sensory (bottom-up) and generative (top-down) content. However, these models consist of hierarchical networks that, again, perform classification tasks or other well-known toy problems from the machine learning community. Therefore, these models often appear to be representational models in disguise rather than something completely different.
A researcher who inspired me to think in a slightly different direction is Romain Brette, who published an opinion article in 2018 called “Is coding a relevant metaphor for the brain?” (In this article, Brette also defines more precisely the meaning of “representations” and its relationship to “coding” and “causality”; but since I do not think there is semantic agreement among neuroscientists, I decided not to use this more precise terminology here.) Brette’s article is first and foremost an argument against “coding” and “representations” as useful concepts for understanding the brain. Naturally, it is a bit less clear what alternative he is actually suggesting and how this alternative could be implemented or understood. (In a comment on Brette’s opinion piece, Baltieri and Buckley convincingly argue that a set of predictive processing models that use active inference is an interesting alternative to the representational view and is close to the “subjective physics” suggested by Brette.) Although some concepts in the opinion article remain a bit vague, Brette also touches upon the idea of understanding neurons not by analyzing their representational properties, but by analyzing their actions, i.e., their spikes:
“[C]oding variables are observables tied to the temporality of experiments, whereas spikes are timed actions that mediate coupling in a distributed dynamical system.” (Romain Brette)
From this paper and from posts on his blog, it seems to me that he is trying to circumvent the natural and historical representational bias of systems neuroscience by investigating systems that are simple and peripheral enough to be amenable to descriptions based not on representations, but on actions and reactions. One area of his research that I find particularly interesting is the study of single-celled organisms.
The basic unit of life is the single cell
During my PhD, I gradually went from large-scale calcium imaging to whole-cell recordings from single neurons. I was fascinated by the complexity of active conductances and the variability between neurons, and as I mentioned in last year’s write-up, I also got interested in complex signal processing in a single neuron.
Therefore, investigating unicellular organisms in order to understand neurons, and ultimately the brain, was not so far from my intuition anyway. And although neurons do not move their processes much and seem rather immobile and static at first glance, there are many single-celled organisms that exhibit quite intriguing behaviors and remind us of what single cells are capable of. Just watch this movie of a teardrop-shaped ciliate that uses a single long process to search for food:
And it’s just a single cell! Interestingly, these ciliates not only have an elaborate behavioral repertoire, but also use spikes to trigger these behaviors. (Check out this short review on “Cell Learning” to get an idea about what single-cell organisms are able to learn.)
In the beginning of life, the single cell was its basic unit, receiving information about the external world, using its own memory and acting upon the world. Of course, many cell types in humans are far more specialized and cannot really be considered basic units of life, since they serve a very specific task like forming part of a muscle or sitting in the germline. However, information-processing cells like immune cells or neurons in the brain, which live at the interface between the inputs and outputs of an organism, are much more likely to resemble these unicellular forms of life. They make most of their decisions on their own, based on local signals, without asking a higher entity what to do.
If we look at life with human eyes, the mind and the human body tell us to search for an understanding of neurons from the perspective of the whole brain (or the whole body). However, if we look at life from a greater distance, life shows its highest richness at the level of single cells, and it would therefore make sense to search for an understanding of neurons from the perspective of a single neuron.
The objective function of a single cell
What does it mean to understand a single cell? A very biologist-like approach would be to compile a list, as complete as possible, of all its behaviors and states when exposed to certain conditions. For example, a hypothetical cell senses a higher concentration of nutrients at one of its distal processes and therefore moves in this direction. Conversely, if it senses a lower concentration in this direction, it moves in the opposite direction. If the nutrient level is high overall, it does not move at all. If it is low overall, it tries to move away in a random direction. This is a biologist’s list of behaviors.
A more compressed, and therefore in my opinion better, understanding would be to write down the “goals” of this single cell. Or, to borrow an expression from mathematicians and machine learning people, to write down the objective function (or loss function) of the single cell. For the above-mentioned hypothetical cell, this would simply be: move to places with high nutrient concentrations.
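To make the contrast concrete, the biologist's list of behaviors and the compressed objective can be put side by side in code. This is only a toy sketch of the hypothetical cell above; the thresholds, units and function names are my own assumptions:

```python
HIGH, LOW = 0.8, 0.2  # hypothetical thresholds for "overall high/low" nutrient levels

def behavior_list(local, distal):
    """The biologist's view: an explicit list of stimulus-response rules.

    `local` is the nutrient concentration at the cell body, `distal` the
    concentration sensed at a distal process (both in arbitrary units).
    """
    if local > HIGH:
        return "stay"          # nutrient level overall high: do not move
    if local < LOW and distal < LOW:
        return "move_random"   # overall low: try a random direction
    if distal > local:
        return "move_toward"   # higher concentration at the process: follow it
    return "move_away"         # lower concentration there: go the other way

def objective(nutrient_at_position):
    """The compressed view: a single quantity the rules above tend to increase."""
    return nutrient_at_position
```

Each branch of `behavior_list` can be read as a local strategy for climbing the objective; the compressed description is shorter and explains why the rules look the way they do.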
If we shift back from ciliates in a pond to neurons in the brain, this leads to a very obvious, but also surprisingly difficult-to-answer question: what is the objective function of a single neuron?
Of course, it is not clear whether it is reasonable to assume that the objective function or optimization goal is set at the single-cell level. But given that cognition is most likely a self-organized mess that is only coarsely orchestrated by reward and other feedback mechanisms, I would tend to say that it is not unreasonable. (But see below for more detailed criticism.) Therefore, the question is: what are the goals of a single neuron? What does it want?
Let’s assume that this question is a good one. How can it be answered? I see three possible approaches:
- To investigate the evolution of self-sustaining cells into neurons of the central nervous system. Since objective functions of unicellular organisms seem rather accessible, while objective functions of neurons seem difficult to guess, one could investigate intermediate stages and try to understand the evolutionary development.
- To understand the development of the brain and the rules that govern the migration and initial wiring of a single neuron. This would possibly allow one to compile a list of developmental behaviors for a certain neuronal type and to extract an objective function from it. This would be an objective function not of regular operation, but of development. That’s exactly what developmental neurobiology is actually doing.
- To observe the full local environment of a neuron (for example, its electrical inputs) and to try to extract rules that govern changes in its behavior (for example, synaptic strength or output spikes).
The third approach is probably the most difficult one because it is currently technically impossible to observe a significant fraction of the inputs of a neuron together with its internal dynamics. There have been many studies on the plasticity of a few synapses under artificial conditions in slices using paired recordings, but it is currently impossible to perform these paired recordings in a behaving animal. Even in slices, these experiments are daunting given the myriad of different neuronal cell types and the expected variability of the results. Effects like long-term potentiation, long-term depression, short-term facilitation, short-term depression, spike-timing dependent plasticity, immediate early genes, receptor trafficking and neuromodulatory effects are just some of the many processes that are essential for understanding what is happening to a single cell.
A different approach that still resonates with the idea of single cells as actors was put forward by Lansdell and Kording in 2018. Their idea, as I understand it, is that a neuron could try to estimate its own causal effect, that is, the effect of its actions. This is possible because its actions are a discontinuous function of its state, the membrane potential. If external conditions are almost identical in two situations but the neuron fires a spike in only one of them, the neuron could extract from the input it subsequently receives the effect of this single spike. The neuron would thereby measure the causal impact of its actions on itself. This idea is very close to that of a unicellular organism that immediately feels the effects of its own actions.
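The gist of this spike-discontinuity idea can be sketched in a few lines. This is a toy simulation under my own assumptions (a hypothetical true effect, Gaussian drive and noise), not the actual Lansdell and Kording implementation: feedback on near-threshold trials with a spike is compared to feedback on near-threshold trials without one, and the difference estimates the causal effect of a single spike.

```python
import random

random.seed(0)

THRESHOLD = 1.0    # spike threshold on the summed input
TRUE_EFFECT = 0.5  # hypothetical causal effect of one spike on the feedback
WINDOW = 0.2       # how close to threshold a trial must be to count as "marginal"

def run_trial():
    """One trial: fluctuating drive; a spike adds TRUE_EFFECT to the feedback."""
    drive = random.gauss(1.0, 0.3)       # stands in for the membrane potential
    spiked = drive >= THRESHOLD          # the action is a discontinuous function of the state
    feedback = 0.1 * drive + (TRUE_EFFECT if spiked else 0.0) + random.gauss(0.0, 0.05)
    return drive, spiked, feedback

def estimate_causal_effect(n_trials=20000):
    """Compare feedback just above vs. just below threshold (regression-discontinuity style)."""
    with_spike, without_spike = [], []
    for _ in range(n_trials):
        drive, spiked, feedback = run_trial()
        if abs(drive - THRESHOLD) < WINDOW:   # keep only near-threshold trials
            (with_spike if spiked else without_spike).append(feedback)
    return sum(with_spike) / len(with_spike) - sum(without_spike) / len(without_spike)
```

Because the two groups of trials have almost identical drive, the difference isolates the spike's own contribution to the feedback, even though feedback is also correlated with the drive itself.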
But what could be the objective function of such a neuron? For example, the goal of a neuron that receives recurrent feedback could be to find a regime with a certain level of recurrent excitation. Recurrence could be measured by estimating the effect of the neuron’s spikes on the feedback it receives, possibly also in a dendrite-specific manner. I could imagine that objective functions almost as simple as that could make a neural network work, once it is embedded in a closed-loop environment with sensory input and motor output. Also, I think that this way of thinking could be close to ideas put forward by e.g. Sophie Denève about predictive processing in spike-based balanced recurrent networks (for example, check out this proposal), which heavily rely on the self-organizing properties of adaptive recurrent circuits with a couple of plasticity rules.
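A neuron-intrinsic objective of this kind could be written down as a homeostatic set-point rule. The following is an entirely hypothetical sketch: measured recurrence is assumed to scale linearly with the neuron's gain, and the gain is nudged until the recurrence reaches a target value.

```python
TARGET = 0.5         # desired level of self-generated recurrent excitation
LEARNING_RATE = 0.1  # how strongly the neuron corrects deviations from the target
COUPLING = 0.4       # assumed linear coupling between gain and measured recurrence

def update_gain(gain, measured_recurrence):
    """Single-neuron objective: reduce |TARGET - measured_recurrence|."""
    return gain + LEARNING_RATE * (TARGET - measured_recurrence)

def simulate(gain=1.0, steps=200):
    """Toy closed loop: the neuron repeatedly measures and corrects its recurrence."""
    for _ in range(steps):
        measured = COUPLING * gain   # placeholder for a spike-based causal estimate
        gain = update_gain(gain, measured)
    return gain, COUPLING * gain
```

In this linear toy loop the gain converges to TARGET / COUPLING, i.e. the neuron settles at its set point without reference to any global loss function; in a real network, the `measured` term would have to come from something like the causal estimate above.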
An additional strength of the idea of using spikes to measure a neuron’s causal impact is that it could explain why neurons fire stochastically: this way, they acquire information about their causal impact that would remain hidden in a regime of deterministic firing.
Clearly, these ideas need more refinement and restructuring, and it is obvious that some of the feasible experiments (spike-timing dependent plasticity, behavioral time scale plasticity) have been done already. But I still like the idea of reframing these experiments by analyzing a potential objective function of a single neuron.
Beyond that, it would be interesting to try to understand self-organized systems of artificial neurons that are governed not by global loss functions, but by neuron-intrinsic and possibly evolving objective functions. I’m really looking forward to seeing theoretical work and simulations that focus not on the behavior and objectives of the whole network, but on the behavior and objectives of single units, of single cells.
Unicellular organisms and neurons are both single cells. Left: Unicellular ciliate Lacrymaria olor, adapted from Wikimedia Commons (CC 2.0). Right: Two-photon image of hippocampal pyramidal neurons in mice.
Appendix: the devil’s advocate
Criticism 1: Representations and coding seem to be still the best way to think about neuronal activity
Although I find the ideas above conceptually appealing, it is clear from past research that most of our understanding of the brain comes from correlating observed brain activity with sensory or motor information, because this allows us to conclude when certain brain areas or neuronal types are active and what happens if they fail. This functional anatomy of the brain might not reveal all of the brain’s design principles, but it could be necessary for making an educated guess about those principles.
In addition, our human languages are based on concepts (“you”, “house”, “hope”, “cold”) rather than on dynamical processes. For example, linguistics, the science of language, has coined the term ‘signified’, which describes the content of such a concept as a central element of language. From this converging view of both the naive usage and the scientific study of languages, it seems likely to me that high- and low-level representations indeed exist and are accessible in the human mind – probably as an evolutionary byproduct of mental processes that were more directly connected to actions or sensations. It is still interesting that some sensory modalities are more vaguely represented in thought and language (olfaction) than others (vision).
Criticism 2: It is so far technically impossible to record the perspective of a single neuron
It seems relatively easy, although not trivial, to monitor a unicellular ciliate and its environment. However, the environment of a neuron consists of all of its synaptic partners, which can number in the thousands, plus any other source of electrical or chemical neuromodulation. A single neuron itself is spread over such a large volume that it is very far from being amenable to any electrophysiological or imaging technique that provides sufficient resolution in time and space.
The best candidate technique, voltage imaging of both soma and dendrites, cannot be used over periods longer than minutes due to bleaching, suffers from low signal-to-noise ratios and difficult calibration, and requires imaging speeds and photon yields that are impossible to achieve with any imaging technique and fluorescence sensor that I know of.
But researchers have proven before that imperfect methods can be used successfully. I’m curious about upcoming technical developments and what they will bring.
Criticism 3: Regarding neurons as actors is a case of anthropomorphism and ignores the interactions in complex systems that can lead to emergence
If I scrutinize my own ideas, I realize that one reason why I like the idea of neurons as actors is that it is simple and connects to the human desire for an intuitive, empathic understanding of things. For example, we tend to personify even the smallest animals like ants, bees or ciliates in movies, children’s books or YouTube videos. We tend to personify items of our households like a cup, a table, or an old house; or our plants in the garden; or the ocean, the wind, a dead tree. On the other hand, I feel that we are largely unable to feel the same empathy with distributed systems like the mycelium of a fungus that spreads over kilometers in a forest, or the distributed power grid, or a large fish swarm, or a deep artificial neural network. We can feel amazement, but no empathy. I can imagine being a single water molecule that vibrates, rotates and is tossed around by the Brownian motion of its environment and by the convection of the ocean, but I cannot imagine being the emergent phenomenon of water waves breaking at the shoreline, which form the complex environment of this single water molecule.
I think there is a tendency in human thinking to avoid complexity and to impose a simple human-like personality onto all actors. Therefore, we should observe ourselves and try to resist the urge to personify parts of complex systems when they cannot actually be disentangled from their environment. It is easy to tell a story about specific neurons as important actors, about what they want to do and how they feel the immediate feedback of their actions, as we do when seizing an apple; but it is difficult to tell a story about complexity, which goes over our heads. Yet maybe this, which we can hardly understand, is the truth, and that is good to keep in mind.