Annual report of my intuition about the brain (2020)

How does the brain work and how can we understand it? I want to make it a habit to report, at the end of each year, some of the thoughts about the brain that marked me most during the past twelve months – in the hope of advancing and structuring the part of my understanding of the brain that is not immediately reflected in journal publications. Enjoy the read! And check out the previous year-end write-ups: 2018, 2019, 2020, 2021, 2022.

Doing experiments in neuroscience means opening Pandora’s box. On a daily basis, you are confronted with the vexing fact that the outcome of experiments is not just slightly more complex and variable than any mental model you could come up with, but vastly so. It is rewarding and soothing to read published stories about scientific findings, but they often become stories only because the things that did not fit were omitted or glossed over. This is understandable to some extent, since nobody wants to read 379 side-notes on anecdotal and potentially confusing observations. But it leads to a striking gap between headlines with clear messages and the feeling of being overwhelmed by complexity when doing experiments or going through a raw dataset. It is possible to tame this complexity with nested analysis pipelines (‘source’ extraction, unsupervised clustering, inclusion criteria, dimensionality reduction, etc.) and to restore simplicity. But the confusion often returns when going back to the raw, unreduced data, because they contain so much more complexity.

In this year’s write-up, I want to address this complexity of the brain from the perspective of self-organized systems, and I will try to point out lines of research that can, in my opinion, contribute to an understanding of these systems in the brain.

Complex systems

Two years ago, I wrote about the limitations of the human mind when dealing with the brain’s complexity, and the reasons behind these limitations (Entanglement of temporal and spatial scales in the brain but not in the mind). This year again, I have been thinking quite a bit about these issues. During summer, I noticed the novel Jurassic Park on a friend’s bookshelf, and my friend, to my surprise, recommended it to me. The book, more so than the movie, tells the story of how a complex system – the Jurassic Park – cannot be controlled, because of unexpected interactions among system components that were thought to be separated by design. This perspective is represented in the book by a smart-ass physicist who works on chaos theory. He not only predicts from the start that everything will go downhill, but also delivers lengthy rants about the hubris of men who think they can control complexity.

This threw me back to the days when I studied physics myself, with a focus on complex systems: non-linear dynamics, non-equilibrium thermodynamics, chaos control and biophysics. So, with some years in neuroscience behind me, I went back to the theory of complex systems. I started with a very accessible book on the topic by Melanie Mitchell: Complexity: A Guided Tour. Melanie Mitchell is herself a researcher in complexity science; she did her PhD work with Douglas Hofstadter, famous for his book Gödel, Escher, Bach. Mitchell summarizes the history and ideas of her field in a refreshingly modest and self-critical way, which I can only recommend. As a bonus, the book was published in 2009, just before deep learning emerged as a dominant idea – one that suppressed and overshadowed many other interesting lines of thought.

For example, Mitchell brings up John von Neumann’s idea of self-organization in cellular automata, Douglas Hofstadter’s work, Alan Turing’s idea of self-organization in simple reaction-diffusion systems, the cybernetics movement around Norbert Wiener, Hermann Haken’s concept of Synergetics, and Stephen Wolfram’s A New Kind of Science.

Unfortunately, while many of these ideas about complex systems were intellectually inspiring and certainly influenced many people, they often did not live up to their promise. They did not have a significant real-world impact outside of the philosophical realm, in contrast to, e.g., the invention of semiconductors or backpropagation. At both extremes of the spectrum, things were a bit detached from reality. At one extreme, ideas around self-organization like the concept of autopoiesis were riddled with ill-defined notions and connected to ideas like “emergence”, “cognition” or “consciousness” in very vague ways. At the other, many very influential researchers like Douglas Hofstadter or Stephen Wolfram had a very strong mathematical background and were therefore fascinated by beauty and simplicity rather than by truly high-dimensional chaos. I find it fascinating to prove that a cellular automaton like the Game of Life is Turing-complete (i.e., that it is a universal computer), but wouldn’t a practical application of such an automaton be more convincing and useful than a theoretical proof?
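As an aside, the Game of Life itself takes only a few lines to simulate – a minimal sketch (the grid size and pattern are just illustrative), which also shows the kind of simple-rules-complex-behavior phenomenon that fascinated this community:

```python
import numpy as np

def step(grid):
    # Count live neighbours by summing the eight shifted copies of the grid
    # (np.roll gives toroidal wrap-around at the edges)
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Conway's rules: a dead cell with exactly 3 neighbours is born,
    # a live cell with 2 or 3 neighbours survives, everything else dies
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# A glider: a 5-cell pattern that translates one cell diagonally every 4 steps
grid = np.zeros((8, 8), dtype=np.uint8)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

after4 = grid
for _ in range(4):
    after4 = step(after4)

# After 4 steps the glider is the same shape, shifted one cell down and right
assert np.array_equal(after4, np.roll(np.roll(grid, 1, axis=0), 1, axis=1))
```

The glider is the simplest mobile pattern; larger constructions built from such parts are what make the automaton Turing-complete.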

It is therefore tempting for an experimentalist and natural scientist to simply dismiss the entire field as verbal or mathematical acrobatics that will not help us understand complex systems like the brain. However, in the next section I would like to explain why I think the concept of self-organized systems should still be considered as potentially central when it comes to understanding the brain.

Self-organized systems

Over the last few years, I have become more and more convinced that complex systems cannot be understood by simply describing their behavior. Even for very simple phenomena like the Lorenz equations, where the behavior of the system can be described in terms of attractors, the low-dimensional description allows one to predict the behavior of the system, but it does not say much about the generative processes underlying the phenomenon.
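To make the Lorenz example concrete, here is a minimal simulation (standard textbook parameter values, simple Euler integration): the attractor description is a real and useful summary – the trajectory stays bounded on the familiar butterfly shape – yet nearby trajectories diverge, and neither fact tells you where the equations came from.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations:
    #   dx/dt = sigma*(y - x),  dy/dt = x*(rho - z) - y,  dz/dt = x*y - beta*z
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

# Two trajectories that start almost identically ...
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])
for _ in range(8000):  # 40 time units
    a, b = lorenz_step(a), lorenz_step(b)

# ... have diverged by many orders of magnitude relative to the initial
# offset, although both remain on the same bounded attractor.
print(np.linalg.norm(a - b), np.linalg.norm(a))
```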

Low-dimensional descriptions of brain activity are among the most active areas of current research in neuroscience. They range from descriptions of brain dynamics in terms of oscillatory regimes, to default mode networks of the human brain, to more recent attempts to break down the population activity of thousands of neurons into a low-dimensional manifold. These are beautiful descriptions of neuronal activity, and it is certainly useful to study the brain with these approaches. But do they provide us with real understanding? From one perspective, one could say that such a condensed description (if it exists, which is not yet clear) would be a form of deep understanding, since any way to compress a description is some sort of understanding. But I think there should be a deeper way of understanding that focuses on the underlying generative processes.
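To illustrate what such a low-dimensional description looks like in practice, here is a toy example (the “population activity” is entirely simulated, not real data): a handful of latent signals drive many neurons, and PCA recovers the low dimensionality – without telling us anything about where the latent signals come from.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated population activity: 100 neurons driven by 3 latent signals
T, n_neurons, n_latent = 500, 100, 3
latents = rng.normal(0, 1, (T, n_latent))        # hidden low-dim dynamics
mixing = rng.normal(0, 1, (n_latent, n_neurons)) # projection onto neurons
activity = latents @ mixing + 0.1 * rng.normal(0, 1, (T, n_neurons))

# PCA via SVD: fraction of variance captured by each component
centered = activity - activity.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = (s ** 2) / (s ** 2).sum()
print(explained[:5].round(3))  # the first 3 components dominate
```

The analysis faithfully reports “this 100-dimensional recording is really 3-dimensional” – a compressed description, but not a generative explanation.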

Imagine you want to understand artificial neural networks (deep networks). One way would be to investigate information flows and how representations of the input evolve across layers, becoming less similar to the input and more similar to the respective target label. This is an operational and valuable way to understand what is going on. In my opinion, however, a deeper understanding comes from simply naming the organizing principles which underlie the generation of the network: back-propagation of errors during learning, and the definition of the loss function.
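These two organizing principles are compact enough to write down completely. As a toy illustration (a minimal sketch, not meant to mirror any particular network discussed here), a two-layer network, a loss function, and gradient descent via backpropagation suffice to generate a solution to XOR:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic minimal task that requires a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward():
    h = np.tanh(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward()
loss0 = np.mean((out0 - y) ** 2)  # error before training

lr = 0.5
for _ in range(5000):
    h, out = forward()
    # Backpropagation: error gradient w.r.t. pre-activations, pushed backwards
    d_out = (out - y) / len(X)                 # cross-entropy/sigmoid gradient
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)      # through tanh hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

_, out = forward()
loss = np.mean((out - y) ** 2)
print(loss0, "->", loss)  # the error decreases substantially
```

Everything interesting about the final weights was generated by these few lines – which is why I call them the generative principles of the network.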

Similarly, I think it would be more interesting in neuroscience to understand the generative and organizing principles which underlie the final structure of the brain, instead of studying the representations of information in neuronal activity. It is clear that a part of the organization of the brain is encoded in the genome (e.g., guidance cues for axonal growth, the program of sequential migration of cell types, or the coarse specific connectivity across different cell types). However, the more flexible and possibly more interesting part is probably not organized by an external designer (as a deep network is), and also not directly organized by the genome. In the absence of an external designing instance, there must be self-organization at work.

Once we accept that this part of the generative principles underlying brain structure and function is self-organization, it becomes immediately clear that it might be useful to draw inspiration from complexity science and the study of self-organized systems. This connection between neuroscience and complexity science is probably evident to anybody working on complex systems, but I have the impression that this perspective is sometimes lost on systems neuroscience, and in particular on experimental neuroscience.

Self-organizing principles of neuronal networks: properties of single neurons

I believe that the most relevant building blocks of self-organization in the brain are single neurons (and not molecules, synapses or brain areas). A year ago, I argued why I think this makes sense from an evolutionary perspective (Annual report of my intuition about the brain (2019)), and why it would be interesting to understand the objective function of a single cell. The objective function would be the cell-specific generative principle that underlies the self-organization of biological neuronal networks.

Realistically speaking, this is too abstract a way of exploring biological neurons. What would be a less condensed way to describe the self-organizing principles of single neurons, analogous to back-propagation and loss functions for deep networks? I would name two main ingredients: first, the principles that determine the integration of inputs in a single neuron; second, the principles that determine how neurons connect and modify their mutual connections – which is basically nothing but plasticity rules between neurons. I am convinced that the integrative properties of neurons and the plasticity rules governing a neuron’s interactions with other cells together constitute the self-organizing principles of neuronal networks.
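A classic toy example of how these two ingredients – linear integration and a local plasticity rule – can act as a self-organizing principle is Oja's rule (a standard textbook model; the parameter values below are made up for illustration): a single neuron exposed to correlated inputs turns itself into a principal-component detector, with no external designer in the loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2D inputs: the direction [1, 1] carries most of the variance
C = np.array([[1.0, 0.8], [0.8, 1.0]])
L = np.linalg.cholesky(C)

w = rng.normal(0, 0.1, 2)   # initial synaptic weights
eta = 0.01                  # learning rate
for _ in range(20000):
    x = L @ rng.normal(0, 1, 2)  # presynaptic activity sample
    v = w @ x                    # postsynaptic activity (linear integration)
    # Oja's rule: Hebbian term (v*x) plus a local normalizing term (-v^2 * w)
    w += eta * v * (x - v * w)

# The weight vector converges (up to sign) to the leading eigenvector of the
# input covariance, here ~[1, 1]/sqrt(2), with unit norm.
print(w)
```

Nothing in the update rule refers to eigenvectors; the principal-component structure emerges from the interaction between the rule and the statistics of the input.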

This is a somewhat underwhelming conclusion, because both plasticity rules and integrative properties of neurons have been studied very intensely since the 1990s. The detour of this blog post through self-organization basically reframes what a certain branch of neuroscience has been studying anyway. However, it also makes clear – in my opinion – why the study of population dynamics and low-dimensional descriptions of neuronal activity aims at a different level of understanding. And it makes the point that the deepest understanding of biology-like networks can probably be achieved by studying the building blocks of self-organizing agents – plasticity rules and single-cell integrative properties – and not by studying the pure behavior of animals or neuronal networks.

Studying self-organized neuronal networks and plasticity rules

So, how can we study these principles of self-organized agents? Unfortunately, the last 30 years have made it quite clear that there is no single universal plasticity rule. Typical plasticity rules (spike-timing-dependent plasticity; fire-together-wire-together; NMDA-dependent potentiation) usually explain only a small fraction of the variance in the experimental data and can often be studied only under very specific experimental conditions, in most cases only in slice work. Usually, the conditions of the experiment (number of presynaptic action potentials to induce plasticity, spike frequency, etc.) are tuned to achieve strong effects, and the absence of effects under other conditions is not systematically studied and often goes unreported. In addition, plasticity rules in vivo seem to be somewhat different. Neuromodulation and other state-dependent influences might affect plasticity rules in ways that make them almost impossible to study systematically. Moreover, it is very likely that there is not a single plasticity rule that governs the same behavior across all neurons, since diversity of properties has been shown in simulations to provide robustness to neuronal circuits at many different levels. And evolution would be a fool not to make use of a property that is so easy to achieve – evolution does not care about being hard to reverse-engineer. This, however, makes it nearly impossible (although still very valuable!) to dissect these principles systematically in experiments.
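For concreteness, the pair-based spike-timing-dependent plasticity rule mentioned above is usually written as an exponential window over the pre/post spike-time difference (the parameter values below are illustrative, not fitted to any dataset – which is exactly the point about how condition-dependent these rules are):

```python
import numpy as np

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a single pre/post spike pair.

    dt_ms = t_post - t_pre: positive means the presynaptic spike
    came first (a 'causal' pairing)."""
    if dt_ms > 0:   # pre before post -> potentiation
        return a_plus * np.exp(-dt_ms / tau_ms)
    else:           # post before pre -> depression
        return -a_minus * np.exp(dt_ms / tau_ms)

# Pre 10 ms before post: potentiation; post 10 ms before pre: depression.
# The effect decays as the pairing interval grows.
print(stdp(+10.0), stdp(-10.0), stdp(+40.0))
```

Everything that makes real plasticity hard to pin down – frequency dependence, neuromodulatory gating, cell-type diversity – lives in how the amplitudes and time constants of such a window shift with the experimental condition.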

That is why I think that simulations – not experiments – could be the best starting point for understanding these self-organized networks.

There is indeed a large body of work going in this direction. If you google “self-organizing neuronal networks”, you will find a huge literature which goes back to the 1960s and is often based on very simplistic models of neurons (still heavily inspired by condensed matter physics), but there are also some interesting more recent papers that directly combine modern plasticity rules with the idea of self-organization (e.g., Lazar et al., 2009). And there are quite a few computational labs that study plasticity rules and their effect on the organization of neuronal networks – which is also a kind of self-organization – e.g., the labs of Henning Sprekeler, Claudia Clopath, Tim Vogels, Friedemann Zenke and Richard Naud, all of them influenced by Wulfram Gerstner; or Sophie Deneuve, Christian Machens and Wolfgang Maass – to name just a few of the many people who work on this topic. I think this is one of the most interesting fields of theoretical neuroscience. However, I would personally be very satisfied to see this field shift towards a better inclusion of the self-organizing perspective.

To give a random example, in a study from this year, Naumann and Sprekeler show how specific non-linear properties of neurons can mitigate a well-known problem associated with purely Hebbian plasticity rules (Presynaptic inhibition rapidly stabilises recurrent excitation in the face of plasticity, 2020). They basically take an experimental finding made quite some time ago (presynaptic inhibition of axonal boutons via GABAB receptors) and build a model that explains how this could make sense in the light of plasticity rules. This is a very valuable way of doing research, also because it takes the biological details of neurons into account and gives experimentalists a potential context and explanation for their findings. However, this approach takes the perspective of a designer or engineer, rather than that of somebody who aims to understand a self-organized system. What would be an alternative approach?

From engineered organization to self-organization

I think it would be useful to take the perspective of a neuron, and in addition an evolutionary perspective. Let’s say a neuron with certain properties (rules for when and how to connect) joins the large pool of a recurrent network. The question the neuron must solve is: how do I learn to behave meaningfully?

I’d like to give an analogy for how I think this neuron should ideally behave: a person who interacts with others in a social network, be it in real life or in the virtual world, must adjust their actions according to how they are received. Shouting loudly all the time will isolate them, because they block the receiving channels of others; being silent all the time will equally make others drop their connections. To adjust the level of output, and to adjust the content so that it will be well received, it is crucial to listen to feedback.

This is what I think could be the central question from this self-organized perspective of neuronal circuits: How does the neuron get feedback on its own actions? With feedback, I do not mean global error signals about the behavior of the organism via neuromodulation channels, but feedback on the neuron’s action potentials and its other actions. Where does this feedback come from?

If we reduce the complex network to a single cell organism that lives by itself, we can immediately see the answer to this question. The feedback comes from the external world. A spike of this single cell organism has a direct impact on the world, and the world in return acts back upon the single cell organism. It is not clear how this scales up to larger networks, but I think that this inclusion of the external world, as opposed to a machine learning-style input-output task, could be the most important ingredient that makes the step from engineered network organizations to self-organized networks.

(There are many loose connections from here to reinforcement learning using learning agents and also to predictive processing, but let’s not go into that here.)

Conclusion and summary

I’m happy to be convinced otherwise, but today I think that a very promising way to achieve a deep understanding of the brain could consist of the following ingredients, as motivated above:

1) To regard the brain as an at least partially self-organized network,

2) To use simulations together with evolutionary algorithms to explore the generative / self-organizing principles,

3) To consider properties and actions at the level of single neurons as the main parameters that can be modified during this evolutionary process, and

4) To include an external world to transition from an externally organized to a self-organized system.
