Decomposition of the local arrow of time in the brain

Yasser Roudi¹ and John Hertz²

• ¹Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
• ²Nordic Institute for Theoretical Physics (NORDITA), Stockholm, Sweden

Physics 15, 133

Researchers have developed a way to quantitatively assess irreversibility in complex networks.

Figure 1: Scientists have developed a method to calculate the irreversibility of the behavior of complex systems. (Credit: S. Mohsenin)

A cracked egg cannot spontaneously mend itself, and a drop of ink mixed into water cannot spontaneously unmix. Nature is full of such irreversible phenomena, processes that cannot undo themselves. This irreversibility is quantified by the so-called entropy production rate, which, according to the second law of thermodynamics, can never be negative [1]. One can therefore think of the rate of entropy production as a measure of the flow, or “arrow,” of time for a system. However, this quantity is difficult to measure for complex systems, such as the brain, whose constituents interact in nontrivial ways. Now Christopher Lynn of the City University of New York and Princeton University and colleagues present a method to quantify entropy production in such systems [2, 3] (Fig. 1). The team applies the method to the activity of neurons in the retina of a salamander as the system responds to a series of complex visual stimuli. The work opens the door to quantitative analysis of the arrow of time in complex biological systems, such as neuronal networks in the brain, where it could eventually enable a quantitative understanding of the neural basis of our perception of the passage of time.

At a basic level, the irreversibility of a system is mathematically equivalent to the “distance” between the probability that the system makes a transition between two states and the probability of the reverse transition [4]. Whenever these probabilities differ, the distance is positive. Irreversibility, so defined, is not an all-or-nothing quantity; it can take any nonnegative value. Estimating this value from data taken from real systems can be very difficult, especially for biological systems, most of which are highly complex and have many interacting variables. In the brain, for example, there are billions of neurons, the brain’s information messengers, which emit brief voltage pulses, or spikes; the patterns of these spikes constitute the states of the system. Even a millimeter-sized piece of the brain contains thousands of neurons, making such measurements difficult. Current experimental techniques allow scientists to record, with high temporal resolution, the spike trains of at most a few hundred neurons, and those spike trains cannot be recorded for long enough to permit a direct calculation of irreversibility for networks of interconnected neurons. There is therefore a need for ways of calculating irreversibility that work with the kind of limited data available. This is the problem that Lynn and colleagues address.
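This notion of distance can be made precise. For a stationary process observed over pairs of consecutive states, a standard expression for the entropy production per time step, written here in the language of stochastic thermodynamics [4] (the estimator used by Lynn and colleagues follows the same logic but may differ in detail), is

\dot{S} \;=\; \sum_{x,\,x'} p(x)\,W(x \to x')\,\ln\!\frac{p(x)\,W(x \to x')}{p(x')\,W(x' \to x)} \;\geq\; 0,

where p(x) is the steady-state probability of state x and W(x → x′) is the probability of a transition from x to x′. The sum vanishes only when every transition is exactly as likely as its reverse.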

In their study, Lynn and colleagues show that, for a system like the brain, the irreversibility can be decomposed into a series of terms that are calculated from first-order statistics, pairwise statistics, triplet statistics, and so on, up to Nth order, where N is the number of variables. In the case of the brain, these terms correspond to contributions from the spike statistics of single neurons, pairs of neurons, triplets of neurons, and so on.
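One natural way to organize such a series, sketched here in our own notation rather than in the precise construction of Refs. [2, 3], is as a sum of increments,

\dot{S} \;=\; \sum_{k=1}^{N} \Delta_k, \qquad \Delta_k \;=\; \dot{S}_k - \dot{S}_{k-1},

where \dot{S}_k denotes the irreversibility of the least-structured dynamics consistent with the observed statistics up to order k, with \dot{S}_0 = 0. Each increment \Delta_k then measures how much additional irreversibility is revealed by k-neuron statistics beyond what the lower orders already account for, and the sum telescopes to the full irreversibility when all N orders are included.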

The team shows that this decomposition can be used to calculate the irreversibility of some simple, small models for which the entire series can be evaluated. For example, the researchers apply the method to a system of Boolean logic gates, which perform operations on multiple binary inputs to produce a single binary output. They also consider a simulated system of neurons that produces spike trains of the length that can be recorded today. In that case, they find that they can accurately estimate the lower-order terms but not the higher-order ones: as the order increases, more and more data are needed, and the estimates eventually become unreliable. This finding indicates that, provided the series converges sufficiently fast for a given system, the formalism should allow a reasonable estimate of the irreversibility.
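To illustrate why the data demands grow so quickly, the following minimal sketch (our own illustration, not the estimator of Refs. [2, 3]; the function name and toy data are hypothetical) computes a plug-in estimate of the total irreversibility of a binary spike raster by comparing the empirical statistics of consecutive joint states in the forward and time-reversed recordings:

```python
import numpy as np

def plugin_irreversibility(spikes):
    """Plug-in estimate of irreversibility (nats per time step) for a binary
    spike raster of shape (T, N): the Kullback-Leibler divergence between the
    empirical distributions of consecutive joint states in the forward and
    time-reversed recordings."""
    T, N = spikes.shape
    # Encode each N-neuron firing pattern as a single integer state label.
    states = spikes.astype(int) @ (1 << np.arange(N))
    # Count how often each ordered pair of consecutive states is observed.
    counts = np.zeros((2 ** N, 2 ** N))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    p_forward = counts / counts.sum()
    p_reverse = p_forward.T  # in the reversed recording, the pair (a, b) becomes (b, a)
    # Crude regularization: skip pairs never observed in one of the two directions.
    mask = (p_forward > 0) & (p_reverse > 0)
    return np.sum(p_forward[mask] * np.log(p_forward[mask] / p_reverse[mask]))

# Toy raster: neuron 1 tends to fire one time step after neuron 0, a directed
# dependence that makes the joint dynamics statistically irreversible.
rng = np.random.default_rng(0)
n0 = rng.random(10_000) < 0.2
n1 = np.roll(n0, 1) | (rng.random(10_000) < 0.05)
print(plugin_irreversibility(np.column_stack([n0, n1])))
```

For this two-neuron toy example the estimate is reliably positive, but the number of transition probabilities to be estimated grows with the joint state space as 2^N, so no realistic recording could sample them all for more than a handful of neurons. That explosion is exactly the problem the order-by-order decomposition is designed to tame.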

Turning to real neural data, Lynn and colleagues apply their method to spike trains recorded from the retina of a salamander exposed to different visual stimuli: a movie of a natural scene and an artificially constructed, statistically reversible movie showing Brownian motion. They show that their data are sufficient to estimate terms up to fifth or sixth order, meaning that a complete analysis is possible only for groups of up to five or six neurons. The team then makes the following observations. First, the measured degree of irreversibility depends on the movie shown. Second, the irreversibility is positive even when the stimulus is reversible, indicating that the irreversibility of the spike trains is not simply inherited from the stimulus. Neither of these results is particularly surprising, and the latter is expected given that the retina is a biological machine that responds to, but does not exactly copy, the statistics of the visual stimuli. What is perhaps surprising is the observation that the irreversibility of the spike trains is greater when the stimulus is reversible than when it is not.

The team also finds that low-order (especially pairwise) statistics account for most of the total estimated irreversibility. If this result turns out to hold for much larger populations of neurons, it will be good news, since experimentalists could then use the lower-order statistics to get the information they need without going to higher and higher orders. This outcome is not guaranteed, however: the number of possible higher-order correlations grows rapidly with the size of the population, so their contribution may be more significant for larger populations [5]. Eventually, it might even be possible to apply the method to networks large enough to have behavioral relevance, allowing researchers to address questions such as whether, or how, the subjectively perceived passage of time is related to irreversibility in network dynamics.

Experiments to address these questions are, of course, not yet possible. Yet, with the remarkable technological advances currently taking place in data collection and in the manipulation of complex systems, that may soon change. The quantitative framework developed by Lynn and colleagues will help design and analyze such experiments.

References

  1. E. Fermi, Thermodynamics (Dover Publications, New York, 1936).
  2. C. W. Lynn et al., “Decomposition of the local arrow of time in interacting systems,” Phys. Rev. Lett. 129, 118101 (2022).
  3. C. W. Lynn et al., “Emergence of local irreversibility in complex interacting systems,” Phys. Rev. E 106, 034102 (2022).
  4. U. Seifert, “Stochastic thermodynamics, fluctuation theorems and molecular machines,” Rep. Prog. Phys. 75, 126001 (2012).
  5. Y. Roudi et al., “Pairwise maximum entropy models for studying large biological systems: When they work and when they don’t,” PLoS Comput. Biol. 5, e1000380 (2009).

About the authors


Yasser Roudi is a professor at the Kavli Institute for Systems Neuroscience at the Norwegian University of Science and Technology. He received his Ph.D. from the International School for Advanced Studies (SISSA), Trieste, Italy, and has also worked at University College London, UK; the Nordic Institute for Theoretical Physics (NORDITA), Sweden; and the Institute for Advanced Study, New Jersey. His research interests include the statistical physics of disordered systems, the theory of neural computation, and statistical inference. In 2015, he was awarded the Eric Kandel Young Neuroscientist Prize for his contributions to statistical physics and information processing. His recent work focuses on understanding efficient data processing in the undersampled regime and on the role of nonlinearities in neural information processing.


John Hertz is an emeritus professor at the Nordic Institute for Theoretical Physics (NORDITA), an institute hosted by Stockholm University and the KTH Royal Institute of Technology, Sweden, as well as at the Niels Bohr Institute at the University of Copenhagen, Denmark. He received his Ph.D. from the University of Pennsylvania and worked at the University of Cambridge, UK, and the University of Chicago before moving to NORDITA in 1980. He has worked on problems in condensed-matter and statistical physics, especially itinerant-electron magnetism, spin glasses, and artificial neural networks. In recent decades, he has worked primarily in theoretical neuroscience, with a particular focus on cortical circuit dynamics and network inference.

