Friday, December 13, 2019

Our visual system uses recurrence in its representational dynamics

Fundamental work from Kietzmann et al. shows how recurrence (lateral connections and top-down feedback from higher visual areas to the more primary visual areas that first register visual input) operates during the formation of visual representations. This process is missing from the feedforward neural network models that dominate both engineering and neuroscience. (Click the link to the article and scroll down to see a fascinating video of their real-time magnetoencephalography (MEG) measurements.) Here are the significance statement and abstract of the article:

Understanding the computational principles that underlie human vision is a key challenge for neuroscience and could help improve machine vision. Feedforward neural network models process their input through a deep cascade of computations. These models can recognize objects in images and explain aspects of human rapid recognition. However, the human brain contains recurrent connections within and between stages of the cascade, which are missing from the models that dominate both engineering and neuroscience. Here, we measure and model the dynamics of human brain activity during visual perception. We compare feedforward and recurrent neural network models and find that only recurrent models can account for the dynamic transformations of representations among multiple regions of visual cortex.
The human visual system is an intricate network of brain regions that enables us to recognize the world around us. Despite its abundant lateral and feedback connections, object processing is commonly viewed and studied as a feedforward process. Here, we measure and model the rapid representational dynamics across multiple stages of the human ventral stream using time-resolved brain imaging and deep learning. We observe substantial representational transformations during the first 300 ms of processing within and across ventral-stream regions. Categorical divisions emerge in sequence, cascading forward and in reverse across regions, and Granger causality analysis suggests bidirectional information flow between regions. Finally, recurrent deep neural network models clearly outperform parameter-matched feedforward models in terms of their ability to capture the multiregion cortical dynamics. Targeted virtual cooling experiments on the recurrent deep network models further substantiate the importance of their lateral and top-down connections. These results establish that recurrent models are required to understand information processing in the human ventral stream.
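The architectural contrast at the heart of the study can be illustrated with a toy sketch in Python. Everything below (the random weights, the two-area "stream") is invented for illustration and is not the authors' model: a pure feedforward network computes its representation in a single sweep, while lateral and top-down connections let representations keep transforming over time, as in the MEG data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-area "ventral stream": V1 -> IT, with optional lateral (IT -> IT)
# and top-down (IT -> V1) recurrence. Weights are random; this illustrates
# only the architectural difference, not the authors' trained models.
n = 8                                          # units per area
W_ff = rng.normal(size=(n, n)) / np.sqrt(n)    # feedforward V1 -> IT
W_lat = rng.normal(size=(n, n)) / np.sqrt(n)   # lateral IT -> IT
W_td = rng.normal(size=(n, n)) / np.sqrt(n)    # top-down IT -> V1

x = rng.normal(size=n)                         # a fixed visual input

def feedforward(x, steps=5):
    """One sweep: the IT representation is computed once and stays static."""
    v1 = np.tanh(x)
    it = np.tanh(W_ff @ v1)
    return [it.copy() for _ in range(steps)]   # identical at every time step

def recurrent(x, steps=5):
    """Lateral and top-down connections let both areas evolve over time."""
    v1 = np.tanh(x)
    it = np.zeros(n)
    history = []
    for _ in range(steps):
        it = np.tanh(W_ff @ v1 + W_lat @ it)   # feedforward + lateral input
        v1 = np.tanh(x + W_td @ it)            # sensory input + top-down feedback
        history.append(it.copy())
    return history

ff_hist = feedforward(x)
rc_hist = recurrent(x)

# Feedforward representations do not change after the first sweep;
# recurrent representations keep transforming across time steps.
ff_drift = np.linalg.norm(ff_hist[-1] - ff_hist[0])   # exactly 0.0
rc_drift = np.linalg.norm(rc_hist[-1] - rc_hist[0])   # nonzero
print(ff_drift, rc_drift)
```

Only the recurrent variant produces representational dynamics at all, which is why a time-resolved measurement like MEG can distinguish the two architectures.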

Wednesday, December 11, 2019

More insight into metformin's beneficial effects on diabetes, aging, and several diseases.

A group at McMaster University has shown an effect of the diabetes drug metformin, beyond its suppression of liver glucose production, that might partially explain its beneficial effects on aging and a number of diverse diseases such as cognitive disorders, cancer, and cardiovascular disease. (There are currently over 1,500 registered clinical trials testing the effects of metformin in aging and different diseases.) Metformin induces mouse liver cells to express and secrete growth differentiation factor 15 (GDF15), a protein known to suppress appetite and cause weight loss.

I'm sorely tempted to try to get myself a prescription for the stuff! Here is the technical abstract of the article:
Metformin is the most commonly prescribed medication for type 2 diabetes, owing to its glucose-lowering effects, which are mediated through the suppression of hepatic glucose production (reviewed in refs. 1,2,3). However, in addition to its effects on the liver, metformin reduces appetite and in preclinical models exerts beneficial effects on ageing and a number of diverse diseases (for example, cognitive disorders, cancer, cardiovascular disease) through mechanisms that are not fully understood1,2,3. Given the high concentration of metformin in the liver and its many beneficial effects beyond glycemic control, we reasoned that metformin may increase the secretion of a hepatocyte-derived endocrine factor that communicates with the central nervous system4. Here we show, using unbiased transcriptomics of mouse hepatocytes and analysis of proteins in human serum, that metformin induces expression and secretion of growth differentiating factor 15 (GDF15). In primary mouse hepatocytes, metformin stimulates the secretion of GDF15 by increasing the expression of activating transcription factor 4 (ATF4) and C/EBP homologous protein (CHOP; also known as DDIT3). In wild-type mice fed a high-fat diet, oral administration of metformin increases serum GDF15 and reduces food intake, body mass, fasting insulin and glucose intolerance; these effects are eliminated in GDF15 null mice. An increase in serum GDF15 is also associated with weight loss in patients with type 2 diabetes who take metformin. Although further studies will be required to determine the tissue source(s) of GDF15 produced in response to metformin in vivo, our data indicate that the therapeutic benefits of metformin on appetite, body mass and serum insulin depend on GDF15.

Monday, December 09, 2019

Heritable gaps between chronological age and brain age are increased in common brain disorders.

Kaufmann et al. have used machine learning on a large dataset to obtain robust estimates of individual biological brain age from structural brain imaging features. The deviation between brain age and chronological age, termed the brain age gap, appears to be a promising marker of brain health. It was largest in schizophrenia, multiple sclerosis, dementia, and bipolar spectrum disorder. The authors also assessed the overlap between the genetic underpinnings of the brain age gap and common brain disorders. The bottom-line conclusion (from a very extensive and complex analysis) is that common brain disorders are associated with heritable patterns of apparent aging of the brain. Their abstract:
Common risk factors for psychiatric and other brain disorders are likely to converge on biological pathways influencing the development and maintenance of brain structure and function across life. Using structural MRI data from 45,615 individuals aged 3–96 years, we demonstrate distinct patterns of apparent brain aging in several brain disorders and reveal genetic pleiotropy between apparent brain aging in healthy individuals and common brain disorders.
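The brain-age-gap idea itself is simple enough to sketch in a few lines of Python. The simulation below is entirely invented (linear "imaging features," a closed-form ridge regression) and stands in for the study's far more elaborate pipeline: train an age predictor on healthy data, then read the gap as predicted minus chronological age.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated structural-imaging features: each feature drifts linearly with
# age plus noise, and a hypothetical "patient" group shows extra age-like
# drift. All numbers are made up; this is not the authors' pipeline.
n_train, n_test, n_feat = 400, 100, 20
slopes = rng.normal(size=n_feat)

def features(age, extra_aging=0.0):
    effective = age + extra_aging              # apparent (biological) age
    return np.outer(effective, slopes) + rng.normal(scale=5.0, size=(len(age), n_feat))

# Train a ridge regression (closed form) on healthy controls only.
age_train = rng.uniform(3, 96, n_train)
X_train = features(age_train)
X_mean, y_mean = X_train.mean(axis=0), age_train.mean()
Xc = X_train - X_mean
lam = 1.0
w = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_feat), Xc.T @ (age_train - y_mean))

def brain_age_gap(X, age):
    predicted = (X - X_mean) @ w + y_mean      # estimated brain age
    return predicted - age                     # positive gap = brain looks older

age_test = rng.uniform(20, 80, n_test)
gap_healthy = brain_age_gap(features(age_test), age_test)
gap_patient = brain_age_gap(features(age_test, extra_aging=10.0), age_test)
print(gap_healthy.mean(), gap_patient.mean())
```

In this toy setup the healthy group's mean gap hovers near zero while the "patient" group's gap is elevated by roughly the amount of extra drift, which is the pattern the study reports across several brain disorders.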

Friday, December 06, 2019

Same-sex behavior in animals - a new view.

Monk et al. offer a fresh perspective on the "problem" of how same-sex sexual behavior could have evolved. It is a problem only if different-sex sexual behavior is the baseline condition for animals, from which same-sex behavior then evolved. The authors suggest instead that same-sex behavior is bound up in the very origins of animal sex. It hasn’t had to continually re-evolve: it has always been there. The arguments of Monk and collaborators are summarized in a review by Elbein:
Instead of wondering why same-sex behavior had independently evolved in so many species, Ms. Monk and her colleagues suggest that it may have been present in the oldest parts of the animal family tree. The earliest sexually reproducing animals may have mated with any other individual they came across, regardless of sex. Such reproductive strategies are still practiced today by hermaphroditic species, like snails, and species that don’t appear to differentiate, like sea urchins.
Over time, Ms. Monk said, sexual signals evolved — different sizes, colors, anatomical features and behaviors — allowing different sexes to more accurately target each other for reproduction. But same-sex behavior continued in some organisms, leading to diverse sexual behaviors and strategies across the animal kingdom. And while same-sex behavior may grant some evolutionary benefits, an ancient origin would mean those benefits weren’t required for it to exist.
But how has same-sex behavior stuck around? The answer may be that such behaviors aren’t as evolutionarily costly as assumed. Traditionally, Ms. Monk said, any mating behavior that doesn’t produce young is seen as a waste. But animal behavior often doesn’t fit neatly into an economic accounting of costs and benefits.
Here is the abstract of Monk et al.:
Same-sex sexual behaviour (SSB) has been recorded in over 1,500 animal species with a widespread distribution across most major clades. Evolutionary biologists have long sought to uncover the adaptive origins of ‘homosexual behaviour’ in an attempt to resolve this apparent Darwinian paradox: how has SSB repeatedly evolved and persisted despite its presumed fitness costs? This question implicitly assumes that ‘heterosexual’ or exclusive different-sex sexual behaviour (DSB) is the baseline condition for animals, from which SSB has evolved. We question the idea that SSB necessarily presents an evolutionary conundrum, and suggest that the literature includes unchecked assumptions regarding the costs, benefits and origins of SSB. Instead, we offer an alternative null hypothesis for the evolutionary origin of SSB that, through a subtle shift in perspective, moves away from the expectation that the origin and maintenance of SSB is a problem in need of a solution. We argue that the frequently implicit assumption of DSB as ancestral has not been rigorously examined, and instead hypothesize an ancestral condition of indiscriminate sexual behaviours directed towards all sexes. By shifting the lens through which we study animal sexual behaviour, we can more fruitfully examine the evolutionary history of diverse sexual strategies.

Wednesday, December 04, 2019

Something in the way we move.

Gretchen Reynolds points to work by Hug et al. suggesting that each of us has a unique muscle activation signature that can be revealed during walking and pedaling. Understanding these movement patterns could help improve and refine robotics, prosthetics, physical therapy, and personalized exercise programs. On the darker side, a Chinese company (Watrix) is using computer vision to enhance the recognition of individuals in crowds by their walking postures:
...its gait recognition solution “Shuidi Shenjian” ... will enable security departments to quickly search and recognize identities by their body shape and walking posture. The company notes that this product is highly effective when targets walk from a long distance or in weak light, cover their faces or wear different clothes, and would be a great supplement to current computer vision products.
Here is the complete abstract from Hug et al.:
Although it is known that the muscle activation patterns used to produce even simple movements can vary between individuals, these differences have not been considered to prove the existence of individual muscle activation strategies (or signatures). We used a machine learning approach (support vector machine) to test the hypothesis that each individual has unique muscle activation signatures. Eighty participants performed a series of pedaling and gait tasks, and 53 of these participants performed a second experimental session on a subsequent day. Myoelectrical activity was measured from eight muscles: vastus lateralis and medialis, rectus femoris, gastrocnemius lateralis and medialis, soleus, tibialis anterior, and biceps femoris-long head. The classification task involved separating data into training and testing sets. For the within-day classification, each pedaling/gait cycle was tested using the classifier, which had been trained on the remaining cycles. For the between-day classification, each cycle from day 2 was tested using the classifier, which had been trained on the cycles from day 1. When considering all eight muscles, the activation profiles were assigned to the corresponding individuals with a classification rate of up to 99.28% (2,353/2,370 cycles) and 91.22% (1,341/1,470 cycles) for the within-day and between-day classification, respectively. When considering the within-day classification, a combination of two muscles was sufficient to obtain a classification rate >80% for both pedaling and gait. When considering between-day classification, a combination of four to five muscles was sufficient to obtain a classification rate >80% for pedaling and gait. These results demonstrate that strategies not only vary between individuals, as is often assumed, but are unique to each individual.
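The leave-one-cycle-out classification scheme described in the abstract is easy to sketch on simulated data. The authors used a support vector machine on measured EMG profiles; the toy version below substitutes a simple nearest-centroid classifier and invented "activation profiles" purely to show the within-day procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: each of 10 "participants" has a fixed activation signature
# (8 muscles x 20 time points per cycle); individual cycles are noisy copies.
# The paper used a support vector machine on real EMG; a nearest-centroid
# classifier is used here only to illustrate the leave-one-cycle-out scheme.
n_subj, n_cycles, n_feat = 10, 30, 8 * 20
signatures = rng.normal(size=(n_subj, n_feat))
cycles = signatures[:, None, :] + rng.normal(scale=0.8, size=(n_subj, n_cycles, n_feat))

correct = 0
total = n_subj * n_cycles
for s in range(n_subj):
    for c in range(n_cycles):
        # "Train" on all remaining cycles: per-subject mean profile (centroid),
        # excluding the held-out cycle for its own subject.
        centroids = cycles.mean(axis=1).copy()
        mask = np.arange(n_cycles) != c
        centroids[s] = cycles[s, mask].mean(axis=0)
        held_out = cycles[s, c]
        pred = np.argmin(np.linalg.norm(centroids - held_out, axis=1))
        correct += (pred == s)

rate = correct / total
print(f"within-day classification rate: {rate:.2%}")
```

With distinctive signatures and modest cycle-to-cycle noise, even this crude classifier assigns nearly every held-out cycle to the right "individual," mirroring the high within-day rates the authors report.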

Monday, December 02, 2019

Rival theories of consciousness being tested by large project.

In the first phase of a $20 million project, six laboratories will run experiments with more than 500 participants to test two of the primary theories of consciousness:
The first two contenders are the global workspace theory (GWT), championed by Stanislas Dehaene of the Collège de France in Paris, and the integrated information theory (IIT), proposed by Giulio Tononi of the University of Wisconsin in Madison. The GWT says the brain’s prefrontal cortex, which controls higher-order cognitive processes like decision-making, acts as a central computer that collects and prioritizes information from sensory input. It then broadcasts the information to other parts of the brain that carry out tasks. Dehaene thinks this selection process is what we perceive as consciousness. By contrast, the IIT proposes that consciousness arises from the interconnectedness of brain networks. The more neurons interact with one another, the more a being feels conscious—even without sensory input. IIT proponents suspect this process occurs in the back of the brain, where neurons connect in a gridlike structure...Tononi and Dehaene have agreed to parameters for the experiments and have registered their predictions. To avoid conflicts of interest, the scientists will neither collect nor interpret the data. If the results appear to disprove one theory, each has agreed to admit he was wrong, at least to some extent.
The labs, in the United States, Germany, the United Kingdom, and China, will use three techniques to record brain activity as volunteers perform consciousness-related tasks: functional magnetic resonance imaging, electroencephalography, and electrocorticography (a form of EEG done during brain surgery, in which electrodes are placed directly on the brain). In one experiment, researchers will measure the brain’s response when a person becomes aware of an image. The GWT predicts the front of the brain will suddenly become active, whereas the IIT says the back of the brain will be consistently active.