Wednesday, March 01, 2017

Theory of cortical function

Heeger presents a simple and lucid framework for a unified theory of cortical function that he suggests should be useful for guiding both neuroscience and artificial intelligence work. I'm passing on the significance statement, abstract, and first part of the introduction (the article, unfortunately, is not open access).

Significance
A unified theory of cortical function is proposed for guiding both neuroscience and artificial intelligence research. The theory offers an empirically testable framework for understanding how the brain accomplishes three key functions: (i) inference: perception is nonconvex optimization that combines sensory input with prior expectation; (ii) exploration: inference relies on neural response variability to explore different possible interpretations; (iii) prediction: inference includes making predictions over a hierarchy of timescales. These three functions are implemented in a recurrent and recursive neural network, providing a role for feedback connections in cortex, and controlled by state parameters hypothesized to correspond to neuromodulators and oscillatory activity.
Abstract
Most models of sensory processing in the brain have a feedforward architecture in which each stage comprises simple linear filtering operations and nonlinearities. Models of this form have been used to explain a wide range of neurophysiological and psychophysical data, and many recent successes in artificial intelligence (with deep convolutional neural nets) are based on this architecture. However, neocortex is not a feedforward architecture. This paper proposes a first step toward an alternative computational framework in which neural activity in each brain area depends on a combination of feedforward drive (bottom-up from the previous processing stage), feedback drive (top-down context from the next stage), and prior drive (expectation). The relative contributions of feedforward drive, feedback drive, and prior drive are controlled by a handful of state parameters, which I hypothesize correspond to neuromodulators and oscillatory activity. In some states, neural responses are dominated by the feedforward drive and the theory is identical to a conventional feedforward model, thereby preserving all of the desirable features of those models. In other states, the theory is a generative model that constructs a sensory representation from an abstract representation, like memory recall. In still other states, the theory combines prior expectation with sensory input, explores different possible perceptual interpretations of ambiguous sensory inputs, and predicts forward in time. The theory, therefore, offers an empirically testable framework for understanding how the cortex accomplishes inference, exploration, and prediction.
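To make the state-parameter idea concrete, here is a toy sketch of my own (not Heeger's actual equations): a single layer's activity relaxes toward a compromise among its feedforward, feedback, and prior drives, each weighted by a hypothetical state parameter.

```python
import numpy as np

def layer_update(y, x_ff, x_fb, prior, alpha=0.1,
                 lambda_ff=1.0, lambda_fb=0.0, lambda_p=0.0):
    """One relaxation step for a single model layer.

    y        : current activity of this layer
    x_ff     : feedforward drive (bottom-up input from the previous stage)
    x_fb     : feedback drive (top-down context from the next stage)
    prior    : prior drive (expectation)
    lambda_* : hypothetical state parameters weighting the three drives
    """
    # Gradient of a quadratic energy that penalizes mismatch with each drive.
    grad = (lambda_ff * (y - x_ff) +
            lambda_fb * (y - x_fb) +
            lambda_p * (y - prior))
    return y - alpha * grad

# With lambda_fb = lambda_p = 0 the activity settles on the feedforward drive alone,
# i.e. a conventional feedforward model; other settings blend in context and expectation.
y = np.zeros(4)
x_ff = np.array([1.0, 0.5, -0.3, 2.0])
for _ in range(300):
    y = layer_update(y, x_ff, x_fb=np.zeros(4), prior=np.zeros(4))
print(np.round(y, 3))  # approaches x_ff
```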
Introduction
Perception is an unconscious inference. Sensory stimuli are inherently ambiguous so there are multiple (often infinite) possible interpretations of a sensory stimulus (Fig. 1). People usually report a single interpretation, based on priors and expectations that have been learned through development and/or instantiated through evolution. For example, the image in Fig. 1A is unrecognizable if you have never seen it before. However, it is readily identifiable once you have been told that it is an image of a Dalmatian sniffing the ground near the base of a tree. Perception has been hypothesized, consequently, to be akin to Bayesian inference, which combines sensory input (the likelihood of a perceptual interpretation given the noisy and uncertain sensory input) with a prior or expectation.
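As a toy illustration of that Bayesian combination (my own made-up numbers, not from the paper), a prior learned through experience can tip perception toward one interpretation even when the sensory evidence alone slightly favors another:

```python
import numpy as np

# Two candidate interpretations of an ambiguous image, with prior expectations.
interpretations = ["dalmatian", "random blobs"]
prior = np.array([0.7, 0.3])       # learned expectation favors the familiar object

# Likelihood of the noisy sensory input under each interpretation (invented values).
likelihood = np.array([0.2, 0.25])

posterior = prior * likelihood
posterior /= posterior.sum()        # Bayes' rule: posterior is proportional to likelihood times prior

for name, p in zip(interpretations, posterior):
    print(f"{name}: {p:.2f}")
# The prior-favored interpretation wins even though its likelihood is slightly lower.
```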

Our brains explore alternative possible interpretations of a sensory stimulus, in an attempt to find an interpretation that best explains the sensory stimulus. This process of exploration happens unconsciously but can be revealed by multistable sensory stimuli (e.g., Fig. 1B), for which one’s percept changes over time. Other examples of bistable or multistable perceptual phenomena include binocular rivalry, motion-induced blindness, the Necker cube, and Rubin’s face/vase figure. Models of perceptual multistability posit that variability of neural activity contributes to the process of exploring different possible interpretations, and empirical results support the idea that perception is a form of probabilistic sampling from a statistical distribution of possible percepts. This noise-driven process of exploration is presumably always taking place. We experience a stable percept most of the time because there is a single interpretation that is best (a global minimum) with respect to the sensory input and the prior. However, in some cases, there are two or more interpretations that are roughly equally good (local minima) for bistable or multistable perceptual phenomena.
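The energy-landscape picture can be sketched in a few lines of toy code (mine, not the paper's): noisy descent on a double-well energy, whose two equally good minima stand in for two rival interpretations, produces spontaneous perceptual switches, whereas a landscape with a single global minimum would give a stable percept.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_grad(x):
    # Double-well energy E(x) = (x**2 - 1)**2 has two equally good minima at x = -1 and x = +1,
    # standing in for two rival interpretations of an ambiguous stimulus.
    return 4 * x * (x**2 - 1)

x, noise, dt = 1.0, 0.9, 0.01
percepts = []
for _ in range(20000):
    x += -dt * energy_grad(x) + noise * np.sqrt(dt) * rng.standard_normal()
    percepts.append(np.sign(x))

switches = np.sum(np.diff(percepts) != 0)
print(f"perceptual switches: {switches}")  # noise occasionally carries the system over the barrier
```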
Prediction, along with inference and exploration, may be a third general principle of cortical function. Information processing in the brain is dynamic. Visual perception, for example, occurs in both space and time. Visual signals from the environment enter our eyes as a continuous stream of information, which the brain must process in an ongoing, dynamic way. How we perceive each stimulus depends on preceding stimuli and impacts our processing of subsequent stimuli. Most computational models of vision are, however, static; they deal with stimuli that are isolated in time or at best with instantaneous changes in a stimulus (e.g., motion velocity). Dynamic and predictive processing is needed to control behavior in sync with or in advance of changes in the environment. Without prediction, behavioral responses to environmental events will always be too late because of the lag or latency in sensory and motor processing. Prediction is a key component of theories of motor control and in explanations of how an organism discounts sensory input caused by its own behavior. Prediction has also been hypothesized to be essential in sensory and perceptual processing. ...Moreover, prediction might be critical for yet a fourth general principle of cortical function: learning.

Tuesday, February 28, 2017

Universality of the cognitive architecture of pride.

An international collaboration of evolutionary psychologists suggests that a universal cognitive architecture underlies the emotion of pride, and that pride functions as an evolved guidance system that modulates behavior to cost-effectively manage and capitalize on the propensities of others to value or respect the actor:

Significance
Cross-cultural tests from 16 nations were performed to evaluate the hypothesis that the emotion of pride evolved to guide behavior to elicit valuation and respect from others. Ancestrally, enhanced evaluations would have led to increased assistance and deference from others. To incline choice, the pride system must compute for a potential action an anticipated pride intensity that tracks the magnitude of the approval or respect that the action would generate in the local audience. All tests demonstrated that pride intensities measured in each location closely track the magnitudes of others’ positive evaluations. Moreover, different cultures echo each other both in what causes pride and in what elicits positive evaluations, suggesting that the underlying valuation systems are universal.
Abstract
Pride occurs in every known culture, appears early in development, is reliably triggered by achievements and formidability, and causes a characteristic display that is recognized everywhere. Here, we evaluate the theory that pride evolved to guide decisions relevant to pursuing actions that enhance valuation and respect for a person in the minds of others. By hypothesis, pride is a neurocomputational program tailored by selection to orchestrate cognition and behavior in the service of: (i) motivating the cost-effective pursuit of courses of action that would increase others’ valuations and respect of the individual, (ii) motivating the advertisement of acts or characteristics whose recognition by others would lead them to enhance their evaluations of the individual, and (iii) mobilizing the individual to take advantage of the resulting enhanced social landscape. To modulate how much to invest in actions that might lead to enhanced evaluations by others, the pride system must forecast the magnitude of the evaluations the action would evoke in the audience and calibrate its activation proportionally. We tested this prediction in 16 countries across 4 continents (n = 2,085), for 25 acts and traits. As predicted, the pride intensity for a given act or trait closely tracks the valuations of audiences, local (mean r = +0.82) and foreign (mean r = +0.75). This relationship is specific to pride and does not generalize to other positive emotions that coactivate with pride but lack its audience-recalibrating function.
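The core test is just an act-by-act correlation, which is easy to sketch with invented numbers (mine, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_acts = 25

# Invented data: each act or trait gets an audience valuation (how much observers say they
# would value someone who did this) and a pride rating (how much pride it would evoke).
audience_valuation = rng.uniform(1, 7, n_acts)
pride_intensity = 0.8 * audience_valuation + rng.normal(0, 0.6, n_acts)

r = np.corrcoef(pride_intensity, audience_valuation)[0, 1]
print(f"pride tracks audience valuation across acts: r = {r:+.2f}")
# The study computes this kind of act-by-act correlation within each of the 16 countries
# (mean r = +0.82) and against foreign audiences (mean r = +0.75).
```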

Monday, February 27, 2017

Monday morning Schubert

On Sunday, Feb. 19, I gave a recital dedicated to the memory of David Goldberger, with whom I had performed several four-hands recitals a few years ago. He gave a recital on his 90th birthday in the summer of 2015, after his diagnosis with stomach cancer, and died in May of 2016. Franz Schubert was his passion, and his magnum opus on the life and music of Schubert was left unfinished at his death. Here is one of the pieces I played at his memorial recital.


Friday, February 24, 2017

Vitamin B3 (Niacin) protects from glaucoma

A number of anti-aging elixirs contain vitamin B3, or niacin, which is a precursor of nicotinamide adenine dinucleotide (NAD+), a key molecule in mitochondrial energy and redox metabolism. (I've tried a few mixtures with niacin myself, but find they make me a bit hyper.) Williams et al. show one clear therapeutic effect of the compound. Here is the Science summary by Crowston and Trounce, followed by the abstract from Williams et al.
Advancing age predisposes us to a number of neurodegenerative diseases, yet the underlying mechanisms are poorly understood. With some 70 million individuals affected, glaucoma is the world's leading cause of irreversible blindness. Glaucoma is characterized by the selective loss of retinal ganglion cells that convey visual messages from the photoreceptive retina to the brain. Age is a major risk factor for glaucoma, with disease incidence increasing near exponentially with increasing age. Treatments that specifically target retinal ganglion cells or the effects of aging on glaucoma susceptibility are currently lacking. On page 756 of this issue, Williams et al. (1) report substantial advances toward filling these gaps by identifying nicotinamide adenine dinucleotide (NAD+) decline as a key age-dependent risk factor and showing that restoration with long-term dietary supplementation or gene therapy robustly protects against neuronal degeneration.
Glaucomas are neurodegenerative diseases that cause vision loss, especially in the elderly. The mechanisms initiating glaucoma and driving neuronal vulnerability during normal aging are unknown. Studying glaucoma-prone mice, we show that mitochondrial abnormalities are an early driver of neuronal dysfunction, occurring before detectable degeneration. Retinal levels of nicotinamide adenine dinucleotide (NAD+, a key molecule in energy and redox metabolism) decrease with age and render aging neurons vulnerable to disease-related insults. Oral administration of the NAD+ precursor nicotinamide (vitamin B3), and/or gene therapy (driving expression of Nmnat1, a key NAD+-producing enzyme), was protective both prophylactically and as an intervention. At the highest dose tested, 93% of eyes did not develop glaucoma. This supports therapeutic use of vitamin B3 in glaucoma and potentially other age-related neurodegenerations.

Thursday, February 23, 2017

Scientific curiosity counters politically motivated reasoning.

Jasny points to work by Kahan et al. (open access) showing that science curiosity (of the sort shown by MindBlog readers!) promotes open-minded engagement with information that is contrary to individuals’ political predispositions. Jasny's summary:
Knowledge does not always change biases, and people tend to absorb information that fits their prejudices. However, rather than studying scientific knowledge, Kahan et al. studied scientific curiosity—the tendency to look for and consume scientific information for pleasure. Two sets of subjects, including several thousand people, were given questions about their interests and activities. Reactions to documentaries and to news stories that contained surprising or unsurprising material were also tracked. The more scientifically curious people were (regardless of their politics), the less likely they were to show signs of politically motivated reasoning. People with higher curiosity ratings were more willing to look at surprising information that conflicted with their political tendencies than people with lower ratings.

Wednesday, February 22, 2017

Will the “Anthropocene Era” concept slow or accelerate human impact on our planet?

Wesley Yang does a nice piece in the NYTimes Magazine, noting that the concept of an Anthropocene era, a new stage of the geological time scale that leaves behind the Holocene epoch (which began 10,000-12,000 years ago) and ushers in the sixth great extinction in earth’s history, was introduced by Paul Crutzen around the year 2000…
…to capture the imagination and frame the world in a word that would create urgency around the issue of climate change and other slow-building dangers accruing to the earth. But the risk was always that the word would capture the imagination all too well and become more like a summons to further heroic exertions to remake the world in our own image.
In Diane Ackerman’s 2014 book, “The Human Age: The World Shaped by Us,” the author declares herself “enormously hopeful” at the start of the Anthropocene. She goes on to chronicle, in a mood of excited ambivalence, the good and the bad: “a scary mass extinction of animals” and “alarming signs of climate change” but also a number of promising “revolutions” in sustainability, manufacturing, biomimicry and nanotechnology. The novelist Roy Scranton, in his short 2015 polemic, “Learning to Die in the Anthropocene,” calls on us to abandon false hope in the “toxic, cannibalistic and self-destructive” system of carbon-based capitalism and to “learn to die not as individuals, but as a civilization.” And Jedediah Purdy, author of the 2015 tract “After Nature: A Politics for the Anthropocene,” contrives to see opportunity in the crisis.
The Israeli writer and historian Yuval Harari’s book “Homo Deus,” published this month in the United States, makes the case that the 21st century will see an effort “to upgrade humans into Gods” who will take over biological evolution, replacing chance with intelligent design oriented around our desires. By merging with our technologies, humans could be released from the biases that plague our cognition, free to exercise the meticulous planning and invention required to save the planet from ourselves.
The book’s ruthless appropriation of the Anthropocene will almost certainly be regarded as an obscenity by those who first rallied around it, a celebration of the very hubris that brought us to the brink of destruction in the first place. Unwinding the damage we’ve done to the earth now represents a challenge so enormous that it forces us to dream about fantastical powers, to set about creating them and in the process either find our salvation or hasten our demise.

Tuesday, February 21, 2017

Decision-making: Judges' decisions not so legal

An article by Spamann and Klöhn in The Journal of Legal Studies presents an experiment with real judges showing that justice is less blind and less legalistic than we might hope. Here is a summary by Yeeles:
It is well known that judges utilize extra-legal information when deciding cases. What is notable from a recent experimental investigation is that precedent, a core precept of the legal model of judicial decision-making, seems to have no detectable effect on judgment when weak, while defendant characteristics play an outsized role.
Holger Spamann, of Harvard Law School, and Lars Klöhn, of Humboldt-University Berlin, report the results of an experiment that asked 32 US federal judges to decide a real appeals case from an international tribunal. The judges were presented cases with contrasting weak precedents and two fictitious defendants that varied by nationality, biography and attitude. The proportion of judges upholding the trial conviction was indistinguishable across precedents, but differed significantly by defendants. Strikingly, although perhaps not surprisingly, the judges' written reasons disregarded defendant characteristics and instead focused on precedent. Prima facie, their decisions adhered to the legal model, obscuring strategic and attitudinal factors that influenced their decisions.
The authors are hesitant to draw strong policy conclusions at this stage, and instead call for replication and refinement. Further research will be needed to obtain a broader understanding of when legally irrelevant information takes blind precedence.

Monday, February 20, 2017

Musical evolution in the lab exhibits rhythmic universals

Ravignani et al. have managed to grow the rhythmic universals of human music in the laboratory, suggesting that they arise from human cognitive and biological biases.  Their abstract:
Music exhibits some cross-cultural similarities, despite its variety across the world. Evidence from a broad range of human cultures suggests the existence of musical universals, here defined as strong regularities emerging across cultures above chance. In particular, humans demonstrate a general proclivity for rhythm, although little is known about why music is particularly rhythmic and why the same structural regularities are present in rhythms around the world. We empirically investigate the mechanisms underlying musical universals for rhythm, showing how music can evolve culturally from randomness. Human participants were asked to imitate sets of randomly generated drumming sequences and their imitation attempts became the training set for the next participants in independent transmission chains. By perceiving and imitating drumming sequences from each other, participants turned initially random sequences into rhythmically structured patterns. Drumming patterns developed into rhythms that are more structured, easier to learn, distinctive for each experimental cultural tradition and characterized by all six statistical universals found among world music; the patterns appear to be adapted to human learning, memory and cognition. We conclude that musical rhythm partially arises from the influence of human cognitive and biological biases on the process of cultural evolution.
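The transmission-chain logic is easy to caricature in code (a toy simulation of my own, not the authors' materials): each simulated "participant" copies the previous drumming sequence imperfectly, with a small bias toward evenly spaced beats, and structure accumulates down the chain.

```python
import random

random.seed(1)

def imitate(intervals, regularity_bias=0.3, noise=0.1):
    """One participant's imperfect imitation of a sequence of inter-onset intervals.

    Each interval is pulled toward the sequence mean (a crude stand-in for human biases
    toward an isochronous beat) and copying noise is added.
    """
    mean = sum(intervals) / len(intervals)
    return [max(0.05, i + regularity_bias * (mean - i) + random.gauss(0, noise))
            for i in intervals]

def irregularity(intervals):
    mean = sum(intervals) / len(intervals)
    return sum(abs(i - mean) for i in intervals) / len(intervals)

# Start from a random drumming sequence and pass it down a chain of participants.
sequence = [random.uniform(0.1, 1.0) for _ in range(8)]
print(f"generation 0: irregularity = {irregularity(sequence):.3f}")
for generation in range(1, 9):
    sequence = imitate(sequence)
    print(f"generation {generation}: irregularity = {irregularity(sequence):.3f}")
# Deviation from an even beat tends to shrink across generations:
# structure emerges from repeated imitation alone.
```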
And, some background from their article describing the six statistical universals found in world music:
Six rhythmic features can be considered human universals, showing a greater than chance frequency overall and appearing in all geographic regions of the world. These statistical universals are: 
-A regularly spaced (isochronous) underlying beat, akin to an implicit metronome. 
-Hierarchical organization of beats of unequal strength, so that some events in time are marked with respect to others. 
-Grouping of beats in two (for example, marches) or three (for example, waltzes). 
-A preference for binary (2-beat) groupings. 
-Clustering of beat durations around a few values distributed in less than five durational categories. 
-The use of durations from different categories to construct riffs, that is, rhythmic motifs or tunes.

Saturday, February 18, 2017

OhMyGawd....

I have to pass on this graphic sent by a friend, perhaps a reaction to Trump's recent press conference.


Friday, February 17, 2017

The purpose of sleep? To forget.

In two recent Science papers, de Vivo et al. and Diering et al. probe the nightlife of the synapses that control signaling between cells in our brain. They find substantial alterations in the structure and molecular machinery of synapses during sleep, providing strong evidence for synaptic downscaling during sleep and upscaling during wake, as well as clues to the molecular mechanisms. The idea is that our brain synapses grow during the day and our brain circuits become noisier. During sleep our brains pare back the connections to enhance the signal-to-noise ratio, as we forget some of the things learned during the day. Here is a summary graphic taken from the review by Acsády and Harris:


Thursday, February 16, 2017

Hacking the brain to overcome fear

Schiller does a brief review of work by Koizumi et al., which points to a method for reducing defensive responses without consciously confronting the threatening cues, paving the way for fear-reducing therapies via unconscious processing. The fMRI signals associated with the fear-conditioned stimuli established on the first day of the experiment are identified. Then, in the absence of the threatening cues, the appearance and growth of the activation pattern representing the conditioned stimulus are paired with a monetary reward in sessions over the next three days. On the fifth day, the defensive response of participants to the threatening cues is significantly reduced, as is amygdala activity. The open access articles give a more complete account. Here is the Koizumi et al. abstract:
Fear conditioning is a fundamentally important and preserved process across species. In humans it is linked to fear-related disorders such as phobias and post-traumatic stress disorder (PTSD). Fear memories can be reduced by counter-conditioning, in which fear conditioned stimuli (CS+s) are repeatedly reinforced with reward or with novel non-threatening stimuli. However, this procedure involves explicit presentations of CS+s, which is itself aversive before fear is successfully reduced. This aversiveness may be a problem when trying to translate such experimental paradigms into clinical settings. It also raises the fundamental question as to whether explicit presentations of feared objects is necessary for fear reduction. Although learning without explicit stimulus presentation has been previously demonstrated, whether fear can be reduced while avoiding explicit exposure to CS+s remains largely unknown. One recently developed approach employs an implicit method to induce learning by reinforcing stimulus-specific neural representations using real-time decoding of multivariate functional magnetic resonance imaging (fMRI) signals in the absence of stimulus presentation; that is, pairing rewards with the occurrences of multi-voxel brain activity patterns matching a specific stimulus (decoded fMRI neurofeedback (DecNef)). It has been shown that participants exhibit perceptual learning for a specific visual stimulus feature through DecNef, without being given any strategy for the induction of specific neural representations, and without awareness of the content of reinforced neural representations. Here we examined whether a similar approach could be applied to counter-conditioning of fear. We show that we can reduce fear towards CS+s by pairing rewards with the activation patterns in visual cortex representing a CS+, while participants remain unaware of the content and purpose of the procedure. This procedure may be an initial step towards novel treatments for fear-related disorders such as phobia and PTSD, via unconscious processing.
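A rough sketch of the DecNef logic (my own simplification, with simulated data; the real procedure decodes multi-voxel patterns online during scanning): a classifier trained on day-one patterns evoked by the feared stimulus later scores spontaneous activity in the absence of any stimulus, and the monetary reward tracks that score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 50

# Day 1: patterns recorded while the feared (CS+) and a control stimulus are shown
# (simulated here; real patterns come from visual cortex voxels).
cs_plus = rng.normal(0.5, 1.0, size=(80, n_voxels))
control = rng.normal(-0.5, 1.0, size=(80, n_voxels))
X = np.vstack([cs_plus, control])
y = np.array([1] * 80 + [0] * 80)
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Days 2-4: no stimulus is shown. Each spontaneous pattern is decoded, and the size of
# the reward tracks how strongly the pattern resembles the CS+ representation.
def neurofeedback_reward(spontaneous_pattern, max_reward_cents=30):  # arbitrary maximum
    p_cs_plus = decoder.predict_proba(spontaneous_pattern.reshape(1, -1))[0, 1]
    return max_reward_cents * p_cs_plus

sample = rng.normal(0.3, 1.0, size=n_voxels)
print(f"reward for this pattern: {neurofeedback_reward(sample):.1f} cents")
```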

Wednesday, February 15, 2017

Gender stereotypes emerge early.

Bian et al. (open access) find that children at age five do not consider boys and girls different with respect to being 'really, really smart' - the childhood version of adult brilliance. But by age 6, girls become more likely to assign boys to the 'really, really smart' category and to steer themselves away from games intended for that category.
Common stereotypes associate high-level intellectual ability (brilliance, genius, etc.) with men more than women. These stereotypes discourage women’s pursuit of many prestigious careers; that is, women are underrepresented in fields whose members cherish brilliance (such as physics and philosophy). Here we show that these stereotypes are endorsed by, and influence the interests of, children as young as 6. Specifically, 6-year-old girls are less likely than boys to believe that members of their gender are “really, really smart.” Also at age 6, girls begin to avoid activities said to be for children who are “really, really smart.” These findings suggest that gendered notions of brilliance are acquired early and have an immediate effect on children’s interests.

Tuesday, February 14, 2017

How our brains make meaning, with the help of a little LSD

Interesting work from Preller et al:

Highlights
•LSD-induced effects are blocked by the 5-HT2A receptor antagonist ketanserin 
•LSD increased the attribution of meaning to previously meaningless music 
•Stimulation of the 5-HT2A receptor is crucial for the generation of meaning 
•Changes in personal meaning attribution are mediated by cortical midline structures
Summary
A core aspect of the human self is the attribution of personal relevance to everyday stimuli enabling us to experience our environment as meaningful. However, abnormalities in the attribution of personal relevance to sensory experiences are also critical features of many psychiatric disorders. Despite their clinical relevance, the neurochemical and anatomical substrates enabling meaningful experiences are largely unknown. Therefore, we investigated the neuropharmacology of personal relevance processing in humans by combining fMRI and the administration of the mixed serotonin (5-HT) and dopamine receptor (R) agonist lysergic acid diethylamide (LSD), well known to alter the subjective meaning of percepts, with and without pretreatment with the 5-HT2AR antagonist ketanserin. General subjective LSD effects were fully blocked by ketanserin. In addition, ketanserin inhibited the LSD-induced attribution of personal relevance to previously meaningless stimuli and modulated the processing of meaningful stimuli in cortical midline structures. These findings point to the crucial role of the 5-HT2AR subtype and cortical midline regions in the generation and attribution of personal relevance. Our results thus increase our mechanistic understanding of personal relevance processing and reveal potential targets for the treatment of psychiatric illnesses characterized by alterations in personal relevance attribution.

Monday, February 13, 2017

An emotional experience can enhance future memory formation.

Tambini et al. show that neural effects of an emotional experience can persist, and bias how new and unrelated information is encoded and stored by our brains:
Emotional arousal can produce lasting, vivid memories for emotional experiences, but little is known about whether emotion can prospectively enhance memory formation for temporally distant information. One mechanism that may support prospective memory enhancements is the carry-over of emotional brain states that influence subsequent neutral experiences. Here we found that neutral stimuli encountered by human subjects 9–33 min after exposure to emotionally arousing stimuli had greater levels of recollection during delayed memory testing compared to those studied before emotional and after neutral stimulus exposure. Moreover, multiple measures of emotion-related brain activity showed evidence of reinstatement during subsequent periods of neutral stimulus encoding. Both slow neural fluctuations (low-frequency connectivity) and transient, stimulus-evoked activity predictive of trial-by-trial memory formation present during emotional encoding were reinstated during subsequent neutral encoding. These results indicate that neural measures of an emotional experience can persist in time and bias how new, unrelated information is encoded and recollected.

Friday, February 10, 2017

Kind words in language - changes over time

John Carson does a nice precis of Iliev et al.:
It is debated whether linguistic positivity bias (LPB) — the cross-cultural tendency to use more positive words than negative — results from a common cognitive underpinning or our environmental and cultural context.
Rumen Iliev, from the University of Michigan, and colleagues tackle the theoretical stalemate by looking at changes in positive word usage within a language over time. They use time-stamped texts from Google Books Ngrams and the New York Times to analyse LPB trends in American English over the last 200 years. They show that LPB has declined overall since 1800, which discounts the importance of universal cognition and, they suggest, aligns most strongly with a decline in social cohesion and prosociality in the United States. They find a significant association between LPB and casualty levels in war, economic performance, and measures of public happiness, suggesting that objective circumstances and subjective public mood drive its dynamics.
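The core measurement behind these trends is simple to sketch (my own illustration with invented counts, not the authors' pipeline): given yearly token counts for words from positive and negative lexicons, linguistic positivity bias in a year is just the positive share of all valenced tokens, tracked over time.

```python
from collections import defaultdict

# Toy yearly counts of the form (year, word, count), standing in for Google Books Ngrams data.
ngram_counts = [
    (1800, "good", 120), (1800, "happy", 60), (1800, "bad", 40), (1800, "war", 20),
    (1900, "good", 150), (1900, "happy", 70), (1900, "bad", 90), (1900, "war", 80),
    (2000, "good", 160), (2000, "happy", 80), (2000, "bad", 140), (2000, "war", 110),
]
positive_words = {"good", "happy"}
negative_words = {"bad", "war"}

def positivity_bias_by_year(counts):
    pos, neg = defaultdict(int), defaultdict(int)
    for year, word, n in counts:
        if word in positive_words:
            pos[year] += n
        elif word in negative_words:
            neg[year] += n
    # LPB for a year = positive tokens as a fraction of all valenced tokens.
    return {year: pos[year] / (pos[year] + neg[year]) for year in pos}

for year, lpb in sorted(positivity_bias_by_year(ngram_counts).items()):
    print(year, round(lpb, 2))
# 1800: 0.75, 1900: 0.56, 2000: 0.49, a declining positivity bias like the one
# Iliev et al. report for American English.
```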
Analysing time-stamped historical texts is a powerful way to investigate evolving behaviours. The next step will be to look across other languages and historical events and tease apart the contribution of different contextual factors to LPB.

Thursday, February 09, 2017

Mysterianism

Here is Nicholas Carr's statement of an argument that has always appealed to me, a concept that should be more widely known. Roughly: "I don't expect my cat to understand quantum physics; why should I imagine that I will ever be able to understand consciousness or the basic nature of our universe?"
By leaps, steps, and stumbles, science progresses. Its seemingly inexorable advance promotes a sense that everything can be known and will be known. Through observation and experiment, and lots of hard thinking, we will come to explain even the murkiest and most complicated of nature’s secrets: consciousness, dark matter, time, the full story of the universe.
But what if our faith in nature’s knowability is just an illusion, a trick of the overconfident human mind? That’s the working assumption behind a school of thought known as mysterianism. Situated at the fruitful if sometimes fraught intersection of scientific and philosophic inquiry, the mysterianist view has been promulgated, in different ways, by many respected thinkers, from the philosopher Colin McGinn to the cognitive scientist Steven Pinker. The mysterians propose that human intellect has boundaries and that some of nature’s mysteries may forever lie beyond our comprehension.
Mysterianism is most closely associated with the so-called hard problem of consciousness: How can the inanimate matter of the brain produce subjective feelings? The mysterians argue that the human mind may be incapable of understanding itself, that we will never understand how consciousness works. But if mysterianism applies to the workings of the mind, there’s no reason it shouldn’t also apply to the workings of nature in general. As McGinn has suggested, “It may be that nothing in nature is fully intelligible to us.”
The simplest and best argument for mysterianism is founded on evolutionary evidence. When we examine any other living creature, we understand immediately that its intellect is limited. Even the brightest, most curious dog is not going to master arithmetic. Even the wisest of owls knows nothing of the anatomy of the field mouse it devours. If all the minds that evolution has produced have bounded comprehension, then it’s only logical that our own minds, also products of evolution, would have limits as well. As Pinker has observed, “The brain is a product of evolution, and just as animal brains have their limitations, we have ours.” To assume that there are no limits to human understanding is to believe in a level of human exceptionalism that seems miraculous, if not mystical.
Mysterianism, it’s important to emphasize, is not inconsistent with materialism. The mysterians don’t suggest that what’s unknowable must be spiritual. They posit that matter itself has complexities that lie beyond our ken. Like every other animal on earth, we humans are just not smart enough to understand all of nature’s laws and workings.
What’s truly disconcerting about mysterianism is that, if our intellect is bounded, we can never know how much of existence lies beyond our grasp. What we know or may in the future know may be trifling compared with the unknowable unknowns. “As to myself,” remarked Isaac Newton in his old age, “I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” It may be that we are all like that child on the strand, playing with the odd pebble or shell—and fated to remain so.
Mysterianism teaches us humility. Through science, we have come to understand much about nature, but much more may remain outside the scope of our perception and comprehension. If the mysterians are right, science’s ultimate achievement may be to reveal to us its own limits.

Wednesday, February 08, 2017

Feel good fractals.

I want to point to this excerpt from Florence Williams' new book, "The Nature Fix: Why Nature Makes Us Happier, Healthier, and More Creative," that appears on aeon's website. It describes the work and ideas of physicist Richard Taylor, who noted in a Nature paper in 1999 that Jackson Pollock's paintings were fractal in design, anticipating the scientific description of fractals by some 25 years. Here is one clip from the text:
...Taylor ran experiments to gauge people’s physiological response to viewing images with similar fractal geometries. He measured people’s skin conductance (a measure of nervous system activity) and found that they recovered from stress 60 per cent better when viewing computer images with a mathematical fractal dimension (called D) of between 1.3 and 1.5. D measures the ratio of the large, coarse patterns (the coastline seen from a plane, the main trunk of a tree, Pollock’s big-sweep splatters) to the fine ones (dunes, rocks, branches, leaves, Pollock’s micro-flick splatters). Fractal dimension is typically notated as a number between 1 and 2; the more complex the image, the higher the D.
Next, Taylor and Caroline Hägerhäll, a Swedish environmental psychologist with a specialty in human aesthetic perception, converted a series of nature photos into a simplistic representation of the landforms’ fractal silhouettes against the sky. They found that people overwhelmingly preferred images with a low to mid-range D (between 1.3 and 1.5). To find out if that dimension induced a particular mental state, they used EEG to measure people’s brain waves while viewing geometric fractal images. They discovered that in that same dimensional magic zone, the subjects’ frontal lobes easily produced the feel-good alpha brainwaves of a wakefully relaxed state. This occurred even when people looked at the images for only one minute.
EEG measures waves, or electrical frequency, but it doesn’t precisely map the active real estate in the brain. For that, Taylor has now turned to functional MRI, which shows the parts of the brain working hardest by imaging the blood flow. Preliminary results show that mid-range fractals activate some brain regions that you might expect, such as the ventrolateral cortex (involved with high-level visual processing) and the dorsolateral cortex, which codes spatial long-term memory. But these fractals also engage the parahippocampus, which is involved with regulating emotions and is also highly active while listening to music. To Taylor, this is a cool finding. ‘We were delighted to find [mid-range fractals] are similar to music,’ he said. In other words, looking at an ocean might have a similar effect on us emotionally as listening to Brahms.
But why is the mid-range of D (remember, that’s the ratio of large to small patterns) so magical and so highly preferred among most people? Taylor and Hägerhäll have an interesting theory, and it doesn’t necessarily have to do with a romantic yearning for Arcadia. In addition to lungs, capillaries and neurons, another human system is branched into fractals: the visual system as expressed by the movement of the eye’s retina. When Taylor used an eye-tracking machine to measure precisely where people’s pupils were focusing on projected images (of Pollock paintings, for example, but also other things), he saw that the pupils used a search pattern that was itself fractal. The eyes first scanned the big elements in the scene and then made micro passes in smaller versions of the big scans, and they do this at a mid-range D. Interestingly, if you draw a line over the tracks that animals make to forage for food, for example, albatrosses surveying the ocean, you also see this fractal pattern of search trajectories. It’s simply an efficient search strategy, said Taylor.
‘Your visual system is in some way hardwired to understand fractals,’ said Taylor. ‘The stress-reduction is triggered by a physiological resonance that occurs when the fractal structure of the eye matches that of the fractal image being viewed.’
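For readers who want to play with this, the fractal dimension D of a black-and-white image can be estimated by box counting; here is a minimal sketch of my own (a simplification of the analyses Taylor ran on digitized Pollock canvases):

```python
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension D of a binary image (True = part of the pattern).

    Count how many boxes of each size contain any of the pattern; D is the slope of
    log(count) against log(1 / box size).
    """
    counts = []
    h, w = image.shape
    for size in box_sizes:
        occupied = 0
        for i in range(0, h, size):
            for j in range(0, w, size):
                if image[i:i + size, j:j + size].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A crude random-walk "drip" pattern as a stand-in for a scanned painting or coastline.
rng = np.random.default_rng(0)
img = np.zeros((256, 256), dtype=bool)
x, y = 128, 128
for _ in range(20000):
    x = (x + rng.integers(-1, 2)) % 256
    y = (y + rng.integers(-1, 2)) % 256
    img[x, y] = True
print(f"estimated D = {box_counting_dimension(img):.2f}")
# Compare with the mid-range D of roughly 1.3 to 1.5 that Taylor's subjects preferred.
```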

Tuesday, February 07, 2017

Why our supermarket tomatoes are sturdy and flavorless.

Having dinked with tomato breeding and genetic manipulations to make our supermarket tomatoes sturdy, colorful, and tasteless, geneticists have now tried to figure out why the flavor got thrown away. Tieman et al. combined tasting panels with chemical and genomic analyses of nearly 400 varieties of tomatoes to identify flavorful components that have been lost over time. Now maybe they will get to work and put the flavor back in?
Modern commercial tomato varieties are substantially less flavorful than heirloom varieties. To understand and ultimately correct this deficiency, we quantified flavor-associated chemicals in 398 modern, heirloom, and wild accessions. A subset of these accessions was evaluated in consumer panels, identifying the chemicals that made the most important contributions to flavor and consumer liking. We found that modern commercial varieties contain significantly lower amounts of many of these important flavor chemicals than older varieties. Whole-genome sequencing and a genome-wide association study permitted identification of genetic loci that affect most of the target flavor chemicals, including sugars, acids, and volatiles. Together, these results provide an understanding of the flavor deficiencies in modern commercial varieties and the information necessary for the recovery of good flavor through molecular breeding.

Monday, February 06, 2017

MindBlog’s 11th anniversary…some statistics.

Today is MindBlog’s 11th anniversary. I let the 10th anniversary pass without noticing, so I want to briefly comment this year. Google Analytics tells me that there have been about four million views of the 4,115 posts that have appeared thus far. After I have weeded out the 4-5 comments submitted each week whose purpose is to insert a link to a commercial site, authentic comments on the posts are few and far between. I also receive 1-2 emails each week from sites wanting to contribute a post and get a crosslink to their site. My cut-and-paste response is: “I must decline your kind offer. MindBlog is my own idiosyncratic hobby, and I only post content that I initiate.  I have no interest in revenue.”

The blog passes on material that I would be reading even if I were not doing the blogging gig, and it would seem a shame not to share what I find interesting. While I do write occasional posts that are entirely of my composition, most of the posting is better described as ‘curated content.’ I keep a queue of 5-10 completed posts that are post-dated for automatic 3 a.m. daily posting by the Blogger platform. My actual writing occurs in bursts. (It is not happening this week, while my husband and I are on a Caribbean cruise!)

Each day MindBlog receives about 1,500 page views. A typical post, on its first day, will gather 300-600 views, rising to ~1,000 views after two weeks. MindBlog’s RSS feed has ~750 followers, and the automatic reposts to twitter have ~1450 followers. I haven’t monitored the response to reposts on Facebook and Google+.

I’m grateful that this many people seem to find the material interesting. The occasional emails from readers who express gratitude for my efforts motivate me to continue.

Friday, February 03, 2017

Artificial intelligence: Machines that reason

Stavroula Kousta does a precis of work reported by Graves et al. of the Google DeepMind project:
Complex reasoning is a hallmark of natural intelligence, as is learning from experience. Artificial neural networks — biologically inspired computational models — also learn from examples and excel at pattern recognition tasks, such as object and speech recognition. However, they cannot handle complex reasoning tasks that require memory for their solution.
Alex Graves, Greg Wayne and co-workers at Google DeepMind have now developed a neural network with read–write access to external memory, called a differentiable neural computer (DNC). The DNC's two modules — the memory and the neural network that controls it — interact like a digital computer's RAM and CPU, but do not need to be programmed. The system learns through exposure to examples to provide highly accurate responses to questions that require deductive reasoning (for example, “Sheep are afraid of wolves. Gertrude is a sheep. What is Gertrude afraid of?”), to traverse a novel network (for example, the London Underground map), and to carry out logical planning tasks.
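The architectural idea, a controller coupled to external memory through soft content-based addressing so that reading and writing remain differentiable, can be caricatured in a few lines (my own toy code; the published DNC also learns where to write and tracks the order of writes):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyKeyValueMemory:
    """A toy external memory with content-based reads, loosely in the spirit of the DNC."""

    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        # Location-based write: simply append to the next free slot.
        self.keys.append(np.asarray(key, float))
        self.values.append(np.asarray(value, float))

    def read(self, query, sharpness=5.0):
        # Content-based read: similarity of the query to every stored key gives soft
        # attention weights, and the result is a weighted blend of stored values,
        # which keeps the whole operation differentiable.
        K, V = np.stack(self.keys), np.stack(self.values)
        w = softmax(sharpness * K @ np.asarray(query, float))
        return w @ V

mem = TinyKeyValueMemory()
mem.write(key=[1, 0, 0], value=[0.9, 0.1])   # e.g. a fact like "Gertrude is a sheep"
mem.write(key=[0, 1, 0], value=[0.1, 0.9])   # e.g. "sheep are afraid of wolves"
print(np.round(mem.read([1, 0, 0]), 2))       # attends mostly to the first stored fact
```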
This work represents a major leap forward in showing how symbolic reasoning can arise from an entirely non-symbolic system that learns through experience.