Tuesday, April 18, 2017

Scratching is contagious.

The precis from Science Magazine, followed by the abstract:
Observing someone else scratching themselves can make you want to do so. This contagious itching has been observed in monkeys and humans, but what about rodents? Yu et al. found that mice do imitate scratching when they observe it in other mice. The authors identified a brain area called the suprachiasmatic nucleus as a key circuit for mediating contagious itch. Gastrin-releasing peptide and its receptor in the suprachiasmatic nucleus were necessary and sufficient to transmit this contagious behavior.
Abstract
Socially contagious itch is ubiquitous in human society, but whether it exists in rodents is unclear. Using a behavioral paradigm that does not entail prior training or reward, we found that mice scratched after observing a conspecific scratching. Molecular mapping showed increased neuronal activity in the suprachiasmatic nucleus (SCN) of the hypothalamus of mice that displayed contagious scratching. Ablation of gastrin-releasing peptide receptor (GRPR) or GRPR neurons in the SCN abolished contagious scratching behavior, which was recapitulated by chemogenetic inhibition of SCN GRP neurons. Activation of SCN GRP/GRPR neurons evoked scratching behavior. These data demonstrate that GRP-GRPR signaling is necessary and sufficient for transmitting contagious itch information in the SCN. The findings may have implications for our understanding of neural circuits that control socially contagious behaviors.

Monday, April 17, 2017

Is "The Stack" the way to understand everything?

When the Apple II computer arrived in 1977, I eagerly took its BASIC language tutorials and began writing simple programs to work with my laboratory’s data. When Apple Pascal, based on the UCSD Pascal system, arrived in 1979, I plunged in and wrote a number of data analysis programs. Pascal is a structured programming language, and I soon found myself structuring my mental life around its metaphors. Thus Herrman’s recent article on “the stack” has a particular resonance for me. Some clips:
…the explanatory metaphors of a given era incorporate the devices and the spectacles of the day…technology that Greeks and Romans developed for pumping water, for instance, underpinned their theories of the four humors and the pneumatic soul. Later, during the Enlightenment, clockwork mechanisms left their imprint on materialist arguments that man was only a sophisticated machine. And as of 1990, it was concepts from computing that explained us to ourselves…
We don’t just talk intuitively about the ways in which people are “programmed” — we talk about our emotional “bandwidth” and look for clever ways to “hack” our daily routines. These metaphors have developed right alongside the technology from which they’re derived…Now we’ve arrived at a tempting concept that promises to contain all of this: the stack. These days, corporate managers talk about their solution stacks and idealize “full stack” companies; athletes share their recovery stacks and muscle-building stacks; devotees of so-called smart drugs obsessively modify their brain-enhancement stacks to address a seemingly infinite range of flaws and desires.
“Stack,” in technological terms, can mean a few different things, but the most relevant usage grew from the start-up world: A stack is a collection of different pieces of software that are being used together to accomplish a task.
An individual application’s stack might include the programming languages used to build it, the services used to connect it to other apps or the service that hosts it online; a “full stack” developer would be someone proficient at working with each layer of that system, from bottom to top. The stack isn’t just a handy concept for visualizing how technology works. For many companies, the organizing logic of the software stack becomes inseparable from the logic of the business itself. The system that powers Snapchat, for instance, sits on top of App Engine, a service owned by Google; to the extent that Snapchat even exists as a service, it is as a stack of different elements. …A healthy stack, or a clever one, is tantamount (the thinking goes) to a well-structured company…On StackShare, Airbnb lists over 50 services in its stack, including items as fundamental as the Ruby programming language and as complex and familiar as Google Docs.
Other attempts to elaborate on the stack have been more rigorous and comprehensive, less personal and more global. In a 2016 book, “The Stack: On Software and Sovereignty,” the professor and design theorist Benjamin Bratton sets out to, in his words, propose a “specific model for the design of political geography tuned to this era of planetary-scale computation,” by drawing on the “multilayered structure of software, hardware and network ‘stacks’ that arrange different technologies vertically within a modular, interdependent order.” In other words, Bratton sees the world around us as one big emerging technological stack. In his telling, the six-layer stack we inhabit is complex, fluid and vertigo-inducing: Earth, Cloud, City, Address, Interface and User. It is also, he suggests, extremely powerful, with the potential to undermine and replace our current conceptions of, among other things, the sovereign state — ushering us into a world blown apart and reassembled by software. This might sound extreme, but such is the intoxicating logic of the stack.
As theory, the stack remains mostly a speculative exercise: What if we imagined the whole world as software? And as a popular term, it risks becoming an empty buzzword, used to refer to any collection, pile or system of different things. (What’s your dental care stack? Your spiritual stack?) But if tech start-ups continue to broaden their ambitions and challenge new industries — if, as the venture-capital firm Andreessen Horowitz likes to say, “software is eating the world” — then the logic of the stack can’t be trailing far behind, ready to remake more and more of our economy and our culture in its image. It will also, of course, be subject to the warning with which Daugman ended his 1990 essay. “We should remember,” he wrote, “that the enthusiastically embraced metaphors of each ‘new era’ can become, like their predecessors, as much the prison house of thought as they first appeared to represent its liberation.”
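As an aside from me (not from Herrman’s article), the software sense of “stack” described above can be made concrete with a toy sketch: an application represented as a list of named layers, and a “full stack” developer as someone whose skills cover all of them. The layer names and technologies below are arbitrary examples, not any real company’s stack.

```python
# Toy illustration of a software "stack": ordered layers of technology that
# together make up one application. The specific entries are hypothetical.
APP_STACK = [
    ("hosting", "Google App Engine"),
    ("backend", "Ruby"),
    ("database", "PostgreSQL"),
    ("frontend", "React"),
]

def stack_coverage(skills, stack):
    """Fraction of the stack's layers a developer can work on."""
    return sum(tech in skills for _, tech in stack) / len(stack)

dev_skills = {"Ruby", "PostgreSQL", "React", "Google App Engine"}
print(f"Coverage: {stack_coverage(dev_skills, APP_STACK):.0%}")  # 100% -> "full stack"
```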

Friday, April 14, 2017

Anterior temporal lobe and the representation of knowledge about people

Anzellotti frames work by Wang et al.:
Patients with semantic dementia (SD), a neurodegenerative disease affecting the anterior temporal lobes (ATL), present with striking cognitive deficits: they can have difficulties naming objects and familiar people from both pictures and descriptions. Furthermore, SD patients make semantic errors (e.g., naming “horse” a picture of a zebra), suggesting that their impairment affects object knowledge rather than lexical retrieval. Because SD can affect object categories as disparate as artifacts, animals, and people, as well as multiple input modalities, it has been hypothesized that ATL is a semantic hub that integrates information across multiple modality-specific brain regions into multimodal representations. With a series of converging experiments using multiple analysis techniques, Wang et al. test the proposal that ATL is a semantic hub in the case of person knowledge, investigating whether ATL: (i) encodes multimodal representations of identity, and (ii) mediates the retrieval of knowledge about people from representations of perceptual cues.
The Wang et al. Significance and Abstract statements:

Significance
Knowledge about other people is critical for group survival and may have unique cognitive processing demands. Here, we investigate how person knowledge is represented, organized, and retrieved in the brain. We show that the anterior temporal lobe (ATL) stores abstract person identity representation that is commonly embedded in multiple sources (e.g. face, name, scene, and personal object). We also found the ATL serves as a “neural switchboard,” coordinating with a network of other brain regions in a rapid and need-specific way to retrieve different aspects of biographical information (e.g., occupation and personality traits). Our findings endorse the ATL as a central hub for representing and retrieving person knowledge.
Abstract
Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a 2-day training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.

Thursday, April 13, 2017

Lying is a feature, not a bug, of Trump’s presidency.

PolitiFact rates half of Trump’s disputed public statements as completely false. Adam Smith points out that Trump is telling…
“blue” lies—a psychologist’s term for falsehoods, told on behalf of a group, that can actually strengthen the bonds among the members of that group…blue lies fall in between generous “white” lies and selfish “black” ones.
…lying is a feature, not a bug, of Trump’s campaign and presidency. It serves to bind his supporters together and strengthen his political base—even as it infuriates and confuses most everyone else. In the process, he is revealing some complicated truths about the psychology of our very social species.
…while black lies drive people apart and white lies draw them together, blue lies do both: They help bring some people together by deceiving those in another group. For instance, if a student lies to a teacher so her entire class can avoid punishment, her standing with classmates might actually increase.
A variety of research highlights...
...a difficult truth about our species: We are intensely social creatures, but we’re prone to divide ourselves into competitive groups, largely for the purpose of allocating resources. People can be “prosocial”—compassionate, empathic, generous, honest—in their groups, and aggressively antisocial toward outside groups. When we divide people into groups, we open the door to competition, dehumanization, violence—and socially sanctioned deceit.
If we see Trump’s lies not as failures of character but rather as weapons of war, then we can come to understand why his supporters might see him as an effective leader. To them, Trump isn’t Hitler (or Darth Vader, or Voldemort), as some liberals claim—he’s President Roosevelt, who repeatedly lied to the public and the world on the path to victory in World War II.
...partisanship for many Americans today takes the form of a visceral, even subconscious, attachment to a party group...Democrats and Republicans have become not merely political parties but tribes, whose affiliations shape the language, dress, hairstyles, purchasing decisions, friendships, and even love lives of their members.
...when the truth threatens our identity, that truth gets dismissed. For millions and millions of Americans, climate change is a hoax, Hillary Clinton ran a sex ring out of a pizza parlor, and immigrants cause crime. Whether they truly believe those falsehoods or not is debatable—and possibly irrelevant. The research to date suggests that they see those lies as useful weapons in a tribal us-against-them competition that pits the “real America” against those who would destroy it.
Perhaps the above clips will motivate you to read Smith's entire article, which goes on to discuss how anger fuels lying and suggests some approaches to defying blue lies.

Wednesday, April 12, 2017

How exercise calms anxiety.

Another mouse story, as in the previous post, hopefully applicable to us humans. Gretchen Reynolds points to work from Gould and colleagues at Princeton showing that in the hippocampus of mice on a running regimen, not only are new excitatory neurons and synapses generated, but inhibitory neurons also become more likely to activate in response to stress, damping the excitatory neurons. This is a long-term effect of running rather than an acute one: the running mice were kept from exercising for a day before the stress test (a cold-water swim), and they still proved less reactive to it than sedentary mice.
Physical exercise is known to reduce anxiety. The ventral hippocampus has been linked to anxiety regulation but the effects of running on this subregion of the hippocampus have been incompletely explored. Here, we investigated the effects of cold water stress on the hippocampus of sedentary and runner mice and found that while stress increases expression of the protein products of the immediate early genes c-fos and arc in new and mature granule neurons in sedentary mice, it has no such effect in runners. We further showed that running enhances local inhibitory mechanisms in the hippocampus, including increases in stress-induced activation of hippocampal interneurons, expression of vesicular GABA transporter (vGAT), and extracellular GABA release during cold water swim stress. Finally, blocking GABAA receptors in the ventral hippocampus, but not the dorsal hippocampus, with the antagonist bicuculline, reverses the anxiolytic effect of running. Together, these results suggest that running improves anxiety regulation by engaging local inhibitory mechanisms in the ventral hippocampus.

Tuesday, April 11, 2017

The calming effect of breathing.

Sheikhbahaei and Smith contribute a Perspective article in Science on the work of Yackle et al. in the same issue. The first bit of their Perspective:
Breathing is one of the perpetual rhythms of life that is often taken for granted, its apparent simplicity belying the complex neural machinery involved. This behavior is more complicated than just producing inspiration, as breathing is integrated with many other motor functions such as vocalization, orofacial motor behaviors, emotional expression (laughing and crying), and locomotion. In addition, cognition can strongly influence breathing. Conscious breathing during yoga, meditation, or psychotherapy can modulate emotion, arousal state, or stress. Therefore, understanding the links between breathing behavior, brain arousal state, and higher-order brain activity is of great interest...Yackle et al. identify an apparently specialized, molecularly identifiable, small subset of ∼350 neurons in the mouse brain that forms a circuit for transmitting information about respiratory activity to other central nervous system neurons, specifically with a group of noradrenergic neurons in the locus coeruleus (LC) in the brainstem, that influences arousal state. This finding provides new insight into how the motor act of breathing can influence higher-order brain functions.
The Yackle et al. abstract:
Slow, controlled breathing has been used for centuries to promote mental calming, and it is used clinically to suppress excessive arousal such as panic attacks. However, the physiological and neural basis of the relationship between breathing and higher-order brain activity is unknown. We found a neuronal subpopulation in the mouse preBötzinger complex (preBötC), the primary breathing rhythm generator, which regulates the balance between calm and arousal behaviors. Conditional, bilateral genetic ablation of the ~175 Cdh9/Dbx1 double-positive preBötC neurons in adult mice left breathing intact but increased calm behaviors and decreased time in aroused states. These neurons project to, synapse on, and positively regulate noradrenergic neurons in the locus coeruleus, a brain center implicated in attention, arousal, and panic that projects throughout the brain.

Monday, April 10, 2017

Brain correlates of information virality

Scholz et al. show that activity in brain areas associated with value, self and social cognition correlates with internet sharing of articles, reflecting how people express themselves in positive ways to strengthen their social bonds.

Significance
Why do humans share information with others? Large-scale sharing is one of the most prominent social phenomena of the 21st century, with roots in the oldest forms of communication. We argue that expectations of self-related and social consequences of sharing are integrated into a domain-general value signal, representing the value of information sharing, which translates into population-level virality. We analyzed brain responses to New York Times articles in two separate groups of people to predict objectively logged sharing of those same articles around the world (virality). Converging evidence from the two studies supports a unifying, parsimonious neurocognitive framework of mechanisms underlying health news virality; these results may help advance theory, improve predictive models, and inform new approaches to effective intervention.
Abstract
Information sharing is an integral part of human interaction that serves to build social relationships and affects attitudes and behaviors in individuals and large groups. We present a unifying neurocognitive framework of mechanisms underlying information sharing at scale (virality). We argue that expectations regarding self-related and social consequences of sharing (e.g., in the form of potential for self-enhancement or social approval) are integrated into a domain-general value signal that encodes the value of sharing a piece of information. This value signal translates into population-level virality. In two studies (n = 41 and 39 participants), we tested these hypotheses using functional neuroimaging. Neural activity in response to 80 New York Times articles was observed in theory-driven regions of interest associated with value, self, and social cognitions. This activity then was linked to objectively logged population-level data encompassing n = 117,611 internet shares of the articles. In both studies, activity in neural regions associated with self-related and social cognition was indirectly related to population-level sharing through increased neural activation in the brain's value system. Neural activity further predicted population-level outcomes over and above the variance explained by article characteristics and commonly used self-report measures of sharing intentions. This parsimonious framework may help advance theory, improve predictive models, and inform new approaches to effective intervention. More broadly, these data shed light on the core functions of sharing—to express ourselves in positive ways and to strengthen our social bonds.
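As a technical aside from me (not from the paper), the “indirectly related … through” wording in the abstract refers to a mediation-style analysis: self/social-region activity predicts value-system activity, which in turn predicts sharing. A minimal sketch of that product-of-coefficients logic, using synthetic stand-in numbers rather than the authors’ data or pipeline, might look like this:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_articles = 80
self_social = rng.normal(size=n_articles)                         # ROI activity per article (synthetic)
value = 0.6 * self_social + rng.normal(scale=0.5, size=n_articles)
log_shares = 0.8 * value + rng.normal(scale=0.5, size=n_articles)

# Path a: predictor -> mediator.  Path b: mediator -> outcome, controlling for the predictor.
a = sm.OLS(value, sm.add_constant(self_social)).fit().params[1]
b = sm.OLS(log_shares, sm.add_constant(np.column_stack([value, self_social]))).fit().params[1]
print("Indirect (mediated) effect, a * b:", a * b)
```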

Friday, April 07, 2017

Three sources of cancer - the importance of “bad luck”

Tomasetti and Vogelstein raised a storm several years ago by claiming that about 65% of the variation in cancer risk among tissues is explained not by inheritance or environmental factors but by mutations arising from normal stem cell divisions in the tissues examined. Now they have provided further evidence that this pattern is not specific to the United States. Here is a summary of, and the abstract from, their more recent paper:

Cancer and the unavoidable R factor
Most textbooks attribute cancer-causing mutations to two major sources: inherited and environmental factors. A recent study highlighted the prominent role in cancer of replicative (R) mutations that arise from a third source: unavoidable errors associated with DNA replication. Tomasetti et al. developed a method for determining the proportions of cancer-causing mutations that result from inherited, environmental, and replicative factors. They found that a substantial fraction of cancer driver gene mutations are indeed due to replicative factors. The results are consistent with epidemiological estimates of the fraction of preventable cancers.
Abstract
Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors (R). We studied the relationship between the number of normal stem cell divisions and the risk of 17 cancer types in 69 countries throughout the world. The data revealed a strong correlation (median = 0.80) between cancer incidence and normal stem cell divisions in all countries, regardless of their environment. The major role of R mutations in cancer etiology was supported by an independent approach, based solely on cancer genome sequencing and epidemiological data, which suggested that R mutations are responsible for two-thirds of the mutations in human cancers. All of these results are consistent with epidemiological estimates of the fraction of cancers that can be prevented by changes in the environment. Moreover, they accentuate the importance of early detection and intervention to reduce deaths from the many cancers arising from unavoidable R mutations.

Thursday, April 06, 2017

How "you" makes meaning.

Orvell et al. do some experiments on our use of the generic “you” rather than the first-person pronoun “I.”
“You” is one of the most common words in the English language. Although it typically refers to the person addressed (“How are you?”), “you” is also used to make timeless statements about people in general (“You win some, you lose some.”). Here, we demonstrate that this ubiquitous but understudied linguistic device, known as “generic-you,” has important implications for how people derive meaning from experience. Across six experiments, we found that generic-you is used to express norms in both ordinary and emotional contexts and that producing generic-you when reflecting on negative experiences allows people to “normalize” their experience by extending it beyond the self. In this way, a simple linguistic device serves a powerful meaning-making function.

Wednesday, April 05, 2017

Religiosity and social support

I found this article by Eleanor Power to be an interesting read. Here is her abstract:
In recent years, scientists based in a variety of disciplines have attempted to explain the evolutionary origins of religious belief and practice. Although they have focused on different aspects of the religious system, they consistently highlight the strong association between religiosity and prosocial behaviour (acts that benefit others). This association has been central to the argument that religious prosociality played an important role in the sociocultural florescence of our species. But empirical work evaluating the link between religion and prosociality has been somewhat mixed. Here, I use detailed, ethnographically informed data chronicling the religious practice and social support networks of the residents of two villages in South India to evaluate whether those who evince greater religiosity are more likely to undertake acts that benefit others. Exponential random graph models reveal that individuals who worship regularly and carry out greater and costlier public religious acts are more likely to provide others with support of all types. Those individuals are themselves better able to call on support, having a greater likelihood of reciprocal relationships. These results suggest that religious practice is taken as a signal of trustworthiness, generosity and prosociality, leading village residents to establish supportive, often reciprocal relationships with such individuals.

Tuesday, April 04, 2017

Wiser than the crowd.

In a summary in Nature Human Behavior, Kousta points to work by Prelec et al. The summary:
The notion that the average judgment of a large group is more accurate than that of any individual, including experts, is widely accepted and influential. This ‘wisdom of the crowd’ principle, however, has serious limitations, as it is biased against the latest knowledge that is not widely shared.
Dražen Prelec and colleagues propose an alternative principle — the ‘surprisingly popular’ principle — that requires people to answer a question and also predict how others will answer it. By selecting the answer that is more popular than people predict, the surprisingly popular algorithm outperforms the wisdom of crowds. To understand why it works, think of a scenario where the correct answer is mostly known by experts. While those who do not know the correct answer will incorrectly predict that their answer will be the most popular, those who know the correct answer — the experts — also know it is not widely known and hence will predict that the incorrect answer will prevail. The authors formalize and test the surprisingly popular principle in a series of studies that demonstrate that it yields more accurate answers than an algorithm relying on the ‘democratic vote’.
Polling people for their views as well as their predictions of the views of others offers a powerful tool for allowing expert knowledge to win out when popular views are incorrect.
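To make the principle concrete, here is a minimal sketch of my own (not the authors’ code) of the surprisingly popular rule in Python: pick the answer whose actual share of votes most exceeds the share that respondents, on average, predicted it would get.

```python
def surprisingly_popular(votes, predicted_shares):
    """votes: answer -> count of people giving that answer.
    predicted_shares: answer -> average share respondents predicted it would get."""
    total = sum(votes.values())
    actual = {a: n / total for a, n in votes.items()}
    return max(actual, key=lambda a: actual[a] - predicted_shares.get(a, 0.0))

# "Is Philadelphia the capital of Pennsylvania?"  Most people wrongly say "yes",
# and nearly everyone predicts "yes" will dominate; the correct answer ("no",
# the capital is Harrisburg) is therefore more popular than predicted.
votes = {"yes": 65, "no": 35}
predicted = {"yes": 0.80, "no": 0.20}
print(surprisingly_popular(votes, predicted))   # -> "no"
```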

Monday, April 03, 2017

Several takes on extending our lives.

In spite of the fact that I am unsympathetic to efforts to extend our lifespan, I want to pass on several recent articles on the effort. Tad Friend writes an excellent article on Silicon Valley money supporting a variety of efforts to let us attain eternal life; Baar et al. find that an anti-aging peptide that causes the apoptosis (death) of senescent cells reverses symptoms of aging; Li et al. show that NAD+ directly regulates protein-protein interactions, which may protect against cancer, radiation, and aging; and Rich Handy points to several pieces of research, including the Baar et al. work showing that the peptide restores fitness, hair density, and renal function in both fast-aging and naturally aged mice.

Friday, March 31, 2017

Preverbal foundations of human fairness

I want to point to two articles in the second issue of Nature Human Behavior. One is a review by McAuliffe et al.:
New behavioural and neuroscientific evidence on the development of fairness behaviours demonstrates that the signatures of human fairness can be traced into childhood. Children make sacrifices for fairness (1) when they have less than others, (2) when others have been unfair and (3) when they have more than others. The latter two responses mark a critical departure from what is observed in other species because they enable fairness to be upheld even when doing so goes against self-interest. This new work can be fruitfully combined with insights from cognitive neuroscience to understand the mechanisms of developmental change.
And the second is interesting work on preverbal infants from Kanakogi et al.:
Protective interventions by a third party on the behalf of others are generally admired, and as such are associated with our notions of morality, justice and heroism. Indeed, stories involving such third-party interventions have pervaded popular culture throughout recorded human history, in myths, books and movies. The current developmental picture is that we begin to engage in this type of intervention by preschool age. For instance, 3-year-old children intervene in harmful interactions to protect victims from bullies, and furthermore, not only punish wrongdoers but also give priority to helping the victim. It remains unknown, however, when we begin to affirm such interventions performed by others. Here we reveal these developmental origins in 6- and 10-month old infants (N = 132). After watching aggressive interactions involving a third-party agent who either interfered or did not, 6-month-old infants preferred the former. Subsequent experiments confirmed the psychological processes underlying such choices: 6-month-olds regarded the interfering agent to be protecting the victim from the aggressor, but only older infants affirmed such an intervention after considering the intentions of the interfering agent. These findings shed light upon the developmental trajectory of perceiving, understanding and performing protective third-party interventions, suggesting that our admiration for and emphasis upon such acts — so prevalent in thousands of stories across human cultures — is rooted within the preverbal infant’s mind.

Thursday, March 30, 2017

The best exercise for aging muscles

I want to pass on the message from this Gretchen Reynolds article, which points to work by Robinson et al. Their experiments were...
.... on the cells of 72 healthy but sedentary men and women who were 30 or younger or older than 64. After baseline measures were established for their aerobic fitness, their blood-sugar levels and the gene activity and mitochondrial health in their muscle cells, the volunteers were randomly assigned to a particular exercise regimen.
Some of them did vigorous weight training several times a week; some did brief interval training three times a week on stationary bicycles (pedaling hard for four minutes, resting for three and then repeating that sequence three more times); some rode stationary bikes at a moderate pace for 30 minutes a few times a week and lifted weights lightly on other days. A fourth group, the control, did not exercise.
After 12 weeks, the lab tests were repeated. In general, everyone experienced improvements in fitness and an ability to regulate blood sugar.
There were some unsurprising differences: The gains in muscle mass and strength were greater for those who exercised only with weights, while interval training had the strongest influence on endurance.
But more unexpected results were found in the biopsied muscle cells. Among the younger subjects who went through interval training, the activity levels had changed in 274 genes, compared with 170 genes for those who exercised more moderately and 74 for the weight lifters. Among the older cohort, almost 400 genes were working differently now, compared with 33 for the weight lifters and only 19 for the moderate exercisers.
Many of these affected genes, especially in the cells of the interval trainers, are believed to influence the ability of mitochondria to produce energy for muscle cells; the subjects who did the interval workouts showed increases in the number and health of their mitochondria — an impact that was particularly pronounced among the older cyclists.
It seems as if the decline in the cellular health of muscles associated with aging was “corrected” with exercise, especially if it was intense...

Wednesday, March 29, 2017

Brain systems specialized for knowing our place in the pecking order


From Kumaran et al.:

Highlights
•Social hierarchy learning is accounted for by a Bayesian inference scheme 
•Amygdala and hippocampus support domain-general social hierarchy learning 
•Medial prefrontal cortex selectively updates knowledge about one’s own hierarchy 
•Rank signals are generated by these neural structures in the absence of task demands
Summary
Knowledge about social hierarchies organizes human behavior, yet we understand little about the underlying computations. Here we show that a Bayesian inference scheme, which tracks the power of individuals, better captures behavioral and neural data compared with a reinforcement learning model inspired by rating systems used in games such as chess. We provide evidence that the medial prefrontal cortex (MPFC) selectively mediates the updating of knowledge about one’s own hierarchy, as opposed to that of another individual, a process that underpinned successful performance and involved functional interactions with the amygdala and hippocampus. In contrast, we observed domain-general coding of rank in the amygdala and hippocampus, even when the task did not require it. Our findings reveal the computations underlying a core aspect of social cognition and provide new evidence that self-relevant information may indeed be afforded a unique representational status in the brain.
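As an illustrative aside of my own (not the authors’ actual models), the contrast the summary draws between a chess-style rating system and a Bayesian inference scheme can be sketched in a few lines of Python: an Elo update nudges point estimates of rank, while a Bayesian treatment maintains a full posterior over the latent "power" difference between two individuals. All parameter values here are arbitrary.

```python
import math

def elo_update(r_a, r_b, a_won, k=32.0):
    """Chess-style rating update: nudge point estimates after one contest."""
    expect_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expect_a)
    return r_a + delta, r_b - delta

def bayes_posterior(outcomes, grid=None):
    """Posterior over d = power(A) - power(B), given a list of True/False
    (did A win?), using a flat prior and a logistic likelihood P(A wins | d)."""
    if grid is None:
        grid = [x / 10.0 for x in range(-50, 51)]   # candidate values of d
    post = [1.0] * len(grid)
    for a_won in outcomes:
        for i, d in enumerate(grid):
            p = 1.0 / (1.0 + math.exp(-d))
            post[i] *= p if a_won else (1.0 - p)
    z = sum(post)
    return grid, [p / z for p in post]

# A beats B three times and loses once; the posterior peaks near d = ln(3) ~ 1.1.
grid, post = bayes_posterior([True, True, True, False])
best = max(range(len(post)), key=post.__getitem__)
print("Most probable power difference:", grid[best])
```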

Tuesday, March 28, 2017

Termite castles, human minds, and Daniel Dennett.

After reading through Rothman’s New Yorker article on Daniel Dennett, I downloaded Dennett’s latest book, “From Bacteria to Bach and Back,” to check out his bottom lines, which should be familiar to readers of MindBlog. (In the 1990s, when I was teaching my Biology of Mind course at the Univ. of Wisconsin, I invited Dennett to give a lecture there.)

I was surprised to find limited or no references to the work of major figures such as Thomas Metzinger, Michael Graziano, Antonio Damasio, and others. The ideas in Chapter 14, “Consciousness as an Evolved User-Illusion,” have been lucidly outlined earlier in Metzinger’s book “The Ego Tunnel” and in Graziano’s “Consciousness and the Social Brain.” (Academics striving to be the most prominent in their field are not known for noting the efforts of their competitors.)

The strongest sections in the book are his explanations of the work and ideas of others. I want to pass on a few chunks. The first is from Chapter 14:
…according to the arguments advanced by the ethologist and roboticist David McFarland (1989), “Communication is the only behavior that requires an organism to self-monitor its own control system.” Organisms can very effectively control themselves by a collection of competing but “myopic” task controllers, each activated by a condition (hunger or some other need, sensed opportunity, built-in priority ranking, and so on). When a controller’s condition outweighs the conditions of the currently active task controller, it interrupts it and takes charge temporarily. (The “pandemonium model” by Oliver Selfridge [1959] is the ancestor of many later models.) Goals are represented only tacitly, in the feedback loops that guide each task controller, but without any global or higher level representation. Evolution will tend to optimize the interrupt dynamics of these modules, and nobody’s the wiser. That is, there doesn’t have to be anybody home to be wiser! Communication, McFarland claims, is the behavioral innovation which changes all that. Communication requires a central clearing house of sorts in order to buffer the organism from revealing too much about its current state to competitive organisms. As Dawkins and Krebs (1978) showed, in order to understand the evolution of communication we need to see it as grounded in manipulation rather than as purely cooperative behavior. An organism that has no poker face, that “communicates state” directly to all hearers, is a sitting duck, and will soon be extinct (von Neumann and Morgenstern 1944). What must evolve to prevent this exposure is a private, proprietary communication-control buffer that creates opportunities for guided deception— and, coincidentally, opportunities for self-deception (Trivers 1985)— by creating, for the first time in the evolution of nervous systems, explicit and more globally accessible representations of its current state, representations that are detachable from the tasks they represent, so that deceptive behaviors can be formulated and controlled without interfering with the control of other behaviors.
It is important to realize that by communication, McFarland does not mean specifically linguistic communication (which is ours alone), but strategic communication, which opens up the crucial space between one’s actual goals and intentions and the goals and intentions one attempts to communicate to an audience. There is no doubt that many species are genetically equipped with relatively simple communication behaviors (Hauser 1996), such as stotting, alarm calls, and territorial marking and defense. Stereotypical deception, such as bluffing in an aggressive encounter, is common, but a more productive and versatile talent for deception requires McFarland’s private workspace. For a century and more philosophers have stressed the “privacy” of our inner thoughts, but seldom have they bothered to ask why this is such a good design feature. (An occupational blindness of many philosophers: taking the manifest image as simply given and never asking what it might have been given to us for.)
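McFarland’s picture of control by competing, “myopic” task controllers with no global planner is easy to caricature in code. Here is a toy sketch of my own (not from Dennett or McFarland), in which whichever controller’s condition is currently most urgent interrupts the rest and acts; the state variables and weights are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class TaskController:
    name: str
    urgency: Callable[[Dict[str, float]], float]   # reads the organism's current state
    act: Callable[[Dict[str, float]], None]        # acts so as to reduce its own need

def step(state: Dict[str, float], controllers):
    # Whichever controller's condition currently "outweighs" the others takes charge.
    winner = max(controllers, key=lambda c: c.urgency(state))
    winner.act(state)
    return winner.name

state = {"hunger": 0.7, "thirst": 0.4, "threat": 0.1}
controllers = [
    TaskController("feed",  lambda s: s["hunger"],     lambda s: s.update(hunger=max(0.0, s["hunger"] - 0.3))),
    TaskController("drink", lambda s: s["thirst"],     lambda s: s.update(thirst=max(0.0, s["thirst"] - 0.3))),
    TaskController("flee",  lambda s: 5 * s["threat"], lambda s: s.update(threat=0.0)),
]

for _ in range(4):
    print(step(state, controllers), state)   # no global plan, just interrupt dynamics
```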
The second chunk I pass on is from the very end of the book, describing Seabright’s ideas:
Seabright compares our civilization with a termite castle. Both are artifacts, marvels of ingenious design piled on ingenious design, towering over the supporting terrain, the work of vastly many individuals acting in concert. Both are thus by-products of the evolutionary processes that created and shaped those individuals, and in both cases, the design innovations that account for the remarkable resilience and efficiency observable were not the brain-children of individuals, but happy outcomes of the largely unwitting, myopic endeavors of those individuals, over many generations. But there are profound differences as well. Human cooperation is a delicate and remarkable phenomenon, quite unlike the almost mindless cooperation of termites, and indeed quite unprecedented in the natural world, a unique feature with a unique ancestry in evolution. It depends, as we have seen, on our ability to engage each other within the “space of reasons,” as Wilfrid Sellars put it. Cooperation depends, Seabright argues, on trust, a sort of almost invisible social glue that makes possible both great and terrible projects, and this trust is not, in fact, a “natural instinct” hard-wired by evolution into our brains. It is much too recent for that. Trust is a by-product of social conditions that are at once its enabling condition and its most important product. We have bootstrapped ourselves into the heady altitudes of modern civilization, and our natural emotions and other instinctual responses do not always serve our new circumstances. Civilization is a work in progress, and we abandon our attempt to understand it at our peril. Think of the termite castle. We human observers can appreciate its excellence and its complexity in ways that are quite beyond the nervous systems of its inhabitants. We can also aspire to achieving a similarly Olympian perspective on our own artifactual world, a feat only human beings could imagine. If we don’t succeed, we risk dismantling our precious creations in spite of our best intentions. Evolution in two realms, genetic and cultural, has created in us the capacity to know ourselves. But in spite of several millennia of ever-expanding intelligent design, we still are just staying afloat in a flood of puzzles and problems, many of them created by our own efforts of comprehension, and there are dangers that could cut short our quest before we— or our descendants— can satisfy our ravenous curiosity.
And, from Dennett’s wrap-up summary of the book:
Returning to the puzzle about how brains made of billions of neurons without any top-down control system could ever develop into human-style minds, we explored the prospect of decentralized, distributed control by neurons equipped to fend for themselves, including as one possibility feral neurons, released from their previous role as docile, domesticated servants under the selection pressure created by a new environmental feature: cultural invaders. Words striving to reproduce, and other memes, would provoke adaptations, such as revisions in brain structure in coevolutionary response. Once cultural transmission was secured as the chief behavioral innovation of our species, it not only triggered important changes in neural architecture but also added novelty to the environment— in the form of thousands of Gibsonian affordances— that enriched the ontologies of human beings and provided in turn further selection pressure in favor of adaptations— thinking tools— for keeping track of all these new opportunities. Cultural evolution itself evolved away from undirected or “random” searches toward more effective design processes, foresighted and purposeful and dependent on the comprehension of agents: intelligent designers. For human comprehension, a huge array of thinking tools is required. Cultural evolution de-Darwinized itself with its own fruits. 
This vantage point lets us see the manifest image, in Wilfrid Sellars’s useful terminology, as a special kind of artifact, partly genetically designed and partly culturally designed, a particularly effective user-illusion for helping time-pressured organisms move adroitly through life, availing themselves of (over) simplifications that create an image of the world we live in that is somewhat in tension with the scientific image to which we must revert in order to explain the emergence of the manifest image. Here we encounter yet another revolutionary inversion of reasoning, in David Hume’s account of our knowledge of causation. We can then see human consciousness as a user-illusion, not rendered in the Cartesian Theater (which does not exist) but constituted by the representational activities of the brain coupled with the appropriate reactions to those activities (“ and then what happens?”). 
This closes the gap, the Cartesian wound, but only a sketch of this all-important unification is clear at this time. The sketch has enough detail, however, to reveal that human minds, however intelligent and comprehending, are not the most powerful imaginable cognitive systems, and our intelligent designers have now made dramatic progress in creating machine learning systems that use bottom-up processes to demonstrate once again the truth of Orgel’s Second Rule: Evolution is cleverer than you are. Once we appreciate the universality of the Darwinian perspective, we realize that our current state, both individually and as societies, is both imperfect and impermanent. We may well someday return the planet to our bacterial cousins and their modest, bottom-up styles of design improvement. Or we may continue to thrive, in an environment we have created with the help of artifacts that do most of the heavy cognitive lifting their own way, in an age of post-intelligent design. There is not just coevolution between memes and genes; there is codependence between our minds’ top-down reasoning abilities and the bottom-up uncomprehending talents of our animal brains. And if our future follows the trajectory of our past— something that is partly in our control— our artificial intelligences will continue to be dependent on us even as we become more warily dependent on them.
The above excerpts are from: Dennett, Daniel C. (2017-02-07). From Bacteria to Bach and Back: The Evolution of Minds (Kindle Locations 6819-6840). W. W. Norton & Company. Kindle Edition.

Monday, March 27, 2017

Ownership of an artificial limb induced by electrical brain stimulation

From Collins et al.:

Significance
Creating a prosthetic device that feels like one’s own limb is a major challenge in applied neuroscience. We show that ownership of an artificial hand can be induced via electrical stimulation of the hand somatosensory cortex in synchrony with touches applied to a prosthetic hand in full view. These findings suggest that the human brain can integrate “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body.
Abstract
Replacing the function of a missing or paralyzed limb with a prosthetic device that acts and feels like one’s own limb is a major goal in applied neuroscience. Recent studies in nonhuman primates have shown that motor control and sensory feedback can be achieved by connecting sensors in a robotic arm to electrodes implanted in the brain. However, it remains unknown whether electrical brain stimulation can be used to create a sense of ownership of an artificial limb. In this study on two human subjects, we show that ownership of an artificial hand can be induced via the electrical stimulation of the hand section of the somatosensory (SI) cortex in synchrony with touches applied to a rubber hand. Importantly, the illusion was not elicited when the electrical stimulation was delivered asynchronously or to a portion of the SI cortex representing a body part other than the hand, suggesting that multisensory integration according to basic spatial and temporal congruence rules is the underlying mechanism of the illusion. These findings show that the brain is capable of integrating “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body. Thus, they serve as a proof of concept that electrical brain stimulation can be used to “bypass” the peripheral nervous system to induce multisensory illusions and ownership of artificial body parts, which has important implications for patients who lack peripheral sensory input due to spinal cord or nerve lesions.

Friday, March 24, 2017

Predicting the knowledge–recklessness distinction in the human brain

Important work from Vilares et al., presented in an open access paper whose fMRI results are shown in a series of figures, shows that brain imaging can determine, with high accuracy, on which side of a legally defined boundary a person's mental state lies.

Significance
Because criminal statutes demand it, juries often must assess criminal intent by determining which of two legally defined mental states a defendant was in when committing a crime. For instance, did the defendant know he was carrying drugs, or was he merely aware of a risk that he was? Legal scholars have debated whether that conceptual distinction, drawn by law, mapped meaningfully onto any psychological reality. This study uses neuroimaging and machine-learning techniques to reveal different brain activities correlated with these two mental states. Moreover, the study provides a proof of principle that brain imaging can determine, with high accuracy, on which side of a legally defined boundary a person's mental state lies.
Abstract
Criminal convictions require proof that a prohibited act was performed in a statutorily specified mental state. Different legal consequences, including greater punishments, are mandated for those who act in a state of knowledge, compared with a state of recklessness. Existing research, however, suggests people have trouble classifying defendants as knowing, rather than reckless, even when instructed on the relevant legal criteria. We used a machine-learning technique on brain imaging data to predict, with high accuracy, which mental state our participants were in. This predictive ability depended on both the magnitude of the risks and the amount of information about those risks possessed by the participants. Our results provide neural evidence of a detectable difference in the mental state of knowledge in contrast to recklessness and suggest, as a proof of principle, the possibility of inferring from brain data in which legally relevant category a person belongs. Some potential legal implications of this result are discussed.
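For readers curious what “a machine-learning technique on brain imaging data” amounts to in practice, here is a generic, hedged sketch (not the authors’ actual pipeline): a cross-validated classifier trained on per-trial feature vectors to separate the two mental-state conditions. The data below are random stand-ins, so accuracy should hover near chance; with real fMRI features, above-chance accuracy is the evidence of a detectable difference.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 80, 500
X = rng.normal(size=(n_trials, n_features))   # stand-in for per-trial fMRI features
y = rng.integers(0, 2, size=n_trials)         # 0 = "reckless" trial, 1 = "knowing" trial

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print("Cross-validated accuracy:", scores.mean())
```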

Thursday, March 23, 2017

Warping reality in the era of Trump - some interesting essays

I try not to pay attention, feeling worn down by the continual bombardment of alternative realities presented by today's media, but I have read and enjoyed the following articles recently and want to pass them on to MindBlog readers.

How to Escape Your Political Bubble for a Clearer View Amanda Hess lists a number of smartphone apps, and Twitter and Facebook plug-ins, that expose you to views opposite to those that normally predominate during your internet viewing.

Trump’s Method, Our Madness Joel Whitebook distinguishes neurosis, in which individuals break with a portion of reality they find intolerable, from psychosis, in which individuals break globally from reality as a whole, and construct an alternative, delusional, "magical" reality of their own.
Trumpism as a social-psychological phenomenon has aspects reminiscent of psychosis, in that it entails a systematic — and it seems likely intentional — attack on our relation to reality...anti-fact campaigns, such as the effort led by archconservatives like the Koch brothers to discredit scientific research on climate change, remained within the register of truth. They were forced to act as if facts and reality were still in place, even if only to subvert them...Donald Trump and his operatives are up to something qualitatively different. Armed with the weaponized resources of social media, Trump has radicalized this strategy in a way that aims to subvert our relation to reality in general. To assert that there are “alternative facts,” as his adviser Kellyanne Conway did, is to assert that there is an alternative, delusional, reality in which those “facts” and opinions most convenient in supporting Trump’s policies and worldview hold sway.
On the hopeful side, there has recently been a robust and energetic attempt not only by members of the press, but also of the legal profession and by average citizens to call out and counter Trumpism’s attack on reality.
But on the less encouraging side, clinical experience teaches us that work with more disturbed patients can be time-consuming, exhausting and has been known to lead to burnout. The fear here is that if the 45th president can maintain this manic pace, he may wear down the resistance and Trump-exhaustion will set in, causing the disoriented experience of reality he has created to grow ever stronger and more insidious.

Are Liberals On The Wrong Side Of History? Adam Gopnik does his usual erudite job in reviewing three books that deal with the continuing historical clash between the elitist progressivism of the enlightenment (Voltaire) and the romantic search for old-fashioned community (Rousseau). A few clips:
A reader can’t help noting that anti-liberal polemics ... always have more force and gusto than liberalism’s defenses have ever had. Best-sellers tend to have big pictures, secret histories, charismatic characters, guilty parties, plots discovered, occult secrets unlocked. Voltaire’s done it! The Singularity is upon us! The World is flat! Since scientific liberalism ... believes that history doesn’t have a preordained plot, and that the individual case, not the enveloping essence, is the only quantum that history provides, it is hard for it to dramatize itself in quite this way. The middle way is not the way of melodrama.
Beneath all the anti-liberal rhetoric is an unquestioned insistence: that the way in which our societies seem to have gone wrong is evidence of a fatal flaw somewhere in the systems we’ve inherited. This is so quickly agreed on and so widely accepted that it seems perverse to dispute it. But do causes and effects work quite so neatly, or do we search for a cause because the effect is upon us? We can make a false idol of causality. Looking at the rise of Trump, the fall of Europe, one sees a handful of contingencies that, arriving in a slightly different way, would have broken a very different pane.
...the dynamic of cosmopolitanism and nostalgic reaction is permanent and recursive...We live, certainly, in societies that are in many ways inequitable, unfair, capriciously oppressive, occasionally murderous, frequently imperial—but, by historical standards, much less so than any other societies known in the history of mankind. We may angrily debate the threat to transgender bathroom access, but no other society in our long, sad history has ever attempted to enshrine the civil rights of the gender nonconforming...anger...seems based not on any acute experience of inequality or injustice but on deep racial and ethnic and cultural panics that repeatedly rise and fall in human affairs, largely indifferent to the circumstances of the time in which they summit. We use the metaphor of waves that rise and fall in societies, perhaps forgetting that the actual waves of the ocean are purely opportunistic, small irregularities in water that, snagging a fortunate gust, rise and break like monsters, for no greater cause than their own accidental invention.
Depressed by Politics? Just Let Go. Arthur Brooks:
I analyzed the 2014 data from the General Social Survey collected by the National Opinion Research Center at the University of Chicago to see how attention to politics is associated with life satisfaction. The results were significant. Even after controlling for income, education, age, gender, race, marital status and political views, being “very interested in politics” drove up the likelihood of reporting being “not too happy” about life by about eight percentage points...behavioral science shows that the link might just be causal through what psychologists call “external locus of control,” which refers to a belief that external forces (such as politics) have a large impact on one’s life...An external locus of control brings unhappiness. Three social psychologists showed this in a famous 2004 paper published in the journal Personality and Social Psychology Review. Studying surveys of college students over several decades and controlling for life circumstances and demographics, they compared people who associated their destinies with luck and outside forces with those who believed they were more in control of their lives. They conclude that an external locus is correlated with worse academic achievement, more stress and higher levels of depression.
So what is the solution? First, find a way to bring politics more into your sphere of influence so it no longer qualifies as an external locus of control. Simply clicking through angry political Facebook posts by people with whom you already agree will most likely worsen your mood and help no one. Instead, get involved in a tangible way — volunteering, donating money or even running for office. This transforms you from victim of political circumstance to problem solver.
Second, pay less attention to politics as entertainment. Read the news once a day, as opposed to hitting your Twitter feed 50 times a day like a chimp in a 1950s experiment on the self-administration of cocaine. Will you get the very latest goings on in Washington in real time? No. Will that make you a more boring person? No. Trust me here — you will be less boring to others. But more important, you will become happier.
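For what it’s worth, Brooks’s phrase “after controlling for income, education, age…” describes a regression with covariates. A minimal sketch of that kind of analysis from me (with made-up variable names and synthetic data, not the actual General Social Survey fields) could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "not_too_happy": rng.integers(0, 2, n),              # 1 = reports being "not too happy"
    "very_interested_politics": rng.integers(0, 2, n),
    "income": rng.normal(50, 15, n),
    "education_years": rng.integers(8, 21, n),
    "age": rng.integers(18, 90, n),
})

model = smf.logit(
    "not_too_happy ~ very_interested_politics + income + education_years + age",
    data=df,
).fit(disp=False)
print(model.params["very_interested_politics"])           # association net of the controls
```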

Wednesday, March 22, 2017

Is the body the missing link for truly intelligent machines?

Medlock comments on efforts to achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterizes organic life. A clip from the end of his article:
...long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.
I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data – so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.
This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
Medlock’s comment that “for an AI algorithm, the process begins from scratch each time” may not be correct for newer AI approaches that use deep reinforcement learning or learning from examples.
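As a final aside from me, the “inductive transfer” research Medlock mentions typically looks, in current practice, like reusing a pretrained network and training only a small new output layer. A minimal PyTorch-style sketch (the model choice, layer sizes, and dummy data are arbitrary illustrations):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet: the "prior machine-learned knowledge".
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                      # freeze what was already learned

# Replace the final layer for a new two-class task (e.g., cat vs. not-cat).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of images and labels.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(x), y)
loss.backward()
optimizer.step()
```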