Monday, July 16, 2018

What is consciousness, and could machines have it?

I want to point to a lucid article by Dehaene, Lau, and Kouider that gives the clearest review I have seen of our current state of knowledge on the nature of human consciousness, which we need to define if we wish to consider the question of machines being conscious like us. Here is the abstract, followed by a few edited clips that attempt to communicate the core points (motivated readers can obtain the whole text by emailing me):
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
C1: Global availability
This corresponds to the transitive meaning of consciousness (as in “The driver is conscious of the light”)... We can recall it, act upon it, and speak about it. This sense is synonymous with “having the information in mind”; among the vast repertoire of thoughts that can become conscious at a given time, only that which is globally available constitutes the content of C1 consciousness.
C2: Self-monitoring
Another meaning of consciousness is reflexive. It refers to a self-referential relationship in which the cognitive system is able to monitor its own processing and obtain information about itself... This sense of consciousness corresponds to what is commonly called introspection, or what psychologists call “meta-cognition”—the ability to conceive and make use of internal representations of one’s own knowledge and abilities.
C0: Unconscious processing: Where most of our intelligence lies
...many computations involve neither C1 nor C2 and therefore are properly called “unconscious” ...Cognitive neuroscience confirms that complex computations such as face or speech recognition, chess-game evaluation, sentence parsing, and meaning extraction occur unconsciously in the human brain—under conditions that yield neither global reportability nor self-monitoring. The brain appears to operate, in part, as a juxtaposition of specialized processors or “modules” that operate nonconsciously and, we argue, correspond tightly to the operation of current feedforward deep-learning networks.
The phenomenon of priming illustrates the remarkable depth of unconscious processing...Subliminal digits, words, faces, or objects can be invariantly recognized and influence motor, semantic, and decision levels of processing. Neuroimaging methods reveal that the vast majority of brain areas can be activated nonconsciously...Subliminal priming generalizes across visual-auditory modalities...Even the semantic meaning of sensory input can be processed without awareness by the human brain.
...subliminal primes can influence prefrontal mechanisms of cognitive control involved in the selection of a task...Neural mechanisms of decision-making involve accumulating sensory evidence that affects the probability of the various choices until a threshold is attained. This accumulation of probabilistic knowledge continues to happen even with subliminal stimuli. Bayesian inference and evidence accumulation, which are cornerstone computations for AI, are basic unconscious mechanisms for humans.
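The accumulate-to-threshold mechanism described above can be written in a few lines of Python. This is my own toy illustration, not code from the paper, and the drift, noise, and threshold parameters are arbitrary: noisy evidence samples are summed until one of two decision bounds is crossed.

```python
import random

def accumulate_to_bound(drift, noise=0.5, threshold=5.0, seed=0, max_steps=10000):
    """Accumulate noisy evidence until one of two decision bounds is hit.
    drift > 0 favors option 'A'; the threshold sets how much evidence
    must pile up before a choice is committed."""
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise)
        if evidence >= threshold:
            return "A", step
        if evidence <= -threshold:
            return "B", step
    return "undecided", max_steps

choice, steps = accumulate_to_bound(drift=0.3)
```

The key point for the paper's argument is that nothing in this loop requires awareness: even subliminal stimuli can contribute increments of evidence to the running total.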
Reinforcement learning algorithms, which capture how humans and animals shape their future actions on the basis of history of past rewards, have excelled in attaining supra-human AI performance in several applications, such as playing Go. Remarkably, in humans, such learning appears to proceed even when the cues, reward, or motivation signals are presented below the consciousness threshold.
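The core reward-learning computation the authors refer to can be illustrated with a minimal Rescorla-Wagner / temporal-difference style value update (my own sketch; the learning rate is arbitrary): the value estimate is nudged toward each received reward by a fraction of the prediction error.

```python
def rw_update(value, reward, alpha=0.1):
    """One Rescorla-Wagner / temporal-difference style update: nudge the
    estimated value toward the received reward by a fraction (alpha)
    of the prediction error."""
    return value + alpha * (reward - value)

# Repeated rewards drive the estimate toward the true reward value --
# a computation that, per the paper, proceeds in humans even when the
# cues and rewards are presented below the consciousness threshold.
v = 0.0
for _ in range(100):
    v = rw_update(v, reward=1.0)
```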
What additional computations are required for conscious processing?

C1: Global availability of relevant information
The need for integration and coordination. Integrating all of the available evidence to converge toward a single decision is a computational requirement that, we contend, must be faced by any animal or autonomous AI system and corresponds to our first functional definition of consciousness: global availability (C1)...Such decision-making requires a sophisticated architecture for (i) efficiently pooling over all available sources of information, including multisensory and memory cues; (ii) considering the available options and selecting the best one on the basis of this large information pool; (iii) sticking to this choice over time; and (iv) coordinating all internal and external processes toward the achievement of that goal.
Consciousness as access to an internal global workspace. We hypothesize that...On top of a deep hierarchy of specialized modules, a “global neuronal workspace,” with limited capacity, evolved to select a piece of information, hold it over time, and share it across modules. We call “conscious” whichever representation, at a given time, wins the competition for access to this mental arena and gets selected for global sharing and decision-making.
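Here is a deliberately simple sketch of the workspace idea (my own illustration; the module names and salience values are invented): specialized modules each propose content, the single highest-salience item wins the competition for the capacity-limited workspace, and the winning content is then broadcast back to all modules.

```python
def workspace_broadcast(proposals):
    """proposals: dict mapping module name -> (salience, content).
    The single highest-salience content wins the competition for the
    capacity-limited workspace ('becomes conscious') and is then
    shared with every module, including the losers."""
    winner = max(proposals, key=lambda m: proposals[m][0])
    content = proposals[winner][1]
    return winner, {m: content for m in proposals}

# Invented module names and salience values.
proposals = {
    "vision":   (0.9, "red light ahead"),
    "audition": (0.4, "radio chatter"),
    "memory":   (0.2, "route to work"),
}
winner, shared = workspace_broadcast(proposals)
# Every module now holds the winning content for further processing.
```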
Relation between consciousness and attention. William James described attention as “the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought”. This definition is close to what we mean by C1: the selection of a single piece of information for entry into the global workspace. There is, however, a clear-cut distinction between this final step, which corresponds to conscious access, and the previous stages of attentional selection, which can operate unconsciously...What we call attention is a hierarchical system of sieves that operate unconsciously. Such unconscious systems compute with probability distributions, but only a single sample, drawn from this probabilistic distribution, becomes conscious at a given time. We may become aware of several alternative interpretations, but only by sampling their unconscious distributions over time.
Evidence for all-or-none selection in a capacity-limited system. The primate brain comprises a conscious bottleneck and can only consciously access a single item at a time. For instance, rivaling pictures or ambiguous words are perceived in an all-or-none manner; at any given time, we subjectively perceive only a single interpretation out of many possible ones [even though the others continue to be processed unconsciously]...Brain imaging in humans and neuronal recordings in monkeys indicate that the conscious bottleneck is implemented by a network of neurons that is distributed through the cortex, but with a stronger emphasis on high-level associative areas. ... Single-cell recordings indicate that each specific conscious percept, such as a person’s face, is encoded by the all-or-none firing of a subset of neurons in high-level temporal and prefrontal cortices, whereas others remain silent...the stable, reproducible representation of high-quality information by a distributed activity pattern in higher cortical areas is a feature of conscious processing. Such transient “meta-stability” seems to be necessary for the nervous system to integrate information from a variety of modules and then broadcast it back to them, achieving flexible cross-module routing.
C1 consciousness in human and nonhuman animals. C1 consciousness is an elementary property that is present in human infants as well as in animals. Nonhuman primates exhibit similar visual illusions, attentional blink, and central capacity limits as human subjects.
C2: Self-monitoring
Whereas C1 consciousness reflects the capacity to access external information, consciousness in the second sense (C2) is characterized by the ability to reflexively represent oneself (“metacognition”).
A probabilistic sense of confidence. Confidence can be assessed nonverbally, either retrospectively, by measuring whether humans persist in their initial choice, or prospectively, by allowing them to opt out from a task without even attempting it. Both measures have been used in nonhuman animals to show that they too possess metacognitive abilities. By contrast, most current neural networks lack them: Although they can learn, they generally lack meta-knowledge of the reliability and limits of what has been learned...Magnetic resonance imaging (MRI) studies in humans and physiological recordings in primates and even in rats have specifically linked such confidence processing to the prefrontal cortex. Inactivation of the prefrontal cortex can induce a specific deficit in second-order (metacognitive) judgements while sparing performance on the first-order task. Thus, circuits in the prefrontal cortex may have evolved to monitor the performance of other brain processes.
Error detection: Reflecting on one’s own mistakes. ...just after responding, we sometimes realize that we made an error and change our mind. Error detection is reflected by two components of electroencephalography (EEG) activity: the error-related negativity (ERN) and the positivity upon error (Pe), which emerge in cingulate and medial prefrontal cortex just after a wrong response but before any feedback is received...A possibility compatible with the remarkable speed of error detection is that two parallel circuits, a low-level sensory-motor circuit and a higher-level intention circuit, operate on the same sensory data and signal an error whenever their conclusions diverge. Self-monitoring is such a basic ability that it is already present during infancy. The ERN, indicating error monitoring, is observed when 1-year-old infants make a wrong choice in a perceptual decision task.
Meta-memory. The term “meta-memory” was coined to capture the fact that humans report feelings of knowing, confidence, and doubts on their memories. ...Meta-memory is associated with prefrontal structures whose pharmacological inactivation leads to a metacognitive impairment while sparing memory performance itself. Metamemory is crucial to human learning and education by allowing learners to develop strategies such as increasing the amount of study or adapting the time allocated to memory encoding and rehearsal.
Reality monitoring. In addition to monitoring the quality of sensory and memory representations, the human brain must also distinguish self-generated versus externally driven representations - we can perceive things, but we also conjure them from imagination or memory...Neuroimaging studies have linked this kind of reality monitoring to the anterior prefrontal cortex.
Dissociations between C1 and C2
According to our analysis, C1 and C2 are largely orthogonal and complementary dimensions of what we call consciousness. On one side of this double dissociation, self-monitoring can exist for unreportable stimuli (C2 without C1). Automatic typing provides a good example: Subjects slow down after a typing mistake, even when they fail to consciously notice the error. Similarly, at the neural level, an ERN can occur for subjectively undetected errors. On the other side of this dissociation, consciously reportable contents sometimes fail to be accompanied with an adequate sense of confidence (C1 without C2). For instance, when we retrieve a memory, it pops into consciousness (C1) but sometimes without any accurate evaluation of its confidence (C2), leading to false memories.
Synergies between C1 and C2 consciousness
Because C1 and C2 are orthogonal, their joint possession may have synergistic benefits to organisms. In one direction, bringing probabilistic metacognitive information (C2) into the global workspace (C1) allows it to be held over time, integrated into explicit long-term reflection, and shared with others...In the converse direction, the possession of an explicit repertoire of one’s own abilities (C2) improves the efficiency with which C1 information is processed. During mental arithmetic, children can perform a C2-level evaluation of their available competences (for example, counting, adding, multiplying, or memory retrieval) and use this information to evaluate how to best face a given arithmetic problem.
Endowing machines with C1 and C2
[Note: I am not abstracting this section as I did the above descriptions of consciousness. It describes numerous approaches rising above the level of most present day machines to making machines able to perform C1 and C2 operations.]
Most present-day machine-learning systems are devoid of any self-monitoring; they compute (C0) without representing the extent and limits of their knowledge or the fact that others may have a different viewpoint than their own. There are a few exceptions: Bayesian networks or programs compute with probability distributions and therefore keep track of how likely they are to be correct. Even when the primary computation is performed by a classical CNN, and is therefore opaque to introspection, it is possible to train a second, hierarchically higher neural network to predict the first one’s performance.
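The “second, hierarchically higher network that predicts the first one's performance” can be illustrated with a toy stand-in (my own sketch, not the authors' architecture): a crude first-order classifier, plus a second-order model that learns, from the classifier's past hits and misses, how reliable it is in each region of the input space.

```python
import random

def first_classifier(x):
    """First-order (C0-style) classifier: a crude threshold rule with
    no representation of its own reliability."""
    return 1 if x > 0.5 else 0

def train_confidence_model(samples, n_bins=10):
    """Second-order model: estimate, for each region of the input space,
    how often the first classifier has been correct. A toy stand-in for
    training a second network on the first one's successes and failures."""
    hits = [0] * n_bins
    counts = [0] * n_bins
    for x, label in samples:
        b = min(int(x * n_bins), n_bins - 1)
        counts[b] += 1
        hits[b] += int(first_classifier(x) == label)
    return [hits[b] / counts[b] if counts[b] else 0.5 for b in range(n_bins)]

# Noisy toy task: true boundary at 0.5, labels flipped 30% of the time
# near the boundary, so the first classifier is unreliable there.
rng = random.Random(0)
samples = []
for _ in range(5000):
    x = rng.random()
    label = 1 if x > 0.5 else 0
    if abs(x - 0.5) < 0.15 and rng.random() < 0.3:
        label = 1 - label
    samples.append((x, label))

confidence = train_confidence_model(samples)
# confidence is high far from the boundary and lower near it -- the
# second-order model "knows" where the first-order one is unreliable.
```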
Concluding remarks
Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.
We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?
Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. For example, in humans, damage to the primary visual cortex may lead to a neurological condition called “blindsight,” in which the patients report being blind in the affected visual field. Remarkably, those patients can localize visual stimuli in their blind field but cannot report them (C1), nor can they effectively assess their likelihood of success (C2)—they believe that they are merely “guessing.” In this example, at least, subjective experience appears to cohere with possession of C1 and C2. Although centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions, the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.

Friday, July 13, 2018

Playing with proteins in virtual reality.

Much of my mental effort while I was doing laboratory research on the mechanisms of visual transduction (changing light into a nerve signal in our retinal rod and cone photoreceptor cells) was devoted to trying to visualize how proteins might interact with each other. I spent many hours using molecular model kits of color-coded plastic atoms one could plug together with flexible joints, like the Tinkertoys of my childhood. If only I had had the system now described by O'Connor et al.! Have a look at the video below showing the manipulation of molecular dynamics in a VR environment, and here is their abstract:
We describe a framework for interactive molecular dynamics in a multiuser virtual reality (VR) environment, combining rigorous cloud-mounted atomistic physics simulations with commodity VR hardware, which we have made accessible to readers. It allows users to visualize and sample, with atomic-level precision, the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment. A series of controlled studies, in which participants were tasked with a range of molecular manipulation goals (threading methane through a nanotube, changing helical screw sense, and tying a protein knot), quantitatively demonstrate that users within the interactive VR environment can complete sophisticated molecular modeling tasks more quickly than they can using conventional interfaces, especially for molecular pathways and structural transitions whose conformational choreographies are intrinsically three-dimensional. This framework should accelerate progress in nanoscale molecular engineering areas including conformational mapping, drug development, synthetic biology, and catalyst design. More broadly, our findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education.

Sampling molecular conformational dynamics in virtual reality from david glowacki on Vimeo.

Thursday, July 12, 2018

Increasing despair among poor Americans.

A survey by Goldman et al. of more than 4,600 American adults conducted in 1995-1996 and in 2011-2014 suggests that among individuals of low socioeconomic status, negative affect increased significantly between the two survey waves, and life satisfaction and psychological well-being decreased:

In the past few years, references to the opioid epidemic, drug poisonings, and associated feelings of despair among Americans, primarily working-class whites, have flooded the media, and related patterns of mortality have been of increasing interest to social scientists. Yet, despite recurring references to distress or despair in journalistic accounts and academic studies, there has been little analysis of whether psychological health among American adults has worsened over the past two decades. Here, we use data from national samples of adults in the mid-1990s and early 2010s to demonstrate increasing distress and declining well-being that was concentrated among low-socioeconomic-status individuals but spanned the age range from young to older adults.
Although there is little dispute about the impact of the US opioid epidemic on recent mortality, there is less consensus about whether trends reflect increasing despair among American adults. The issue is complicated by the absence of established scales or definitions of despair as well as a paucity of studies examining changes in psychological health, especially well-being, since the 1990s. We contribute evidence using two cross-sectional waves of the Midlife in the United States (MIDUS) study to assess changes in measures of psychological distress and well-being. These measures capture negative emotions such as sadness, hopelessness, and worthlessness, and positive emotions such as happiness, fulfillment, and life satisfaction. Most of the measures reveal increasing distress and decreasing well-being across the age span for those of low relative socioeconomic position, in contrast to little decline or modest improvement for persons of high relative position.

Wednesday, July 11, 2018

A fundamental advance in brain imaging techniques.

I want to pass on the abstract, along with a bit of text and two figures, from an article by Coalson, Van Essen, and Glasser that argues for a fundamental change in how functional cortical areas of the brain are recorded and reported. They demonstrate that surface-based parcellation is 3-fold more accurate than traditional volume-based parcellations:

Most human brain-imaging studies have traditionally used low-resolution images, inaccurate methods of cross-subject alignment, and extensive blurring. Recently, a high-resolution approach with more accurate alignment and minimized blurring was used by the Human Connectome Project to generate a multimodal map of human cortical areas in hundreds of individuals. Starting from these data, we systematically compared these two approaches, showing that the traditional approach is nearly three times worse than the Human Connectome Project’s improved approach in two objective measures of spatial localization of cortical areas. Furthermore, we demonstrate considerable challenges in comparing data across the two approaches and, as a result, argue that there is an urgent need for the field to adopt more accurate methods of data acquisition and analysis.
Localizing human brain functions is a long-standing goal in systems neuroscience. Toward this goal, neuroimaging studies have traditionally used volume-based smoothing, registered data to volume-based standard spaces, and reported results relative to volume-based parcellations. A novel 360-area surface-based cortical parcellation was recently generated using multimodal data from the Human Connectome Project, and a volume-based version of this parcellation has frequently been requested for use with traditional volume-based analyses. However, given the major methodological differences between traditional volumetric and Human Connectome Project-style processing, the utility and interpretability of such an altered parcellation must first be established. By starting from automatically generated individual-subject parcellations and processing them with different methodological approaches, we show that traditional processing steps, especially volume-based smoothing and registration, substantially degrade cortical area localization compared with surface-based approaches. We also show that surface-based registration using features closely tied to cortical areas, rather than to folding patterns alone, improves the alignment of areas, and that the benefits of high-resolution acquisitions are largely unexploited by traditional volume-based methods. Quantitatively, we show that the most common version of the traditional approach has spatial localization that is only 35% as good as the best surface-based method as assessed using two objective measures (peak areal probabilities and “captured area fraction” for maximum probability maps). Finally, we demonstrate that substantial challenges exist when attempting to accurately represent volume-based group analysis results on the surface, which has important implications for the interpretability of studies, both past and future, that use these volume-based methods.
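As I read them, the two objective measures named in the abstract could be computed from across-subject probabilistic maps roughly as follows. This is my own sketch with invented toy numbers, and the definitions are my reading of the paper's description: the peak areal probability is the maximum of each area's overlap map, and the captured area fraction is the share of an area's total probability that falls inside its own maximum-probability-map (MPM) parcel.

```python
def parcellation_quality(prob_maps):
    """prob_maps[a][v] = probability that vertex v belongs to area a
    (i.e., across-subject overlap). Returns two per-area measures:
      - peak areal probability: the maximum of each area's map;
      - captured area fraction: the share of each area's total
        probability falling inside its maximum-probability-map parcel."""
    n_areas = len(prob_maps)
    n_verts = len(prob_maps[0])
    # MPM: assign each vertex to its most probable area.
    mpm = [max(range(n_areas), key=lambda a: prob_maps[a][v])
           for v in range(n_verts)]
    peaks, captured = [], []
    for a in range(n_areas):
        total = sum(prob_maps[a])
        inside = sum(prob_maps[a][v] for v in range(n_verts) if mpm[v] == a)
        peaks.append(max(prob_maps[a]))
        captured.append(inside / total if total else 0.0)
    return peaks, captured

# Toy example: a well-aligned (sharp) area versus a blurred one.
sharp   = [0.95, 0.90, 0.05, 0.02]
blurred = [0.05, 0.10, 0.55, 0.48]
peaks, captured = parcellation_quality([sharp, blurred])
```

On both measures the blurred map scores worse, which is the spirit of the paper's comparison: volume-based processing blurs areas across subjects and so lowers peak probabilities and captured fractions.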
Some context from their introduction:
The recently reported HCP-MMP1.0 multimodal cortical parcellation (see the graphic below) contains 180 distinct areas per hemisphere and was generated from hundreds of healthy young adult subjects from the Human Connectome Project (HCP) using data precisely aligned with the HCP’s surface-based neuroimaging analysis approach. Each cortical area is defined by multiple features, such as those representing architecture, function, connectivity, or topographic maps of visual space. This multimodal parcellation has generated widespread interest, with many investigators asking how to relate its cortical areas to data processed using the traditional neuroimaging approach. Because volume-registered analysis of the cortex in MNI space is still widely used, this has often translated into concrete requests, such as: “Please provide the HCP-MMP1.0 parcellation in standard MNI volume space.” Here, we investigate quantitatively the drawbacks of traditional volume-based analyses and document that much of the HCP-MMP1.0 parcellation cannot be faithfully represented when mapped to a traditional volume-based atlas.
Here is a graphic from the parcellation analysis

And here is Figure 1 and its explanation from the Coalson et al. paper.

The figure shows probabilistic maps of five exemplar areas spanning a range of peak probabilities. Each area is shown as localized by areal-feature–based surface registration (Lower, Center) and as localized by volume-based methods. One area (3b in Fig. 1) has a peak probability of 0.92 in the volume (orange, red), whereas the other four have volumetric peak probabilities in the range of 0.35–0.7 (blue, yellow). Notably, the peak probabilities of these five areas are all higher on the surface (Figure, Lower, Center) (range 0.90–1) than in the volume, indicating that MSMAll nonlinear surface-based registration provides substantially better functional alignment across subjects than does nonlinear volume-based registration.

Tuesday, July 10, 2018

Mindfulness training increases strength of right insula connections.

Sharp et al. (open access) suggest that:
The endeavor to understand how mindfulness works will likely be advanced by using recently developed tools and theory within the nascent field of brain connectomics. The connectomic framework conceives of the brain’s functional and structural architecture as a complex, dynamic network. This network view of brain function partly arose from the lack of support for highly selective, modular regions instantiating specialized functions. That is, large meta-analyses consisting mostly of univariate fMRI analyses disconfirm that, for example, the amygdala is exclusively selective for fear processing. Indeed, more fruitful mechanistic knowledge of how neural systems function may emerge from delineating how different regions communicate functionally across a range of environments, and by identifying the underlying structural connections that constrain such functional dynamics.
Their abstract, and a summary figure:
Although mindfulness meditation is known to provide a wealth of psychological benefits, the neural mechanisms involved in these effects remain to be well characterized. A central question is whether the observed benefits of mindfulness training derive from specific changes in the structural brain connectome that do not result from alternative forms of experimental intervention. Measures of whole-brain and node-level structural connectome changes induced by mindfulness training were compared with those induced by cognitive and physical fitness training within a large, multi-group intervention protocol (n = 86). Whole-brain analyses examined global graph-theoretical changes in structural network topology. A hypothesis-driven approach was taken to investigate connectivity changes within the insula, which was predicted here to mediate interoceptive awareness skills that have been shown to improve through mindfulness training. No global changes were observed in whole-brain network topology. However, node-level results confirmed a priori hypotheses, demonstrating significant increases in mean connection strength in right insula across all of its connections. Present findings suggest that mindfulness strengthens interoception, operationalized here as the mean insula connection strength within the overall connectome. This finding further elucidates the neural mechanisms of mindfulness meditation and motivates new perspectives about the unique benefits of mindfulness training compared to contemporary cognitive and physical fitness interventions.
Legend - Anatomical representation of tractography pathways between right insula and highly connected regions. Connections displayed (only corticocortical, here) comprised the top 80% connection strengths across all insula pathways. (A) Displays pre-training connections in right insula, which showed the greatest structural reorganization across mindfulness training. (B) Represents the same image as in (A) except at post-training.
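The node-level measure reported here, mean connection strength, is simple to compute from a weighted connectivity matrix. A minimal sketch (my own illustration, with an invented toy matrix standing in for a tractography-derived connectome):

```python
def node_strength(adj, node):
    """Mean connection strength of one node in a weighted, undirected
    network: the average weight of its connections to all other nodes.
    adj is a symmetric matrix with a zero diagonal."""
    weights = [adj[node][j] for j in range(len(adj)) if j != node]
    return sum(weights) / len(weights)

# Toy 4-node connectome; row/column 0 plays the role of the right insula.
adj = [
    [0.0, 0.8, 0.6, 0.4],
    [0.8, 0.0, 0.2, 0.1],
    [0.6, 0.2, 0.0, 0.3],
    [0.4, 0.1, 0.3, 0.0],
]
strength = node_strength(adj, 0)
```

The study's node-level finding amounts to this quantity, computed for the right insula, increasing from pre- to post-training in the mindfulness group.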

Monday, July 09, 2018

Mortality rates level off at extreme age

Interesting work from Barbi et al. showing that human death rates increase exponentially up to about age 80, then decelerate, and plateau after age 105. At that point, the odds of someone dying from one birthday to the next are roughly 50:50. This implies that there might be no natural limit to how long humans can live, contrary to the view of most demographers and biologists:
Theories about biological limits to life span and evolutionary shaping of human longevity depend on facts about mortality at extreme ages, but these facts have remained a matter of debate. Do hazard curves typically level out into high plateaus eventually, as seen in other species, or do exponential increases persist? In this study, we estimated hazard rates from data on all inhabitants of Italy aged 105 and older between 2009 and 2015 (born 1896–1910), a total of 3836 documented cases. We observed level hazard curves, which were essentially constant beyond age 105. Our estimates are free from artifacts of aggregation that limited earlier studies and provide the best evidence to date for the existence of extreme-age mortality plateaus in humans.
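The contrast between an exponentially rising Gompertz hazard and the plateau Barbi et al. describe can be sketched in a few lines. This is my own illustration; the parameters are arbitrary, and the cap merely stands in for the roughly even annual odds of death reported after age 105.

```python
import math

def gompertz_hazard(age, a=0.0001, b=0.1):
    """Gompertz mortality law: the hazard (instantaneous death rate)
    rises exponentially with age."""
    return a * math.exp(b * age)

def plateau_hazard(age, a=0.0001, b=0.1, cap=0.5):
    """Same exponential rise, but leveling off at a plateau --
    qualitatively like the post-105 leveling Barbi et al. report."""
    return min(gompertz_hazard(age, a, b), cap)
```

With these illustrative parameters the two curves coincide through midlife and diverge only at extreme ages, where the capped curve flattens instead of continuing its exponential climb.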

Friday, July 06, 2018

Brain imaging predicts who will be a good musical performer.

Fascinating observations from Zatorre's group:

In sophisticated auditory–motor learning such as musical instrument learning, little is understood about how brain plasticity develops over time and how the related individual variability is reflected in the neural architecture. In a longitudinal fMRI training study on cello learning, we reveal the integrative function of the dorsal cortical stream in auditory–motor information processing, which comes online quickly during learning. Additionally, our data show that better performers optimize the recruitment of regions involved in auditory encoding and motor control and reveal the critical role of the pre-supplementary motor area and its interaction with auditory areas as predictors of musical proficiency. The present study provides unprecedented understanding of the neural substrates of individual learning variability and therefore has implications for pedagogy and rehabilitation.
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio–motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio–motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory–motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio–motor learning.

Thursday, July 05, 2018

Why are religious people trusted more?

A prevailing view is that religious behavior facilitates trust, primarily toward coreligionists, and particularly when it is diagnostic of belief in moralizing deities. Moon et al. suggest a further reason that religious people are viewed as more trustworthy than non-religious people: they follow 'slow life-history' strategies that tend to be sexually restricted, invested in family, nonimpulsive, and nonaggressive, all traits associated with cooperativeness and prosociality. They find that direct information about life-history reproductive strategy (i.e., a subject's “dating preferences”) tends to override the effects of religious information. Their abstract:
Religious people are more trusted than nonreligious people. Although most theorists attribute these perceptions to the beliefs of religious targets, religious individuals also differ in behavioral ways that might cue trust. We examined whether perceivers might trust religious targets more because they heuristically associate religion with slow life-history strategies. In three experiments, we found that religious targets are viewed as slow life-history strategists and that these findings are not the result of a universally positive halo effect; that the effect of target religion on trust is significantly mediated by the target’s life-history traits (i.e., perceived reproductive strategy); and that when perceivers have direct information about a target’s reproductive strategy, their ratings of trust are driven primarily by his or her reproductive strategy, rather than religion. These effects operate over and above targets’ belief in moralizing gods and offer a novel theoretical perspective on religion and trust.

Wednesday, July 04, 2018

Seven creepy Facebook patents

I'll follow yesterday's post with yet another post on creepy high tech patents, this time from Facebook, showing their ongoing intention to invade our privacy as much as possible. From the article by Chinoy:

Reading your relationships
One patent application discusses predicting whether you’re in a romantic relationship using information such as how many times you visit another user’s page, the number of people in your profile picture and the percentage of your friends of a different gender.
Classifying your personality
Another proposes using your posts and messages to infer personality traits. It describes judging your degree of extroversion, openness or emotional stability, then using those characteristics to select which news stories or ads to display.
Predicting your future
This patent application describes using your posts and messages, in addition to your credit card transactions and location, to predict when a major life event, such as a birth, death or graduation, is likely to occur.
Identifying your camera
Another considers analyzing pictures to create a unique camera “signature” using faulty pixels or lens scratches. That signature could be used to figure out that you know someone who uploads pictures taken on your device, even if you weren’t previously connected. Or it might be used to guess the “affinity” between you and a friend based on how frequently you use the same camera.
Listening to your environment
This patent application explores using your phone microphone to identify the television shows you watched and whether ads were muted. It also proposes using the electrical interference pattern created by your television power cable to guess which show is playing.
Tracking your routine
Another patent application discusses tracking your weekly routine and sending notifications to other users of deviations from the routine. In addition, it describes using your phone’s location in the middle of the night to establish where you live.
Inferring your habits
This patent proposes correlating the location of your phone to locations of your friends’ phones to deduce whom you socialize with most often. It also proposes monitoring when your phone is stationary to track how many hours you sleep.
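Facebook's actual camera-signature method isn't public, but the underlying idea (matching photos to a camera by its fixed sensor defects) is a well-known forensics technique, and a toy version is easy to sketch. Everything below is my own illustration: the function names, the synthetic "cameras," and all parameters are invented, and real systems use far more sophisticated denoising.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def residual(img):
    """High-frequency noise residual: the image minus a smoothed copy of itself."""
    return img - uniform_filter(img, size=3)

def fingerprint(images):
    """Average the residuals of several photos from one camera; scene content
    averages out, leaving the sensor's fixed noise pattern."""
    return np.mean([residual(im) for im in images], axis=0)

def match_score(img, fp):
    """Normalized correlation between a photo's residual and a fingerprint."""
    r, f = residual(img), fp
    r, f = r - r.mean(), f - f.mean()
    return float(np.sum(r * f) / (np.linalg.norm(r) * np.linalg.norm(f)))

# Two synthetic "cameras", each with its own fixed pixel-defect pattern.
rng = np.random.default_rng(0)
shape = (64, 64)
pattern_a = rng.normal(0, 1.0, shape)
pattern_b = rng.normal(0, 1.0, shape)

def shoot(pattern):
    """A random smooth 'scene' plus the camera's defect pattern plus shot noise."""
    scene = uniform_filter(rng.normal(0, 5.0, shape), size=8)
    return scene + pattern + rng.normal(0, 0.5, shape)

fp_a = fingerprint([shoot(pattern_a) for _ in range(10)])
fp_b = fingerprint([shoot(pattern_b) for _ in range(10)])
new_photo = shoot(pattern_a)  # matches camera A's fingerprint, not B's
```

A new photo from camera A correlates strongly with A's fingerprint and near zero with B's, which is exactly the signal that would let a platform link otherwise unconnected accounts that upload pictures from the same device.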

Tuesday, July 03, 2018

A patent for emotional robots?

I recently received an interesting email from Deepak Gupta, who is with a patent research services company and who, having seen my post on robots and emotion, thought to point me to a description of a patent application filed by Samsung. It seems so straightforward that I would have thought such a patent would have been proposed years ago. It is an attempt to get past the "uncanny valley" issue, the feelings of eeriness and revulsion people can have when observing or interacting with robots. (It still seems a bit scary to me.) The electronic robot includes a motor mounted inside its head. The head part of the robot consists of a display unit to display emotional expressions and a motor control unit that can rotate the robot's head clockwise or counter-clockwise on its axis.
The electronic robot decides which emotional expression to be displayed on a display panel by sensing information from its various sensors, such as a camera sensor, a pressure sensor, a geomagnetic sensor, and a microphone to sense the motion of a user. Accordingly, the electronic robot tracks the major feature points of the face such as eyes, nose, mouth, and eyebrows from user images captured by the camera sensor and recognizes user emotional information, based on basic facial expressions conveying happiness, surprise, anger, sadness, and sorrow. Once the information is received from the sensors, the data is then analyzed by the processor that sends an appropriate voltage signal to the display unit and the motor control unit in order to express relevant emotion. The emotional states expressed by the electronic robot include anger, disgust, sadness, interest, happiness, impassiveness, surprise, agreement (i.e., “yes”), and rejection (i.e., “no”).
To express emotional states, the electronic robot stores motion information of the head predefined for each emotional state in a storage unit. The head motion information includes head motion type, angle, speed (or rotational force), and direction.
The technology disclosed in the patent document allows robots to express emotions. Therefore, these robots can communicate or interact with humans more effectively. These robots can be used in the applications that require interaction with humans, for instance, communicating with patients in a hospital in the absence of actual staff, or even interacting with pets such as dogs or cats that are left alone by their owners during their working hours.

Monday, July 02, 2018

An optimistic outlook creates a rosy past.

From Devitt and Schacter in the Harvard Psychology department, a study recruiting the usual gaggle of psychology undergraduate students as subjects:
People frequently engage in future thinking in everyday life, but it is unknown how simulating an event in advance changes how that event is remembered once it takes place. To initiate study of this important topic, we conducted two experiments in which participants simulated emotional events before learning the hypothetical outcome of each event via narratives. Memory was assessed for emotional details contained in those narratives. Positive simulation resulted in a liberal response bias for positive information and a conservative bias for negative information. Events preceded by positive simulation were considered more favorably in retrospect. In contrast, negative simulation had no impact on subsequent memory. Results were similar across an immediate and delayed memory test and for past and future simulation. These results provide novel insights into the cognitive consequences of episodic future simulation and build on the optimism-bias literature by showing that adopting a favorable outlook results in a rosy memory.

Friday, June 29, 2018

Stress and autoimmune disease

I am now in my 77th year, and not all that pleased that my inflammatory, immune, and stress systems are becoming more excitable, more likely to be activated by small environmental perturbations that would have gone unnoticed 10 or 20 years ago. Increases in sympathetic nervous system activity and inflammatory processes with aging have been documented in many studies, and there is a literature linking stress with immune system dysfunction at all ages. This post points to yet another study of the linkage of stress with immune system activity. Song et al. report a massive study of over one million people in a Swedish cohort over a thirty-year period that shows a strong association between stress disorders and increased risk of developing autoimmune diseases such as arthritis and Crohn's disease. This affirms the strong link between psychological stress and physical inflammatory conditions. Here is a truncated and edited statement of results from the JAMA paper:
The median age at diagnosis of stress-related disorders was 41 years, and 40% of the exposed (i.e. suffering from stress-related disorder) patients were male. During a mean follow-up of 10 years, the incidence rate of autoimmune diseases was 9.1, 6.0, and 6.5 per 1000 person-years among the exposed, matched unexposed, and sibling cohorts, respectively. Compared with the unexposed population, patients with stress-related disorders were at increased risk of autoimmune disease... Persistent use of selective serotonin reuptake inhibitors during the first year of posttraumatic stress disorder diagnosis was associated with attenuated relative risk of autoimmune disease.

Thursday, June 28, 2018

Social media and democracy

This post passes on links to articles on social media discussed recently in the Univ. of Wisconsin Chaos and Complex Systems seminar that I attend. (Two previous MindBlog posts have noted Jaron Lanier's critique of social media, "Ten arguments for deleting your social media accounts right now," which coins an acronym, BUMMER, for what Lanier considers one of its core evils. BUMMER is “Behaviors of Users Modified, and Made Into an Empire for Rent.”)  I want to point now to Franklin Foer's review of Lanier's book (Foer claims he actually deleted his Facebook account after reading the book. I haven't managed to do that.)

As an antidote to the current pessimism about the social effects of social media, Ethan Zuckerman offers an essay "Six or seven things social media can do for democracy." He takes his title from an article by Schudson, “Six or Seven Things News Can Do for Democracy,” that anchored his book, "Why Democracies Need an Unlovable Press." Schudson's list of functions for journalism or news is to inform, investigate, analyze, be a public forum, be a tool for social empathy, be a force for mobilization, and promote representative democracy.

Zuckerman makes a similar list for the functions of social media, which I condense here:
1. Social media can inform us (Arab Spring, Ferguson, Missouri), but can also misinform us (fake news, the pizza-parlor prostitution ring).
2. Social media can amplify important voices and issues (which might either strengthen or weaken democracy).
3. Social media can be a tool for connection and solidarity (of either good or evil groups, pro- or anti-democratic).
4. Social media can be a space for mobilization (Tahrir Square, Taksim Gezi Park, Charlottesville; pro- or anti-democratic).
5. Social media can be a space for deliberation and debate (yet also disappointing, mean, petty, superficial, uncivil). There needs to be more effort to build civil platforms.
6. Social media can show us a diversity of views and perspectives. (However, bubbles, homophily, racism, and ethnonationalism can inhibit this.) Conscious intervention is needed to change this.
7. Social media can be a model for democratically governed spaces. Social platforms should be based on the consent of their communities. (Example: Reddit, with ~1,000 volunteer moderators, rules, and policing of rule breakers.)

One of Zuckerman's concluding paragraphs:
Finally, it’s also fair to note that there’s a dark side to every democratic function I’ve listed. The tools that allow marginalized people to report their news and influence media are the same ones that allow fake news to be injected into the media ecosystem. Amplification is a technique used by everyone from Black Lives Matter to neo-Nazis, as is mobilization, and the spaces for solidarity that allow Jen Brea to manage her disease allow “incels” to push each other towards violence. While I feel comfortable advocating for respectful dialog and diverse points of view, someone will see my advocacy as an attempt to push politically correct multiculturalism down their throat, or to silence the exclusive truth of their perspectives through dialog. The bad news is that making social media work better for democracy likely means making it work better for the Nazis as well. The good news is that there’s a lot more participatory democrats than there are Nazis.

Wednesday, June 27, 2018

The Philosophy we need: Personalism

To his credit, David Brooks keeps searching for moral visions that might be an antidote to our current social and political malaise. A recent Op-Ed piece is on Personalism, the:
...philosophic tendency built on the infinite uniqueness and depth of each person. Over the years people like Walt Whitman, Martin Luther King, William James, Peter Maurin and Wojtyla (who went on to become Pope John Paul II) have called themselves personalists, but the movement is still something of a philosophic nub.
...our culture does a pretty good job of ignoring the uniqueness and depth of each person. Pollsters see in terms of broad demographic groups. Big data counts people as if it were counting apples. At the extreme, evolutionary psychology reduces people to biological drives, capitalism reduces people to economic self-interest, modern Marxism to their class position and multiculturalism to their racial one. Consumerism treats people as mere selves — as shallow creatures concerned merely with the experience of pleasure and the acquisition of stuff.
..personalism is a middle way between authoritarian collectivism and radical individualism. The former subsumes the individual within the collective. The latter uses the group to serve the interests of the self.
Personalism demands that we change the way we structure our institutions. A company that treats people as units to simply maximize shareholder return is showing contempt for its own workers. Schools that treat students as brains on a stick are not preparing them to lead whole lives.
The big point is that today’s social fragmentation didn’t spring from shallow roots. It sprang from worldviews that amputated people from their own depths and divided them into simplistic, flattened identities. That has to change. As Charles PĆ©guy said, “The revolution is moral or not at all.”

Tuesday, June 26, 2018

Mindfulness can lower your motivation.

Vohs and Hafenbrack have written a New York Times opinion piece to advertise their forthcoming paper on the effects of meditation. Given my personal experience with 'mindfulness' I think they are right on track. Some clips:
The practical payoff of mindfulness is backed by dozens of studies linking it to job satisfaction, rational thinking and emotional resilience...But on the face of it, mindfulness might seem counterproductive in a workplace setting. A central technique of mindfulness meditation, after all, is to accept things as they are. Yet companies want their employees to be motivated. And the very notion of motivation — striving to obtain a more desirable future — implies some degree of discontentment with the present, which seems at odds with a psychological exercise that instills equanimity and a sense of calm.
To test this hunch, we recently conducted five studies, involving hundreds of people, to see whether there was a tension between mindfulness and motivation...Among those who had meditated, motivation levels were lower on average. Those people didn’t feel as much like working on the assignments, nor did they want to spend as much time or effort to complete them. Meditation was correlated with reduced thoughts about the future and greater feelings of calm and serenity — states seemingly not conducive to wanting to tackle a work project...previous studies have found that meditation increases mental focus, suggesting that those in our studies who performed the mindfulness exercise should have performed better on the tasks. Their lower levels of motivation, however, seemed to cancel out that benefit...Mindfulness is perhaps akin to a mental nap. Napping, too, is associated with feeling calm, refreshed and less harried. Then again, who wakes up from a nap eager to organize some files?
Mindfulness might be unhelpful for dealing with difficult assignments at work, but it may be exactly what is called for in other contexts. There is no denying that mindfulness can be beneficial, bringing about calm and acceptance. Once you’ve reached a peak level of acceptance, however, you’re not going to be motivated to work harder.

Monday, June 25, 2018

Deep origins of consciousness

I want to recommend an engaging book by Peter Godfrey-Smith that I have read recently, "Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness." Below are a few clips pointing to some of his core points:
Sentience is brought into being somehow from the evolution of sensing and acting; it involves being a living system with a point of view on the world around it. If we take that approach, though, a perplexity we run into immediately is the fact that those capacities are so widespread— they are found far outside the organisms that are usually thought to have experience of some kind. Even bacteria sense the world and act... A case can be made that responses to stimuli, and the controlled flow of chemicals across boundaries, are an elementary part of life itself. Unless we conclude that all living things have a modicum of subjective experience— a view I don’t regard as insane, but surely one that would need a lot of defense— there must be something about the way animals deal with the world that makes a crucial difference.
The senses can do their basic work, and actions can be produced, with all this happening “in silence” as far as the organism’s experience is concerned. Then, at some stage in evolution, extra capacities appear that do give rise to subjective experience: the sensory streams are brought together, an “internal model” of the world arises, and there’s a recognition of time and self.
What we’ve learned over the last thirty years or so is that there’s a particular style of processing— one that we use to deal especially with time, sequences, and novelty— that brings with it conscious awareness, while a lot of other quite complex activities do not.
Baars suggested that we are conscious of the information that has been brought into a centralized “workspace” in the brain. Dehaene adopted and developed this view. A related family of theories claims that we are conscious of whatever information is being fed into working memory. These views don’t hold that the lights went on in a sudden flash, but they do hold that the “waking up” came late in the history of life and was due to features that are clearly seen only in animals like us.
...certainly there’s an alternative to consider. I’ll call this the transformation view. It holds that a form of subjective experience preceded late-arising things like working memory, workspaces, the integration of the senses, and so on. These complexities, when they came along, transformed what it feels like to be an animal. Experience has been reshaped by these features, but it was not brought into being by them. The best argument I can offer for this alternative view is based on the role in our lives of what seem like old forms of subjective experience that appear as intrusions into more organized and complex mental processes. Consider the intrusion of sudden pain, or of what the physiologist Derek Denton calls the primordial emotions— feelings which register important bodily states and deficiencies, such as thirst or the feeling of not having enough air. As Denton says, these feelings have an “imperious” role when they are present: they press themselves into experience and can’t easily be ignored. Do you think that those things (pain, shortness of breath, etc.) only feel like something because of sophisticated cognitive processing in mammals that has arisen late in evolution? I doubt it. Instead, it seems plausible that an animal might feel pain or thirst without having an “inner model” of the world, or sophisticated forms of memory.
Subjective experience does not arise from the mere running of the system, but from the modulation of its state, from registering things that matter. These need not be external events; they might arise internally. But they are tracked because they matter and require a response. Sentience has some point to it. It’s not just a bathing in living activity.
By the Cambrian, the vertebrates were already on their own path (or their own collection of paths), while arthropods and mollusks were on others. Suppose it’s right that crabs, octopuses, and cats all have subjective experience of some kind. Then there were at least three separate origins for this trait, and perhaps many more than three. Later, as the machinery described by Dehaene, Baars, Milner, and Goodale comes on line, an integrated perspective on the world arises and a more definite sense of self. We then reach something closer to consciousness. I don’t see that as a single definite step. Instead, I see “consciousness” as a mixed-up and overused but useful term for forms of subjective experience that are unified and coherent in various ways. Here, too, it is likely that experience of this kind arose several times on different evolutionary paths: from white noise, through old and simple forms of experience, to consciousness.

Friday, June 22, 2018

Brain EEG activity can predict who antidepressants will work on

Pizzagalli et al. have used electroencephalography (EEG) to look at the activation level of the rostral anterior cingulate cortex (ACC) in depressed patients. In an experiment involving 300 patients at four different sites (with randomized controls) they find that higher activity in this area correlates with a higher probability that treatment with the SSRI (selective serotonin re-uptake inhibitor) sertraline hydrochloride will be effective after an eight-week course of treatment. This approach predicts drug efficacy more reliably than current predictions based on other clinical and demographic characteristics that also correlate with treatment outcomes.

Thursday, June 21, 2018

Old Italian violins imitate the human voice.

A fascinating study from Tai et al. (open source):

Amati and Stradivari violins are highly appreciated by musicians and collectors, but the objective understanding of their acoustic qualities is still lacking. By applying speech analysis techniques, we found early Italian violins to emulate the vocal tract resonances of male singers, comparable to basses or baritones. Stradivari pushed these resonance peaks higher to resemble the shorter vocal tract lengths of tenors or altos. Stradivari violins also exhibit vowel qualities that correspond to lower tongue height and backness. These properties may explain the characteristic brilliance of Stradivari violins. The ideal for violin tone in the Baroque era was to imitate the human voice, and we found that Cremonese violins are capable of producing the formant features of human singers.
The shape and design of the modern violin are largely influenced by two makers from Cremona, Italy: The instrument was invented by Andrea Amati and then improved by Antonio Stradivari. Although the construction methods of Amati and Stradivari have been carefully examined, the underlying acoustic qualities which contribute to their popularity are little understood. According to Geminiani, a Baroque violinist, the ideal violin tone should “rival the most perfect human voice.” To investigate whether Amati and Stradivari violins produce voice-like features, we recorded the scales of 15 antique Italian violins as well as male and female singers. The frequency response curves are similar between the Andrea Amati violin and human singers, up to ∼4.2 kHz. By linear predictive coding analyses, the first two formants of the Amati exhibit vowel-like qualities (F1/F2 = 503/1,583 Hz), mapping to the central region on the vowel diagram. Its third and fourth formants (F3/F4 = 2,602/3,731 Hz) resemble those produced by male singers. Using F1 to F4 values to estimate the corresponding vocal tract length, we observed that antique Italian violins generally resemble basses/baritones, but Stradivari violins are closer to tenors/altos. Furthermore, the vowel qualities of Stradivari violins show reduced backness and height. The unique formant properties displayed by Stradivari violins may represent the acoustic correlate of their distinctive brilliance perceived by musicians. Our data demonstrate that the pioneering designs of Cremonese violins exhibit voice-like qualities in their acoustic output.
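The paper's core tool, linear predictive coding, fits an all-pole model to a sound and reads formants off the model's resonances. The sketch below is not the authors' pipeline; it is a minimal, self-contained illustration of the technique on a synthetic one-resonance signal, with function names and parameters of my own choosing.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz

def lpc_coefficients(x, order):
    """Fit an all-pole (LPC) model to x via the Yule-Walker equations."""
    n = len(x)
    # autocorrelation at lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    # symmetric Toeplitz system R a = -r[1:]
    a = solve_toeplitz(r[:order], -r[1:])
    return np.concatenate(([1.0], a))  # A(z) = 1 + a1 z^-1 + a2 z^-2 + ...

def formant_frequencies(a, fs):
    """Resonance (formant) frequencies in Hz from the LPC polynomial roots."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 1e-6]  # keep one pole per conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# Demo: white noise through a single resonator at 800 Hz (fs = 8 kHz);
# a 2nd-order LPC fit should recover that resonance.
rng = np.random.default_rng(0)
fs, f0, pole_r = 8000, 800.0, 0.97
theta = 2 * np.pi * f0 / fs
den = [1.0, -2 * pole_r * np.cos(theta), pole_r ** 2]  # resonator poles
x = lfilter([1.0], den, rng.standard_normal(20000))
est = formant_frequencies(lpc_coefficients(x, order=2), fs)[0]
print(round(est))  # close to 800
```

For a real recording one would use a much higher model order (a common rule of thumb is roughly 2 plus the sampling rate in kHz) and frame-by-frame analysis, but the pole-angle-to-frequency mapping is the same.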

Wednesday, June 20, 2018

Families and politics - curtailed conversations

Chen and Rohla find that Thanksgiving dinners in which the hosts and guests lived in oppositely voting precincts were up to 50 minutes shorter than same-party-precinct dinners. It seems likely that avoiding contentious subjects led guests simply to talk less.
Research on growing American political polarization and antipathy primarily studies public institutions and political processes, ignoring private effects, including strained family ties. Using anonymized smartphone-location data and precinct-level voting, we show that Thanksgiving dinners attended by residents from opposing-party precincts were 30 to 50 minutes shorter than same-party dinners. This decline from a mean of 257 minutes survives extensive spatial and demographic controls. Reductions in the duration of Thanksgiving dinner in 2016 tripled for travelers from media markets with heavy political advertising—an effect not observed in 2015—implying a relationship to election-related behavior. Effects appear asymmetric: Although fewer Democratic-precinct residents traveled in 2016 than in 2015, Republican-precinct residents shortened their Thanksgiving dinners by more minutes in response to political differences. Nationwide, 34 million hours of cross-partisan Thanksgiving dinner discourse were lost in 2016 owing to partisan effects.

Tuesday, June 19, 2018

Tipping points in changing social conventions.

How can we explain the rapid rise of the Nazis in 1930s Germany, or the rapid acceptance of gay marriage in the United States? Centola et al. approach this question with both modeling and experiments illustrating how a motivated minority can change a social convention. They study a system of coordination in which a minority group of actors attempts to disrupt an established equilibrium behavior, adopting the canonical approach of using coordination on a naming convention as a general model for conventional behavior. Their experiments are designed to test a broad range of theoretical predictions derived from the existing literature on critical-mass dynamics in social conventions.
Here is the Science Magazine summary followed by the abstract:
Once a population has converged on a consensus, how can a group with a minority viewpoint overturn it? Theoretical models have emphasized tipping points, whereby a sufficiently large minority can change the societal norm. Centola et al. devised a system to study this in controlled experiments. Groups of people who had achieved a consensus about the name of a person shown in a picture were individually exposed to a confederate who promoted a different name. The only incentive was to coordinate. When the number of confederates was roughly 25% of the group, the opinion of the majority could be tipped to that of the minority.
Theoretical models of critical mass have shown how minority groups can initiate social change dynamics in the emergence of new social conventions. Here, we study an artificial system of social conventions in which human subjects interact to establish a new coordination equilibrium. The findings provide direct empirical demonstration of the existence of a tipping point in the dynamics of changing social conventions. When minority groups reached the critical mass—that is, the critical group size for initiating social change—they were consistently able to overturn the established behavior. The size of the required critical mass is expected to vary based on theoretically identifiable features of a social setting. Our results show that the theoretically predicted dynamics of critical mass do in fact emerge as expected within an empirical system of social coordination.
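The committed-minority dynamic can be illustrated with a toy naming-game simulation in the spirit of the models this literature builds on, though far simpler than Centola et al.'s actual experiment. All parameters here are illustrative, and the exact tipping threshold depends on model details (theory for this simple variant puts it near 10%; the experiments found roughly 25%).

```python
import random

def naming_game(n_agents=50, committed_frac=0.30, steps=100_000, seed=1):
    """Binary naming game with a committed minority. Non-committed agents
    start on convention 'A'; committed agents only ever use 'B' and never
    change. Returns the fraction of non-committed agents who end up on 'B'."""
    rng = random.Random(seed)
    committed = set(range(int(n_agents * committed_frac)))
    # each agent's memory is the set of names it currently accepts
    memory = [{'B'} if i in committed else {'A'} for i in range(n_agents)]
    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        name = rng.choice(sorted(memory[speaker]))
        if name in memory[listener]:
            # successful communication: both (if persuadable) collapse to it
            if speaker not in committed:
                memory[speaker] = {name}
            if listener not in committed:
                memory[listener] = {name}
        elif listener not in committed:
            memory[listener].add(name)  # failure: listener learns the name
    free = [i for i in range(n_agents) if i not in committed]
    return sum(memory[i] == {'B'} for i in free) / len(free)

share = naming_game()
print(share)  # with a ~30% committed minority, most non-committed agents flip
```

Running this with a committed fraction well above the critical mass reliably overturns the established 'A' convention, mirroring the paper's central result; below the threshold, the minority's name mostly fails to spread.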