Monday, July 16, 2018

What is consciousness, and could machines have it?

I want to point to a lucid article by Dehaene, Lau, and Kouider that gives the clearest review I have seen of our current state of knowledge on the nature of human consciousness, which we need to define if we wish to consider the question of machines being conscious like us. Here is the abstract, followed by a few edited clips that attempt to communicate the core points (motivated readers can obtain the whole text by emailing me):
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
C1: Global availability
This corresponds to the transitive meaning of consciousness (as in “The driver is conscious of the light”)... We can recall it, act upon it, and speak about it. This sense is synonymous with “having the information in mind”; among the vast repertoire of thoughts that can become conscious at a given time, only that which is globally available constitutes the content of C1 consciousness.
C2: Self-monitoring
Another meaning of consciousness is reflexive. It refers to a self-referential relationship in which the cognitive system is able to monitor its own processing and obtain information about itself... This sense of consciousness corresponds to what is commonly called introspection, or what psychologists call “meta-cognition”—the ability to conceive and make use of internal representations of one’s own knowledge and abilities.
C0: Unconscious processing: Where most of our intelligence lies
...many computations involve neither C1 nor C2 and therefore are properly called “unconscious” ...Cognitive neuroscience confirms that complex computations such as face or speech recognition, chess-game evaluation, sentence parsing, and meaning extraction occur unconsciously in the human brain—under conditions that yield neither global reportability nor self-monitoring. The brain appears to operate, in part, as a juxtaposition of specialized processors or “modules” that operate nonconsciously and, we argue, correspond tightly to the operation of current feedforward deep-learning networks.
The phenomenon of priming illustrates the remarkable depth of unconscious processing...Subliminal digits, words, faces, or objects can be invariantly recognized and influence motor, semantic, and decision levels of processing. Neuroimaging methods reveal that the vast majority of brain areas can be activated nonconsciously...Subliminal priming generalizes across visual-auditory modalities...Even the semantic meaning of sensory input can be processed without awareness by the human brain.
...subliminal primes can influence prefrontal mechanisms of cognitive control involved in the selection of a task...Neural mechanisms of decision-making involve accumulating sensory evidence that affects the probability of the various choices until a threshold is attained. This accumulation of probabilistic knowledge continues to happen even with subliminal stimuli. Bayesian inference and evidence accumulation, which are cornerstone computations for AI, are basic unconscious mechanisms for humans.
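The evidence-accumulation mechanism described in this clip, in which noisy sensory evidence is summed until a decision threshold is attained, is commonly modeled as a drift-diffusion process. Here is a minimal sketch of that idea; the function name, parameters, and values are my own illustration, not taken from the paper:

```python
import random

def accumulate_evidence(drift=0.1, noise=1.0, threshold=10.0,
                        max_steps=10_000, seed=0):
    """Accumulate noisy evidence until one of two decision boundaries
    is crossed. Returns (choice, n_steps): choice is +1 or -1 for the
    boundary reached first, or 0 if no decision within the budget."""
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # each sample nudges the total toward the correct answer (drift)
        # but is corrupted by noise, as with subliminal stimuli
        evidence += drift + rng.gauss(0, noise)
        if evidence >= threshold:
            return +1, step
        if evidence <= -threshold:
            return -1, step
    return 0, max_steps

choice, steps = accumulate_evidence()
```

The key point of the clip maps directly onto the sketch: the accumulation loop runs the same way whether or not the stimulus ever becomes reportable.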
Reinforcement learning algorithms, which capture how humans and animals shape their future actions on the basis of history of past rewards, have excelled in attaining supra-human AI performance in several applications, such as playing Go. Remarkably, in humans, such learning appears to proceed even when the cues, reward, or motivation signals are presented below the consciousness threshold.
What additional computations are required for conscious processing?

C1: Global availability of relevant information
The need for integration and coordination. Integrating all of the available evidence to converge toward a single decision is a computational requirement that, we contend, must be faced by any animal or autonomous AI system and corresponds to our first functional definition of consciousness: global availability (C1)...Such decision-making requires a sophisticated architecture for (i) efficiently pooling over all available sources of information, including multisensory and memory cues; (ii) considering the available options and selecting the best one on the basis of this large information pool; (iii) sticking to this choice over time; and (iv) coordinating all internal and external processes toward the achievement of that goal.
Consciousness as access to an internal global workspace. We hypothesize that...On top of a deep hierarchy of specialized modules, a “global neuronal workspace,” with limited capacity, evolved to select a piece of information, hold it over time, and share it across modules. We call “conscious” whichever representation, at a given time, wins the competition for access to this mental arena and gets selected for global sharing and decision-making.
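The global-workspace hypothesis above can be caricatured in a few lines: specialized modules compete, a single representation wins, and the winner is broadcast back to every module. The module names and salience values below are invented purely for illustration:

```python
def global_workspace_step(module_outputs):
    """Select the most salient representation and broadcast it.

    module_outputs: dict mapping module name -> (salience, content).
    Returns the winning module's name and a dict giving every module
    the winning content, i.e. the globally shared 'conscious' item."""
    winner = max(module_outputs, key=lambda m: module_outputs[m][0])
    content = module_outputs[winner][1]
    broadcast = {m: content for m in module_outputs}  # global sharing
    return winner, broadcast

outputs = {
    "vision": (0.9, "red light"),
    "audition": (0.4, "horn"),
    "memory": (0.2, "route home"),
}
winner, broadcast = global_workspace_step(outputs)
# winner == "vision"; every module now holds "red light"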
Relation between consciousness and attention. William James described attention as “the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought”. This definition is close to what we mean by C1: the selection of a single piece of information for entry into the global workspace. There is, however, a clear-cut distinction between this final step, which corresponds to conscious access, and the previous stages of attentional selection, which can operate unconsciously...What we call attention is a hierarchical system of sieves that operate unconsciously. Such unconscious systems compute with probability distributions, but only a single sample, drawn from this probabilistic distribution, becomes conscious at a given time. We may become aware of several alternative interpretations, but only by sampling their unconscious distributions over time.
Evidence for all-or-none selection in a capacity-limited system. The primate brain comprises a conscious bottleneck and can only consciously access a single item at a time. For instance, rivaling pictures or ambiguous words are perceived in an all-or-none manner; at any given time, we subjectively perceive only a single interpretation out of many possible ones [even though the others continue to be processed unconsciously]...Brain imaging in humans and neuronal recordings in monkeys indicate that the conscious bottleneck is implemented by a network of neurons that is distributed through the cortex, but with a stronger emphasis on high-level associative areas. ... Single-cell recordings indicate that each specific conscious percept, such as a person’s face, is encoded by the all-or-none firing of a subset of neurons in high-level temporal and prefrontal cortices, whereas others remain silent...the stable, reproducible representation of high-quality information by a distributed activity pattern in higher cortical areas is a feature of conscious processing. Such transient “meta-stability” seems to be necessary for the nervous system to integrate information from a variety of modules and then broadcast it back to them, achieving flexible cross-module routing.
C1 consciousness in human and nonhuman animals. C1 consciousness is an elementary property that is present in human infants as well as in animals. Nonhuman primates exhibit similar visual illusions, attentional blink, and central capacity limits as human subjects.
C2: Self-monitoring
Whereas C1 consciousness reflects the capacity to access external information, consciousness in the second sense (C2) is characterized by the ability to reflexively represent oneself (“metacognition”).
A probabilistic sense of confidence. Confidence can be assessed nonverbally, either retrospectively, by measuring whether humans persist in their initial choice, or prospectively, by allowing them to opt out from a task without even attempting it. Both measures have been used in nonhuman animals to show that they too possess metacognitive abilities. By contrast, most current neural networks lack them: Although they can learn, they generally lack meta-knowledge of the reliability and limits of what has been learned...Magnetic resonance imaging (MRI) studies in humans and physiological recordings in primates and even in rats have specifically linked such confidence processing to the prefrontal cortex. Inactivation of the prefrontal cortex can induce a specific deficit in second-order (metacognitive) judgements while sparing performance on the first-order task. Thus, circuits in the prefrontal cortex may have evolved to monitor the performance of other brain processes.
Error detection: Reflecting on one’s own mistakes ...just after responding, we sometimes realize that we made an error and change our mind. Error detection is reflected by two components of electroencephalography (EEG) activity: the error-related negativity (ERN) and the positivity upon error (Pe), which emerge in cingulate and medial prefrontal cortex just after a wrong response but before any feedback is received...A possibility compatible with the remarkable speed of error detection is that two parallel circuits, a low-level sensory-motor circuit and a higher-level intention circuit, operate on the same sensory data and signal an error whenever their conclusions diverge. Self-monitoring is such a basic ability that it is already present during infancy. The ERN, indicating error monitoring, is observed when 1-year-old infants make a wrong choice in a perceptual decision task.
Meta-memory The term “meta-memory” was coined to capture the fact that humans report feelings of knowing, confidence, and doubts on their memories. ...Meta-memory is associated with prefrontal structures whose pharmacological inactivation leads to a metacognitive impairment while sparing memory performance itself. Metamemory is crucial to human learning and education by allowing learners to develop strategies such as increasing the amount of study or adapting the time allocated to memory encoding and rehearsal.
Reality monitoring. In addition to monitoring the quality of sensory and memory representations, the human brain must also distinguish self-generated versus externally driven representations - we can perceive things, but we also conjure them from imagination or memory...Neuroimaging studies have linked this kind of reality monitoring to the anterior prefrontal cortex.
Dissociations between C1 and C2
According to our analysis, C1 and C2 are largely orthogonal and complementary dimensions of what we call consciousness. On one side of this double dissociation, self-monitoring can exist for unreportable stimuli (C2 without C1). Automatic typing provides a good example: Subjects slow down after a typing mistake, even when they fail to consciously notice the error. Similarly, at the neural level, an ERN can occur for subjectively undetected errors. On the other side of this dissociation, consciously reportable contents sometimes fail to be accompanied with an adequate sense of confidence (C1 without C2). For instance, when we retrieve a memory, it pops into consciousness (C1) but sometimes without any accurate evaluation of its confidence (C2), leading to false memories.
Synergies between C1 and C2 consciousness
Because C1 and C2 are orthogonal, their joint possession may have synergistic benefits to organisms. In one direction, bringing probabilistic metacognitive information (C2) into the global workspace (C1) allows it to be held over time, integrated into explicit long-term reflection, and shared with others...In the converse direction, the possession of an explicit repertoire of one’s own abilities (C2) improves the efficiency with which C1 information is processed. During mental arithmetic, children can perform a C2-level evaluation of their available competences (for example, counting, adding, multiplying, or memory retrieval) and use this information to evaluate how to best face a given arithmetic problem.
Endowing machines with C1 and C2
[Note: I am not abstracting this section as I did the above descriptions of consciousness. It describes numerous approaches rising above the level of most present day machines to making machines able to perform C1 and C2 operations.]
Most present-day machine-learning systems are devoid of any self-monitoring; they compute (C0) without representing the extent and limits of their knowledge or the fact that others may have a different viewpoint than their own. There are a few exceptions: Bayesian networks or programs compute with probability distributions and therefore keep track of how likely they are to be correct. Even when the primary computation is performed by a classical CNN, and is therefore opaque to introspection, it is possible to train a second, hierarchically higher neural network to predict the first one’s performance.
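The last idea, training a second, hierarchically higher network to predict the first one's performance, can be illustrated with a toy stand-in: a noisy first-order classifier and a monitor that learns, per region of the input, how often the classifier is correct. Everything here (names, noise level, binning) is my own illustrative simplification, not the architectures the authors survey:

```python
import random

def primary_classifier(x):
    # a toy first-order "network": a noisy threshold on a scalar input
    return 1 if x + random.gauss(0, 0.3) > 0.5 else 0

def train_monitor(n_samples=5000, n_bins=10, seed=1):
    """Second-order monitor (C2-like): estimate, for each input region,
    how often the primary classifier is correct. This crude tally stands
    in for training a higher network on the first one's performance."""
    random.seed(seed)
    hits = [0] * n_bins
    counts = [0] * n_bins
    for _ in range(n_samples):
        x = random.random()
        true_label = 1 if x > 0.5 else 0
        correct = primary_classifier(x) == true_label
        b = min(int(x * n_bins), n_bins - 1)
        counts[b] += 1
        hits[b] += correct
    return [h / c if c else 0.0 for h, c in zip(hits, counts)]

confidence = train_monitor()
# confidence is high far from the decision boundary (x near 0 or 1)
# and low near x = 0.5, where the primary classifier is unreliable
```

The monitor ends up representing "the extent and limits" of the first system's knowledge, which is exactly the meta-knowledge the authors say most current networks lack.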
Concluding remarks
Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.
We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?
Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. For example, in humans, damage to the primary visual cortex may lead to a neurological condition called “blindsight,” in which the patients report being blind in the affected visual field. Remarkably, those patients can localize visual stimuli in their blind field but cannot report them (C1), nor can they effectively assess their likelihood of success (C2)—they believe that they are merely “guessing.” In this example, at least, subjective experience appears to cohere with possession of C1 and C2. Although centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions, the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.



Friday, July 13, 2018

Playing with proteins in virtual reality.

Much of my mental effort while I was doing laboratory research on the mechanisms of visual transduction (changing light into a nerve signal in our retinal rod and cone photoreceptor cells) was devoted to trying to visualize how proteins might interact with each other. I spent many hours using molecular model kits of color-coded plastic atoms one could plug together with flexible joints, like the Tinkertoys of my childhood. If only I had had the system now described by O'Connor et al.! Have a look at the video below showing the manipulation of molecular dynamics in a VR environment, and here is their abstract:
We describe a framework for interactive molecular dynamics in a multiuser virtual reality (VR) environment, combining rigorous cloud-mounted atomistic physics simulations with commodity VR hardware, which we have made accessible to readers (see isci.itch.io/nsb-imd). It allows users to visualize and sample, with atomic-level precision, the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment. A series of controlled studies, in which participants were tasked with a range of molecular manipulation goals (threading methane through a nanotube, changing helical screw sense, and tying a protein knot), quantitatively demonstrate that users within the interactive VR environment can complete sophisticated molecular modeling tasks more quickly than they can using conventional interfaces, especially for molecular pathways and structural transitions whose conformational choreographies are intrinsically three-dimensional. This framework should accelerate progress in nanoscale molecular engineering areas including conformational mapping, drug development, synthetic biology, and catalyst design. More broadly, our findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education.

Sampling molecular conformational dynamics in virtual reality from david glowacki on Vimeo.




Thursday, July 12, 2018

Increasing despair among poor Americans.

A survey by Goldman et al. of more than 4,600 American adults conducted in 1995-1996 and in 2011-2014 suggests that among individuals of low socioeconomic status, negative affect increased significantly between the two survey waves, and life satisfaction and psychological well-being decreased:

Significance
In the past few years, references to the opioid epidemic, drug poisonings, and associated feelings of despair among Americans, primarily working-class whites, have flooded the media, and related patterns of mortality have been of increasing interest to social scientists. Yet, despite recurring references to distress or despair in journalistic accounts and academic studies, there has been little analysis of whether psychological health among American adults has worsened over the past two decades. Here, we use data from national samples of adults in the mid-1990s and early 2010s to demonstrate increasing distress and declining well-being that was concentrated among low-socioeconomic-status individuals but spanned the age range from young to older adults.
Abstract
Although there is little dispute about the impact of the US opioid epidemic on recent mortality, there is less consensus about whether trends reflect increasing despair among American adults. The issue is complicated by the absence of established scales or definitions of despair as well as a paucity of studies examining changes in psychological health, especially well-being, since the 1990s. We contribute evidence using two cross-sectional waves of the Midlife in the United States (MIDUS) study to assess changes in measures of psychological distress and well-being. These measures capture negative emotions such as sadness, hopelessness, and worthlessness, and positive emotions such as happiness, fulfillment, and life satisfaction. Most of the measures reveal increasing distress and decreasing well-being across the age span for those of low relative socioeconomic position, in contrast to little decline or modest improvement for persons of high relative position.

Wednesday, July 11, 2018

A fundamental advance in brain imaging techniques.

I want to pass on the abstract, along with a bit of text and two figures, from an article by Coalson, Van Essen, and Glasser that argues for a fundamental change in how functional cortical areas of the brain are recorded and reported. They demonstrate that surface-based parcellation is threefold more accurate than traditional volume-based parcellations:

Significance
Most human brain-imaging studies have traditionally used low-resolution images, inaccurate methods of cross-subject alignment, and extensive blurring. Recently, a high-resolution approach with more accurate alignment and minimized blurring was used by the Human Connectome Project to generate a multimodal map of human cortical areas in hundreds of individuals. Starting from these data, we systematically compared these two approaches, showing that the traditional approach is nearly three times worse than the Human Connectome Project’s improved approach in two objective measures of spatial localization of cortical areas. Furthermore, we demonstrate considerable challenges in comparing data across the two approaches and, as a result, argue that there is an urgent need for the field to adopt more accurate methods of data acquisition and analysis.
Abstract
Localizing human brain functions is a long-standing goal in systems neuroscience. Toward this goal, neuroimaging studies have traditionally used volume-based smoothing, registered data to volume-based standard spaces, and reported results relative to volume-based parcellations. A novel 360-area surface-based cortical parcellation was recently generated using multimodal data from the Human Connectome Project, and a volume-based version of this parcellation has frequently been requested for use with traditional volume-based analyses. However, given the major methodological differences between traditional volumetric and Human Connectome Project-style processing, the utility and interpretability of such an altered parcellation must first be established. By starting from automatically generated individual-subject parcellations and processing them with different methodological approaches, we show that traditional processing steps, especially volume-based smoothing and registration, substantially degrade cortical area localization compared with surface-based approaches. We also show that surface-based registration using features closely tied to cortical areas, rather than to folding patterns alone, improves the alignment of areas, and that the benefits of high-resolution acquisitions are largely unexploited by traditional volume-based methods. Quantitatively, we show that the most common version of the traditional approach has spatial localization that is only 35% as good as the best surface-based method as assessed using two objective measures (peak areal probabilities and “captured area fraction” for maximum probability maps). Finally, we demonstrate that substantial challenges exist when attempting to accurately represent volume-based group analysis results on the surface, which has important implications for the interpretability of studies, both past and future, that use these volume-based methods.
Some context from their introduction:
The recently reported HCP-MMP1.0 multimodal cortical parcellation (https://balsa.wustl.edu/study/RVVG, see the graphic below) contains 180 distinct areas per hemisphere and was generated from hundreds of healthy young adult subjects from the Human Connectome Project (HCP) using data precisely aligned with the HCP’s surface-based neuroimaging analysis approach. Each cortical area is defined by multiple features, such as those representing architecture, function, connectivity, or topographic maps of visual space. This multimodal parcellation has generated widespread interest, with many investigators asking how to relate its cortical areas to data processed using the traditional neuroimaging approach. Because volume-registered analysis of the cortex in MNI space is still widely used, this has often translated into concrete requests, such as: “Please provide the HCP-MMP1.0 parcellation in standard MNI volume space.” Here, we investigate quantitatively the drawbacks of traditional volume-based analyses and document that much of the HCP-MMP1.0 parcellation cannot be faithfully represented when mapped to a traditional volume-based atlas.
Here is a graphic from the parcellation analysis


And here is Figure 1 and its explanation from the Coalson et al. paper.


The figure shows probabilistic maps of five exemplar areas spanning a range of peak probabilities. Each area is shown as localized by areal-feature–based surface registration (Lower, Center), and as localized by volume-based methods. One area (3b in Fig. 1) has a peak probability of 0.92 in the volume (orange, red), whereas the other four have volumetric peak probabilities in the range of 0.35–0.7 (blue, yellow). Notably, the peak probabilities of these five areas are all higher on the surface (Figure, Lower, Center) (range 0.90–1) than in the volume, indicating that MSMAll nonlinear surface-based registration provides substantially better functional alignment across subjects than does nonlinear volume-based registration.

Tuesday, July 10, 2018

Mindfulness training increases strength of right insula connections.

Sharp et al. (open source) suggest that:
The endeavor to understand how mindfulness works will likely be advanced by using recently developed tools and theory within the nascent field of brain connectomics. The connectomic framework conceives of the brain’s functional and structural architecture as a complex, dynamic network. This network view of brain function partly arose from the lack of support for highly selective, modular regions instantiating specialized functions. That is, large meta-analyses consisting mostly of univariate fMRI analyses disconfirm that, for example, the amygdala is exclusively selective for fear processing. Indeed, more fruitful mechanistic knowledge of how neural systems function may emerge from delineating how different regions communicate functionally across a range of environments, and by identifying the underlying structural connections that constrain such functional dynamics.
Their abstract, and a summary figure:
Although mindfulness meditation is known to provide a wealth of psychological benefits, the neural mechanisms involved in these effects remain to be well characterized. A central question is whether the observed benefits of mindfulness training derive from specific changes in the structural brain connectome that do not result from alternative forms of experimental intervention. Measures of whole-brain and node-level structural connectome changes induced by mindfulness training were compared with those induced by cognitive and physical fitness training within a large, multi-group intervention protocol (n = 86). Whole-brain analyses examined global graph-theoretical changes in structural network topology. A hypothesis-driven approach was taken to investigate connectivity changes within the insula, which was predicted here to mediate interoceptive awareness skills that have been shown to improve through mindfulness training. No global changes were observed in whole-brain network topology. However, node-level results confirmed a priori hypotheses, demonstrating significant increases in mean connection strength in right insula across all of its connections. Present findings suggest that mindfulness strengthens interoception, operationalized here as the mean insula connection strength within the overall connectome. This finding further elucidates the neural mechanisms of mindfulness meditation and motivates new perspectives about the unique benefits of mindfulness training compared to contemporary cognitive and physical fitness interventions.
Legend - Anatomical representation of tractography pathways between right insula and highly connected regions. Connections displayed (only corticocortical, here) comprised the top 80% connection strengths across all insula pathways. (A) Displays pre-training connections in right insula, which showed the greatest structural reorganization across mindfulness training. (B) Represents the same image as in (A) except at post-training.
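The study's node-level measure, the mean connection strength of a node across all of its connections, is a standard quantity in the graph view of the connectome. Here is a minimal sketch on an invented toy connectivity matrix (the weights and the "insula" label are illustrative only):

```python
import numpy as np

def node_strength(connectome, node):
    """Mean connection strength of one node across all of its
    connections, given a weighted symmetric connectivity matrix
    with a zero diagonal."""
    row = connectome[node]
    others = np.delete(row, node)  # exclude the self-connection
    return others.mean()

# toy 4-region connectome; region 2 plays the role of "right insula"
W = np.array([
    [0.0, 0.2, 0.6, 0.1],
    [0.2, 0.0, 0.5, 0.3],
    [0.6, 0.5, 0.0, 0.4],
    [0.1, 0.3, 0.4, 0.0],
])
print(node_strength(W, 2))  # (0.6 + 0.5 + 0.4) / 3 = 0.5
```

An increase in this number after training, with whole-network topology unchanged, is the shape of the result the authors report for the right insula.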

Monday, July 09, 2018

Mortality rates level off at extreme age

Interesting work from Barbi et al. showing that human death rates increase exponentially up to about age 80, then decelerate, and plateau after age 105. At that point, the odds of someone dying from one birthday to the next are roughly 50:50. This implies that there might be no natural limit to how long humans can live, contrary to the view of most demographers and biologists:
Theories about biological limits to life span and evolutionary shaping of human longevity depend on facts about mortality at extreme ages, but these facts have remained a matter of debate. Do hazard curves typically level out into high plateaus eventually, as seen in other species, or do exponential increases persist? In this study, we estimated hazard rates from data on all inhabitants of Italy aged 105 and older between 2009 and 2015 (born 1896–1910), a total of 3836 documented cases. We observed level hazard curves, which were essentially constant beyond age 105. Our estimates are free from artifacts of aggregation that limited earlier studies and provide the best evidence to date for the existence of extreme-age mortality plateaus in humans.
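As a back-of-the-envelope illustration of what a mortality plateau means: a hazard rate that stays constant at ln 2 per year translates into roughly 50:50 odds of dying between one birthday and the next. The numbers below are illustrative, not the paper's estimates:

```python
import math

def annual_death_prob(hazard):
    """Probability of dying within one year under a constant hazard
    rate, from the survival function S(t) = exp(-hazard * t)."""
    return 1 - math.exp(-hazard)

# a flat (plateaued) hazard of ln 2 per year gives even odds each year,
# in contrast to the exponential (Gompertz-like) rise seen up to ~age 80
plateau_hazard = math.log(2)
p = annual_death_prob(plateau_hazard)
print(round(p, 3))  # 0.5 — each birthday becomes a coin flip
```

Under the exponential regime the hazard keeps climbing with age, so a plateau, however grim its level, is what leaves room for no fixed upper limit on life span.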

Friday, July 06, 2018

Brain imaging predicts who will be a good musical performer.

Fascinating observations from Zatorre's group:

Significance
In sophisticated auditory–motor learning such as musical instrument learning, little is understood about how brain plasticity develops over time and how the related individual variability is reflected in the neural architecture. In a longitudinal fMRI training study on cello learning, we reveal the integrative function of the dorsal cortical stream in auditory–motor information processing, which comes online quickly during learning. Additionally, our data show that better performers optimize the recruitment of regions involved in auditory encoding and motor control and reveal the critical role of the pre-supplementary motor area and its interaction with auditory areas as predictors of musical proficiency. The present study provides unprecedented understanding of the neural substrates of individual learning variability and therefore has implications for pedagogy and rehabilitation.
Abstract
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio–motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio–motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory–motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio–motor learning.

Thursday, July 05, 2018

Why are religious people trusted more?

A prevailing view is that religious behavior facilitates trust, primarily toward coreligionists, and particularly when it is diagnostic of belief in moralizing deities. Moon et al. suggest a further reason that religious people are viewed as more trustworthy than nonreligious people: they follow 'slow life-history' strategies that tend to be sexually restricted, invested in family, nonimpulsive, and nonaggressive, all traits associated with cooperativeness and prosociality. They find that direct information about life-history reproductive strategy (i.e., a subject's “dating preferences”) tends to override the effects of religious information. Their abstract:
Religious people are more trusted than nonreligious people. Although most theorists attribute these perceptions to the beliefs of religious targets, religious individuals also differ in behavioral ways that might cue trust. We examined whether perceivers might trust religious targets more because they heuristically associate religion with slow life-history strategies. In three experiments, we found that religious targets are viewed as slow life-history strategists and that these findings are not the result of a universally positive halo effect; that the effect of target religion on trust is significantly mediated by the target’s life-history traits (i.e., perceived reproductive strategy); and that when perceivers have direct information about a target’s reproductive strategy, their ratings of trust are driven primarily by his or her reproductive strategy, rather than religion. These effects operate over and above targets’ belief in moralizing gods and offer a novel theoretical perspective on religion and trust.

Wednesday, July 04, 2018

Seven creepy Facebook patents

I'll follow yesterday's post with yet another post on creepy high tech patents, this time from Facebook, showing their ongoing intention to invade our privacy as much as possible. From the article by Chinoy:

Reading your relationships
One patent application discusses predicting whether you’re in a romantic relationship using information such as how many times you visit another user’s page, the number of people in your profile picture and the percentage of your friends of a different gender.
Classifying your personality
Another proposes using your posts and messages to infer personality traits. It describes judging your degree of extroversion, openness or emotional stability, then using those characteristics to select which news stories or ads to display.
Predicting your future
This patent application describes using your posts and messages, in addition to your credit card transactions and location, to predict when a major life event, such as a birth, death or graduation, is likely to occur.
Identifying your camera
Another considers analyzing pictures to create a unique camera “signature” using faulty pixels or lens scratches. That signature could be used to figure out that you know someone who uploads pictures taken on your device, even if you weren’t previously connected. Or it might be used to guess the “affinity” between you and a friend based on how frequently you use the same camera.
Listening to your environment
This patent application explores using your phone microphone to identify the television shows you watched and whether ads were muted. It also proposes using the electrical interference pattern created by your television power cable to guess which show is playing.
Tracking your routine
Another patent application discusses tracking your weekly routine and sending notifications to other users of deviations from the routine. In addition, it describes using your phone’s location in the middle of the night to establish where you live.
Inferring your habits
This patent proposes correlating the location of your phone to locations of your friends’ phones to deduce whom you socialize with most often. It also proposes monitoring when your phone is stationary to track how many hours you sleep.

Tuesday, July 03, 2018

A patent for emotional robots?

I recently received an interesting email from Deepak Gupta, of a patent research services company, who, having seen my post on robots and emotion, thought to point me to a description of a patent application filed by Samsung. It seems so straightforward that I would have thought such a patent would have been proposed years ago. It is an attempt to get past the "uncanny valley" issue, the feelings of eeriness and revulsion people can have in observing or interacting with robots. (It still seems a bit scary to me.)
...an electronic robot includes a motor mounted inside its head. The head part of the robot consists of a display unit to display emotional expressions, and a motor control unit that can rotate the robot’s head clockwise or counter-clockwise on its axis.
The electronic robot decides which emotional expression to be displayed on a display panel by sensing information from its various sensors, such as a camera sensor, a pressure sensor, a geomagnetic sensor, and a microphone to sense the motion of a user. Accordingly, the electronic robot tracks the major feature points of the face such as eyes, nose, mouth, and eyebrows from user images captured by the camera sensor and recognizes user emotional information, based on basic facial expressions conveying happiness, surprise, anger, sadness, and sorrow. Once the information is received from the sensors, the data is then analyzed by the processor that sends an appropriate voltage signal to the display unit and the motor control unit in order to express relevant emotion. The emotional states expressed by the electronic robot include anger, disgust, sadness, interest, happiness, impassiveness, surprise, agreement (i.e., “yes”), and rejection (i.e., “no”).
To express emotional states, the electronic robot stores motion information of the head predefined for each emotional state in a storage unit. The head motion information includes head motion type, angle, speed (or rotational force), and direction.
The technology disclosed in the patent document allows robots to express emotions. Therefore, these robots can communicate or interact with humans more effectively. They can be used in applications that require interaction with humans, for instance, communicating with patients in a hospital in the absence of actual staff, or interacting with pets such as dogs or cats left alone while their owners are at work.
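The storage scheme the patent describes (predefined head-motion type, angle, speed, and direction per emotional state) amounts to a simple lookup table. Here is a minimal sketch of that idea; all field names and parameter values below are my own illustrations, not taken from the patent document:

```python
# Hypothetical sketch of the patent's storage scheme: each emotional state
# maps to predefined head-motion parameters (type, angle, speed, direction).
# All names and values here are illustrative, not from the Samsung filing.

HEAD_MOTIONS = {
    "agreement": {"type": "nod",   "angle": 20, "speed": 30, "direction": "vertical"},
    "rejection": {"type": "shake", "angle": 30, "speed": 45, "direction": "horizontal"},
    "sadness":   {"type": "droop", "angle": 15, "speed": 10, "direction": "downward"},
    "surprise":  {"type": "jerk",  "angle": 10, "speed": 60, "direction": "upward"},
}

def motion_for(emotion):
    """Return the stored head-motion parameters for a recognized emotional state."""
    default = {"type": "none", "angle": 0, "speed": 0, "direction": "none"}
    return HEAD_MOTIONS.get(emotion, default)

# The processor would pass these parameters as voltage signals to the motor
# control unit, e.g.:
print(motion_for("agreement")["type"])  # a nod for "yes"
```

The point of such a table is that the expressive repertoire is fixed and auditable: adding a new emotional state means adding one entry, not retraining a model.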

Monday, July 02, 2018

An optimistic outlook creates a rosy past.

From Devitt and Schacter in the Harvard Psychology department, a study recruiting the usual gaggle of psychology undergraduate students as subjects:
People frequently engage in future thinking in everyday life, but it is unknown how simulating an event in advance changes how that event is remembered once it takes place. To initiate study of this important topic, we conducted two experiments in which participants simulated emotional events before learning the hypothetical outcome of each event via narratives. Memory was assessed for emotional details contained in those narratives. Positive simulation resulted in a liberal response bias for positive information and a conservative bias for negative information. Events preceded by positive simulation were considered more favorably in retrospect. In contrast, negative simulation had no impact on subsequent memory. Results were similar across an immediate and delayed memory test and for past and future simulation. These results provide novel insights into the cognitive consequences of episodic future simulation and build on the optimism-bias literature by showing that adopting a favorable outlook results in a rosy memory.

Friday, June 29, 2018

Stress and autoimmune disease

I am now in my 77th year, and not all that pleased that my inflammatory, immune, and stress systems are becoming more excitable, more likely to be activated by small environmental perturbations that would have gone unnoticed 10 or 20 years ago. Increases in sympathetic nervous system activity and inflammatory processes with aging have been documented in many studies, and there is a literature linking stress with immune system dysfunction at all ages. This post points to yet another study of the linkage of stress with immune system activity. Song et al. report a massive study of over one million people in a Swedish cohort over a thirty-year period that shows a strong association between stress disorders and increased risk of developing autoimmune diseases such as arthritis and Crohn's disease. This affirms the strong link between psychological stress and physical inflammatory conditions. Here is a truncated and edited statement of results from the JAMA paper:
The median age at diagnosis of stress-related disorders was 41 years, and 40% of the exposed (i.e. suffering from stress-related disorder) patients were male. During a mean follow-up of 10 years, the incidence rate of autoimmune diseases was 9.1, 6.0, and 6.5 per 1000 person-years among the exposed, matched unexposed, and sibling cohorts, respectively. Compared with the unexposed population, patients with stress-related disorders were at increased risk of autoimmune disease... Persistent use of selective serotonin reuptake inhibitors during the first year of posttraumatic stress disorder diagnosis was associated with attenuated relative risk of autoimmune disease.
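As a quick sanity check on the quoted figures, the crude (unadjusted) incidence-rate ratios implied by those per-1000-person-year rates can be computed directly. Note these are only rough back-of-envelope ratios; the paper's reported hazard ratios are adjusted for confounders:

```python
# Crude incidence-rate ratios from the quoted rates (events per 1000
# person-years). Unadjusted, so only a rough check on the paper's claim
# of elevated autoimmune risk among the exposed (stress-disorder) cohort.

def rate_ratio(exposed_rate, reference_rate):
    return exposed_rate / reference_rate

exposed, unexposed, siblings = 9.1, 6.0, 6.5

print(round(rate_ratio(exposed, unexposed), 2))  # vs matched unexposed: 1.52
print(round(rate_ratio(exposed, siblings), 2))   # vs sibling cohort: 1.4
```

Even the within-family comparison (against unaffected siblings, which controls for shared genetics and upbringing) leaves roughly a 40% excess rate.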

Thursday, June 28, 2018

Social media and democracy

This post passes on links to articles on social media discussed recently in the Univ. of Wisconsin Chaos and Complex Systems seminar that I attend. (Two previous MindBlog posts have noted Jaron Lanier's critique of social media, "Ten arguments for deleting your social media accounts right now," which coins an acronym, BUMMER, for what Lanier considers one of its core evils. BUMMER is “Behaviors of Users Modified, and Made Into an Empire for Rent.”)  I want to point now to Franklin Foer's review of Lanier's book (Foer claims he actually deleted his Facebook account after reading the book. I haven't managed to do that.)

As an antidote to the current pessimism about the social effects of social media, Ethan Zuckerman offers an essay "Six or seven things social media can do for democracy." He takes his title from an article by Schudson, “Six or Seven Things News Can Do for Democracy,” that anchored his book, "Why Democracies Need an Unloveable Press." Schudson's list of functions for journalism or news is to inform, investigate, analyze, be a public forum, be a tool for social empathy, be a force for mobilization, and promote representative democracy.

Zuckerman makes a similar list for the functions of social media, which I condense here:
1. Social media can inform us (Arab Spring, Ferguson, Missouri) but also misinform us (fake news, the pizza-parlor prostitution ring).
2. Social media can amplify important voices and issues (which might either strengthen or weaken democracy).
3. Social media can be a tool for connection and solidarity (of either good or evil groups, pro- or anti-democratic).
4. Social media can be a space for mobilization (Tahrir Square, Taksim Gezi Park, Charlottesville, pro- or anti-democratic).
5. Social media can be a space for deliberation and debate (yet also disappointing, mean, petty, superficial, uncivil). There needs to be more effort to build civil platforms.
6. Social media can show us diversity of views and perspectives (however, bubbles, homophily, racism, and ethnonationalism can inhibit this). Conscious intervention is needed to change this.
7. Social media can be a model for democratically governed spaces. Social platforms should seek to be based on the consent of their communities (example: Reddit, with ~1,000 volunteer moderators, rules, and policing of rule breakers).

One of Zuckerman's concluding paragraphs:
Finally, it’s also fair to note that there’s a dark side to every democratic function I’ve listed. The tools that allow marginalized people to report their news and influence media are the same ones that allow fake news to be injected into the media ecosystem. Amplification is a technique used by everyone from Black Lives Matter to neo-Nazis, as is mobilization, and the spaces for solidarity that allow Jen Brea to manage her disease allow “incels” to push each other towards violence. While I feel comfortable advocating for respectful dialog and diverse points of view, someone will see my advocacy as an attempt to push politically correct multiculturalism down their throat, or to silence the exclusive truth of their perspectives through dialog. The bad news is that making social media work better for democracy likely means making it work better for the Nazis as well. The good news is that there’s a lot more participatory democrats than there are Nazis.

Wednesday, June 27, 2018

The Philosophy we need: Personalism

To his credit, David Brooks keeps searching for moral visions that might be an antidote to our current social and political malaise. A recent Op-Ed piece is on Personalism, the:
...philosophic tendency built on the infinite uniqueness and depth of each person. Over the years people like Walt Whitman, Martin Luther King, William James, Peter Maurin and Wojtyla (who went on to become Pope John Paul II) have called themselves personalists, but the movement is still something of a philosophic nub.
...our culture does a pretty good job of ignoring the uniqueness and depth of each person. Pollsters see in terms of broad demographic groups. Big data counts people as if it were counting apples. At the extreme, evolutionary psychology reduces people to biological drives, capitalism reduces people to economic self-interest, modern Marxism to their class position and multiculturalism to their racial one. Consumerism treats people as mere selves — as shallow creatures concerned merely with the experience of pleasure and the acquisition of stuff.
..personalism is a middle way between authoritarian collectivism and radical individualism. The former subsumes the individual within the collective. The latter uses the group to serve the interests of the self.
Personalism demands that we change the way we structure our institutions. A company that treats people as units to simply maximize shareholder return is showing contempt for its own workers. Schools that treat students as brains on a stick are not preparing them to lead whole lives.
The big point is that today’s social fragmentation didn’t spring from shallow roots. It sprang from worldviews that amputated people from their own depths and divided them into simplistic, flattened identities. That has to change. As Charles Péguy said, “The revolution is moral or not at all.”

Tuesday, June 26, 2018

Mindfulness can lower your motivation.

Vohs and Hafenbrack do an NYTimes opinion piece to advertise their forthcoming paper on effects of meditation. Given my personal experience with 'mindfulness' I think they are right on track. Some clips:
The practical payoff of mindfulness is backed by dozens of studies linking it to job satisfaction, rational thinking and emotional resilience...But on the face of it, mindfulness might seem counterproductive in a workplace setting. A central technique of mindfulness meditation, after all, is to accept things as they are. Yet companies want their employees to be motivated. And the very notion of motivation — striving to obtain a more desirable future — implies some degree of discontentment with the present, which seems at odds with a psychological exercise that instills equanimity and a sense of calm.
To test this hunch, we recently conducted five studies, involving hundreds of people, to see whether there was a tension between mindfulness and motivation...Among those who had meditated, motivation levels were lower on average. Those people didn’t feel as much like working on the assignments, nor did they want to spend as much time or effort to complete them. Meditation was correlated with reduced thoughts about the future and greater feelings of calm and serenity — states seemingly not conducive to wanting to tackle a work project...previous studies have found that meditation increases mental focus, suggesting that those in our studies who performed the mindfulness exercise should have performed better on the tasks. Their lower levels of motivation, however, seemed to cancel out that benefit...Mindfulness is perhaps akin to a mental nap. Napping, too, is associated with feeling calm, refreshed and less harried. Then again, who wakes up from a nap eager to organize some files?
Mindfulness might be unhelpful for dealing with difficult assignments at work, but it may be exactly what is called for in other contexts. There is no denying that mindfulness can be beneficial, bringing about calm and acceptance. Once you’ve reached a peak level of acceptance, however, you’re not going to be motivated to work harder.

Monday, June 25, 2018

Deep origins of consciousness

I want to recommend an engaging book by Peter Godfrey-Smith that I have read recently, "Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness." Below are a few clips pointing to some of his core points:
Sentience is brought into being somehow from the evolution of sensing and acting; it involves being a living system with a point of view on the world around it. If we take that approach, though, a perplexity we run into immediately is the fact that those capacities are so widespread— they are found far outside the organisms that are usually thought to have experience of some kind. Even bacteria sense the world and act... A case can be made that responses to stimuli, and the controlled flow of chemicals across boundaries, are an elementary part of life itself. Unless we conclude that all living things have a modicum of subjective experience— a view I don’t regard as insane, but surely one that would need a lot of defense— there must be something about the way animals deal with the world that makes a crucial difference.
The senses can do their basic work, and actions can be produced, with all this happening “in silence” as far as the organism’s experience is concerned. Then, at some stage in evolution, extra capacities appear that do give rise to subjective experience: the sensory streams are brought together, an “internal model” of the world arises, and there’s a recognition of time and self.
What we’ve learned over the last thirty years or so is that there’s a particular style of processing— one that we use to deal especially with time, sequences, and novelty— that brings with it conscious awareness, while a lot of other quite complex activities do not.
Baars suggested that we are conscious of the information that has been brought into a centralized “workspace” in the brain. Dehaene adopted and developed this view. A related family of theories claim that we are conscious of whatever information is being fed into working memory. These views don’t hold that the lights went on in a sudden flash, but they do hold that the “waking up” came late in the history of life and was due to features that are clearly seen only in animals like us.
...certainly there’s an alternative to consider. I’ll call this the transformation view. It holds that a form of subjective experience preceded late-arising things like working memory, workspaces, the integration of the senses, and so on. These complexities, when they came along, transformed what it feels like to be an animal. Experience has been reshaped by these features, but it was not brought into being by them. The best argument I can offer for this alternative view is based on the role in our lives of what seem like old forms of subjective experience that appear as intrusions into more organized and complex mental processes. Consider the intrusion of sudden pain, or of what the physiologist Derek Denton calls the primordial emotions— feelings which register important bodily states and deficiencies, such as thirst or the feeling of not having enough air. As Denton says, these feelings have an “imperious” role when they are present: they press themselves into experience and can’t easily be ignored. Do you think that those things (pain, shortness of breath, etc.) only feel like something because of sophisticated cognitive processing in mammals that has arisen late in evolution? I doubt it. Instead, it seems plausible that an animal might feel pain or thirst without having an “inner model” of the world, or sophisticated forms of memory.
Subjective experience does not arise from the mere running of the system, but from the modulation of its state, from registering things that matter. These need not be external events; they might arise internally. But they are tracked because they matter and require a response. Sentience has some point to it. It’s not just a bathing in living activity.
By the Cambrian, the vertebrates were already on their own path (or their own collection of paths), while arthropods and mollusks were on others. Suppose it’s right that crabs, octopuses, and cats all have subjective experience of some kind. Then there were at least three separate origins for this trait, and perhaps many more than three. Later, as the machinery described by Dehaene, Baars, Milner, and Goodale comes on line, an integrated perspective on the world arises and a more definite sense of self. We then reach something closer to consciousness. I don’t see that as a single definite step. Instead, I see “consciousness” as a mixed-up and overused but useful term for forms of subjective experience that are unified and coherent in various ways. Here, too, it is likely that experience of this kind arose several times on different evolutionary paths: from white noise, through old and simple forms of experience, to consciousness.

Friday, June 22, 2018

Brain EEG activity can predict who antidepressants will work on

Pizzagalli et al. have used electroencephalography (EEG) to look at the activation level of the rostral anterior cingulate cortex (ACC) in depressed patients. In an experiment involving 300 patients at four different sites (with randomized controls), they find that higher activity in this area correlates with a higher probability that treatment with the SSRI (selective serotonin reuptake inhibitor) sertraline hydrochloride will be effective after an eight-week course of treatment. The prediction of drug efficacy made with this approach is more reliable than current guesses based on other clinical and demographic characteristics that also correlate with treatment outcomes.

Thursday, June 21, 2018

Old Italian violins imitate the human voice.

A fascinating study from Tai et al. (open source):

Significance
Amati and Stradivari violins are highly appreciated by musicians and collectors, but the objective understanding of their acoustic qualities is still lacking. By applying speech analysis techniques, we found early Italian violins to emulate the vocal tract resonances of male singers, comparable to basses or baritones. Stradivari pushed these resonance peaks higher to resemble the shorter vocal tract lengths of tenors or altos. Stradivari violins also exhibit vowel qualities that correspond to lower tongue height and backness. These properties may explain the characteristic brilliance of Stradivari violins. The ideal for violin tone in the Baroque era was to imitate the human voice, and we found that Cremonese violins are capable of producing the formant features of human singers.
Abstract
The shape and design of the modern violin are largely influenced by two makers from Cremona, Italy: The instrument was invented by Andrea Amati and then improved by Antonio Stradivari. Although the construction methods of Amati and Stradivari have been carefully examined, the underlying acoustic qualities which contribute to their popularity are little understood. According to Geminiani, a Baroque violinist, the ideal violin tone should “rival the most perfect human voice.” To investigate whether Amati and Stradivari violins produce voice-like features, we recorded the scales of 15 antique Italian violins as well as male and female singers. The frequency response curves are similar between the Andrea Amati violin and human singers, up to ∼4.2 kHz. By linear predictive coding analyses, the first two formants of the Amati exhibit vowel-like qualities (F1/F2 = 503/1,583 Hz), mapping to the central region on the vowel diagram. Its third and fourth formants (F3/F4 = 2,602/3,731 Hz) resemble those produced by male singers. Using F1 to F4 values to estimate the corresponding vocal tract length, we observed that antique Italian violins generally resemble basses/baritones, but Stradivari violins are closer to tenors/altos. Furthermore, the vowel qualities of Stradivari violins show reduced backness and height. The unique formant properties displayed by Stradivari violins may represent the acoustic correlate of their distinctive brilliance perceived by musicians. Our data demonstrate that the pioneering designs of Cremonese violins exhibit voice-like qualities in their acoustic output.
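The abstract's step from formant frequencies (F1 to F4) to an "estimated vocal tract length" can be sketched with the textbook uniform-tube model, in which a tube closed at one end resonates at F_n = (2n − 1)c / 4L. This is only an approximation of the authors' LPC-based analysis, but applying it to the Amati formant values quoted above lands squarely in the adult-male vocal tract range (roughly 16–18 cm), consistent with the basses/baritones comparison:

```python
# Rough vocal-tract-length estimate from formants via the uniform-tube
# (quarter-wavelength) model: F_n = (2n - 1) * c / (4 * L), so
# L = (2n - 1) * c / (4 * F_n). A textbook approximation, not the paper's
# exact method; the formant values are the Amati F1-F4 from the abstract.

C = 35000  # approximate speed of sound in warm, moist air, in cm/s

def vocal_tract_length(formants_hz):
    """Average the tube length implied by each formant."""
    lengths = [(2 * n - 1) * C / (4 * f)
               for n, f in enumerate(formants_hz, start=1)]
    return sum(lengths) / len(lengths)

amati = [503, 1583, 2602, 3731]  # F1-F4 reported for the Andrea Amati violin
print(round(vocal_tract_length(amati), 1))  # ≈ 16.8 cm
```

Stradivari violins, with their higher formants, would map to a shorter effective "tract" under the same model, matching the paper's tenor/alto characterization.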

Wednesday, June 20, 2018

Families and politics - curtailed conversations

Chen and Rohla find that Thanksgiving dinners in which the hosts and guests lived in oppositely voting precincts were up to 50 minutes shorter than same-party-precinct dinners. It seems likely that avoiding talking about contentious subjects led guests to simply talk less.
Research on growing American political polarization and antipathy primarily studies public institutions and political processes, ignoring private effects, including strained family ties. Using anonymized smartphone-location data and precinct-level voting, we show that Thanksgiving dinners attended by residents from opposing-party precincts were 30 to 50 minutes shorter than same-party dinners. This decline from a mean of 257 minutes survives extensive spatial and demographic controls. Reductions in the duration of Thanksgiving dinner in 2016 tripled for travelers from media markets with heavy political advertising—an effect not observed in 2015—implying a relationship to election-related behavior. Effects appear asymmetric: Although fewer Democratic-precinct residents traveled in 2016 than in 2015, Republican-precinct residents shortened their Thanksgiving dinners by more minutes in response to political differences. Nationwide, 34 million hours of cross-partisan Thanksgiving dinner discourse were lost in 2016 owing to partisan effects.

Tuesday, June 19, 2018

Tipping points in changing social conventions.

How can we explain the rapid rise of the Nazis in 1930s Germany or the rapid acceptance of gay marriage in the United States? Centola et al. approach this question by doing both modeling and experiments illustrating how a motivated minority can change a social convention. They
...study a system of coordination in which a minority group of actors attempt to disrupt an established equilibrium behavior. In both our theoretical framework and the empirical setting, we adopt the canonical approach of using coordination on a naming convention as a general model for conventional behavior. Our experimental approach is designed to test a broad range of theoretical predictions derived from the existing literature on critical mass dynamics in social conventions.
Here is the Science Magazine summary followed by the abstract:
Once a population has converged on a consensus, how can a group with a minority viewpoint overturn it? Theoretical models have emphasized tipping points, whereby a sufficiently large minority can change the societal norm. Centola et al. devised a system to study this in controlled experiments. Groups of people who had achieved a consensus about the name of a person shown in a picture were individually exposed to a confederate who promoted a different name. The only incentive was to coordinate. When the number of confederates was roughly 25% of the group, the opinion of the majority could be tipped to that of the minority.
Abstract
Theoretical models of critical mass have shown how minority groups can initiate social change dynamics in the emergence of new social conventions. Here, we study an artificial system of social conventions in which human subjects interact to establish a new coordination equilibrium. The findings provide direct empirical demonstration of the existence of a tipping point in the dynamics of changing social conventions. When minority groups reached the critical mass—that is, the critical group size for initiating social change—they were consistently able to overturn the established behavior. The size of the required critical mass is expected to vary based on theoretically identifiable features of a social setting. Our results show that the theoretically predicted dynamics of critical mass do in fact emerge as expected within an empirical system of social coordination.
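The tipping dynamic the abstract describes can be illustrated with a toy agent-based simulation. This is my own drastic simplification (a memory-based naming game with a committed minority), not Centola et al.'s experimental design or model: everyone starts coordinated on one name, a committed fraction always produces the alternative, and uncommitted agents adopt whichever name has dominated their recent interactions. Above the critical mass, the minority name takes over; well below it, the old convention survives.

```python
import random

# Toy naming-game sketch of committed-minority dynamics (my simplification,
# not Centola et al.'s design). All agents start coordinated on name "A";
# a committed fraction always says "B"; uncommitted agents adopt the
# majority name among the last few they have heard.

def simulate(n=100, committed_frac=0.30, rounds=20000, seed=1):
    random.seed(seed)
    n_committed = int(n * committed_frac)
    names = ["B"] * n_committed + ["A"] * (n - n_committed)
    memory = [[] for _ in range(n)]  # recent names each agent has heard
    for _ in range(rounds):
        speaker, listener = random.sample(range(n), 2)
        word = names[speaker]
        if listener >= n_committed:  # committed agents never change
            memory[listener] = (memory[listener] + [word])[-5:]
            b, a = memory[listener].count("B"), memory[listener].count("A")
            if b > a:
                names[listener] = "B"
            elif a > b:
                names[listener] = "A"
    return names.count("B") / n  # final share of the minority name

print(simulate())  # with a 30% committed minority, "B" typically takes over
print(simulate(committed_frac=0.05))  # well below critical mass, "A" persists
```

The sharp difference between the two runs is the qualitative point: the transition is not gradual in the committed fraction, which is what "tipping point" means here. The study's empirically observed threshold was roughly 25% of the group; the exact critical value in any model depends on the memory and interaction rules.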

Monday, June 18, 2018

The misunderstood sixth mass extinction

Paul Ehrlich, 50 years after the publication of "The Population Bomb," together with Gerardo Ceballos, has written a concise rebuttal of scientists (and the politicians who listen to them) who suggest that the current anthropogenic mass extinction will not have dire consequences:
Scientific misunderstanding about the nature and consequences of the sixth mass extinction has led to confusion among policy-makers and the public. Scientists agree that there have been five mass extinctions in the past 600 million years (1). Although scientists also agree that Earth is now suffering the sixth mass extinction, they disagree about its consequences. Mass extinctions are defined as the loss of the majority of species in a relatively short geological time, caused by a catastrophic natural event (2). Some scientists argue that there is no reason for concern about the sixth mass extinction because extinction is normal, simply an inevitable consequence of the process of evolution (3, 4). This misunderstanding ignores some critical issues. First, the rate of species extinction is now as much as 100 times that of the “normal rate” throughout geological time (5, 6). Second, like the past mass extinctions, the current episode is not an inevitable consequence of the process of evolution. Rather, it is the result of a rare event changing the environment so quickly that many organisms cannot evolve in response to it.
In theory, evolution on Earth could proceed as long as conditions permitted with no mass extinction events. That has been the case for vast stretches of geological time between occasional encounters with unusual environmental circumstances. Extinctions did occur, but not suddenly and nearly universally, as is happening now (7, 8). The rate and extent of current extinctions is similar to those of past mass extinctions, not the intervals between them (9, 10). If past mass extinctions are any guide to the rate at which usual evolutionary diversification processes could restore a reasonable level of biodiversity and ecosystem services, the wait is likely to be millions, or even tens of millions of years (8, 9).
At the time of the past mass extinctions, there was no industrialized human population of almost 8 billion people utterly dependent on the ecosystem services biodiversity helps provide, such as pollination, pest control, and climate amelioration (7, 8, 11). Scientists who deny that the current mass extinction has dire consequences, and policy-makers who listen to them, fail to appreciate the penalties human civilization will suffer for continuing on society's business-as-usual course (2–5). Moreover, beyond the consequences to humans, exterminating most of the only known living things with which we share the universe is clearly wrong (5–8, 12). The future of life on Earth, and human well-being, depends on the actions that we take to reduce the extinction of populations and species in the next two decades (8). It is irresponsible and unethical not to act despite the overwhelming scientific evidence indicating the severity of the current mass extinction event.
References (see Google Scholar for all)
1. W. J. Ripple et al., Bioscience 67, 197 (2017). 2. A. Hallam, P. B. Wignall, Mass Extinctions and Their Aftermath (Oxford University Press, UK, 1997). 3. S. Brand, “Rethinking extinction” (2015); https://aeon.co/essays/we-are-not-edging-up-to-a-mass-extinction. 4. C. D. Thomas, Inheritors of the Earth (Hachette, UK, 2017). 5. S. L. Pimm et al., Science 344, 1246752 (2014). 6. G. Ceballos et al., Sci. Adv. 1, e1400253 (2015). 7. R. Dirzo et al., Science 345, 401 (2014). 8. G. Ceballos et al., The Annihilation of Nature: Human Extinction of Birds and Mammals (JHU Press, 2015). 9. D. Jablonski, Evol. Biol. 44, 451 (2017). 10. A. D. Barnosky et al., Nature 471, 51 (2011). 11. C. A. Hallmann et al., PLOS One 12, e0185809 (2017). 12. P. R. Ehrlich, A. H. Ehrlich, Proc. R. Soc. B 280, 20122845 (2013).

Friday, June 15, 2018

Comparing human prefrontal cortex to that of other primates

From Donahue et al.:

Significance
A longstanding controversy in neuroscience pertains to differences in human prefrontal cortex (PFC) compared with other primate species; specifically, is human PFC disproportionately large? Distinctively human behavioral capacities related to higher cognition and affect presumably arose from evolutionary modifications since humans and great apes diverged from a common ancestor about 6–8 Mya. Accurate determination of regional differences in the amount of cortical gray and subcortical white matter content in humans, great apes, and Old World monkeys can further our understanding of the link between structure and function of the human brain. Using tissue volume analyses, we show a disproportionately large amount of gray and white matter corresponding to PFC in humans compared with nonhuman primates.
Abstract
Humans have the largest cerebral cortex among primates. The question of whether association cortex, particularly prefrontal cortex (PFC), is disproportionately larger in humans compared with nonhuman primates is controversial: Some studies report that human PFC is relatively larger, whereas others report a more uniform PFC scaling. We address this controversy using MRI-derived cortical surfaces of many individual humans, chimpanzees, and macaques. We present two parcellation-based PFC delineations based on cytoarchitecture and function and show that a previously used morphological surrogate (cortex anterior to the genu of the corpus callosum) substantially underestimates PFC extent, especially in humans. We find that the proportion of cortical gray matter occupied by PFC in humans is up to 1.9-fold greater than in macaques and 1.2-fold greater than in chimpanzees. The disparity is even more prominent for the proportion of subcortical white matter underlying the PFC, which is 2.4-fold greater in humans than in macaques and 1.7-fold greater than in chimpanzees.

Thursday, June 14, 2018

Evolutionary cognition - bees demonstrate an understanding of zero

Howard et al. demonstrate an astounding evolutionary convergence by showing that insects have developed the concept of zero, a capability once thought to be a unique major intellectual advance in humans. The last common ancestor of humans and the honeybees used in the experiments lived more than 600 million years ago. Their abstract:
Some vertebrates demonstrate complex numerosity concepts—including addition, sequential ordering of numbers, or even the concept of zero—but whether an insect can develop an understanding for such concepts remains unknown. We trained individual honey bees to the numerical concepts of “greater than” or “less than” using stimuli containing one to six elemental features. Bees could subsequently extrapolate the concept of less than to order zero numerosity at the lower end of the numerical continuum. Bees demonstrated an understanding that parallels animals such as the African grey parrot, nonhuman primates, and even preschool children.
...and a description of their experiment from a review by Nieder:
...the authors lured free-flying honey bees from maintained hives to their testing apparatus (see the figure) and marked the insects with color for identification. They rewarded the bees for discriminating displays on a screen that showed different numbers (numerosities) of items. The researchers controlled for systematic changes in the appearance of the numerosity displays that occur when the number of items is changed. They thus ensured that the bees were discriminating between different numbers, rather than responding to low-level visual cues.
First, the researchers trained the bees to rank two numerosity displays at a time. Over the course of training, they changed the numbers presented to encourage rule learning. Bees from one group were rewarded with a sugar solution whenever they flew to the display showing more items, thereby following a greater-than rule. The other group of bees was trained on the less-than rule and rewarded for landing at the display that presented fewer items. The bees learned to master this task with displays consisting of one to four items; they were able to do so not only for familiar numerosity displays but also for new displays.
Next, the researchers occasionally inserted displays containing no item. Would the bees understand that empty displays could be ranked with countable numerosities? Indeed, the bees obeying the less-than rule spontaneously landed on displays showing no item, that is, an empty set (see the figure). In doing so, bees understood that the empty set was numerically smaller than sets of one, two, or more items. Further experiments confirmed that this behavior was related to quantity estimation and not a product of the learning history.

Wednesday, June 13, 2018

Jaron Lanier on why you should delete your social media accounts.

I have read through Jaron Lanier's "Ten Arguments for Deleting Your Social Media Accounts Right Now." His critiques are important and compelling, and I want to pass on just a few clips of text that give you the gist of his arguments:

ARGUMENT ONE: YOU ARE LOSING YOUR FREE WILL
We’re being tracked and measured constantly, and receiving engineered feedback all the time. We’re being hypnotized little by little by technicians we can’t see, for purposes we don’t know. We’re all lab animals now...Now everyone who is on social media is getting individualized, continuously adjusted stimuli, without a break, so long as they use their smartphones. What might once have been called advertising must now be understood as continuous behavior modification on a titanic scale...This book argues in ten ways that what has become suddenly normal— pervasive surveillance and constant, subtle manipulation— is unethical, cruel, dangerous, and inhumane. Dangerous? Oh, yes, because who knows who’s going to use that power, and for what?
The core process that allows social media to make money and that also does the damage to society is behavior modification. Behavior modification entails methodical techniques that change behavioral patterns in animals and people. It can be used to treat addictions, but it can also be used to create them...Using symbols instead of real rewards has become an essential trick in the behavior modification toolbox. For instance, a smartphone game like Candy Crush uses shiny images of candy instead of real candy to become addictive...somewhat random or unpredictable feedback can be more engaging than perfect feedback.
The prime directive to be engaging reinforces itself, and no one even notices that negative emotions are being amplified more than positive ones. Engagement is not meant to serve any particular purpose other than its own enhancement, and yet the result is an unnatural global amplification of the “easy” emotions, which happen to be the negative ones.
Social media is biased, not to the Left or the Right, but downward. The relative ease of using negative emotions for the purposes of addiction and manipulation makes it relatively easier to achieve undignified results. An unfortunate combination of biology and math favors degradation of the human world. Information warfare units sway elections, hate groups recruit, and nihilists get amazing bang for the buck when they try to bring society down.
One of the main reasons to delete your social media accounts is that there isn’t a real choice to move to different social media accounts. Quitting entirely is the only option for change. If you don’t quit, you are not creating the space in which Silicon Valley can act to improve itself...the problem isn’t behavior modification in itself. The problem is relentless, robotic, ultimately meaningless behavior modification in the service of unseen manipulators and uncaring algorithms.
ARGUMENT TWO: QUITTING SOCIAL MEDIA IS THE MOST FINELY TARGETED WAY TO RESIST THE INSANITY OF OUR TIMES
I speak as a computer scientist, not as a social scientist or psychologist. From that perspective, I can see that time is running out. The world is changing rapidly under our command, so doing nothing is not an option. We don’t have as much in the way of rigorous science as would be ideal for understanding our situation, but we have enough results to describe the problem we must solve, just not a lot of time in which to solve it. Seems like a good moment to coin an acronym so I don’t have to repeat, over and over, the same account of the pieces that make up the problem. How about “Behaviors of Users Modified, and Made into an Empire for Rent”? BUMMER.
ARGUMENT EIGHT: SOCIAL MEDIA DOESN’T WANT YOU TO HAVE ECONOMIC DIGNITY
The corp perspective
One problem with the BUMMER model is that it’s like oil for a petrostate. A BUMMER-dependent company can diversify its activities— its cost centers— all it wants, but it can never diversify its profit centers, because it always has to prioritize free services in order to grab more data to run the manipulation services. Consumers are addicted, but so are the BUMMER empires.
BUMMER makes tech companies brittle and weirdly stagnant. Of the big five tech companies, only two depend on the BUMMER model. Apple, Amazon, and Microsoft all indulge in a little BUMMER, but they all do just fine without depending on BUMMER. The non-BUMMER big tech companies have successfully diversified. There are plenty of reasons you might want to criticize and change those three companies, but the amount of BUMMER they foster is not an existential threat to civilization.
The two tech giants that are hooked on BUMMER, Google and Facebook, are way hooked. They make the preponderance of their profits from BUMMER despite massive investments in trying to start up other types of businesses. No matter the scale, a company based on a single trick is vulnerable. Sooner or later some disruption will come along, and then a BUMMER company, no matter how large, will quickly collapse.
So why is it again, that BUMMER is such a great long-term strategy for tech companies? It isn’t. It trades the short term against the long term, just like a petrostate...Instead of trying to shut down BUMMER companies, we should ask them to innovate their business models, for their own good.
The user perspective
It might sound undesirable to someday have to pay for things that are currently free, but remember, you’d also be able to make money from those things. And paying for stuff sometimes really does make the world better for everyone. Techies who advocated a free/ open future used to argue that paying for movies or TV was a terrible thing, and that the culture of the future would be made of volunteerism, with the digital distribution funded by advertising, of course. This was practically a religious belief in Silicon Valley when the big BUMMER companies were founded. It was sacrilege to challenge it.
But then companies like Netflix and HBO convinced people to pay a monthly fee, and the result is what is often called “peak TV.” Why couldn’t there also be an era of paid “peak social media” and “peak search”? ...Watch the end credits on a movie on Netflix or HBO. It’s good discipline for lengthening your attention span! Look at all those names scrolling by. All those people who aren’t stars made their rent by working to bring you that show.
BUMMER only supports stars. If you are one of those rare, rare people who are making a decent living off BUMMER as an influencer, for instance, you have to understand that you are in a tiny club and you are vulnerable. Please make backup plans! I hate raining on dreams, but if you think you are about to make a living as an influencer or similar, the statistics are voraciously against you, no matter how deserving you are and no matter how many get-rich-quick stories you’ve been fed. The problem isn’t that there are only a few stars; that’s always true, by definition. The problem is that BUMMER economics allow for almost no remunerative roles for near-stars. In a genuine, deep economy, there are many roles. You might not become a pro football player, but you might get into management, sports media, or a world of other related professions. But there are vanishingly few economic roles adjacent to a star influencer. Have a backup plan.
When social media companies are paid directly by users instead of by hidden third parties, then they will serve those users. It’s so simple. Someone will be able to pay to see poisonous propaganda, but they won’t be able to pay to have that poison directed at someone else. The incentive for poisoning the world will be undone...I won’t have an account on Facebook, Google, or Twitter until I can pay for it— and I unambiguously own and set the price for using my data, and it’s easy and normal to earn money if my data is valuable. I might have to wait a while, but it’ll be worth it. 
ARGUMENT TEN: SOCIAL MEDIA HATES YOUR SOUL
It’s almost impossible to write about the deepest spiritual or philosophical topics, because people are on such hair triggers about them, but it would be a cop-out to avoid declaring a statement of beliefs regarding the basic questions that BUMMER is trying to dominate… I am conscious. I have faith that you are also conscious. We each experience. It’s a marvel… Acknowledging that experience exists might make us kinder, since we understand people to be more than machines. We might be a little more likely to think before hurting someone if we believe there’s a whole other center of experience cloaked in that person, a whole universe, a soul.
The BUMMER business is interwoven with a new religion that grants empathy to computer programs— calling them AI programs— as a way to avoid noticing that it is degrading the dignity, stature, and rights of real humans … The BUMMER experience is that you’re just one lowly cell in the great superorganism of the BUMMER platform. We talk to our BUMMER-connected gadgets kind of as if they’re people, and the “conversation” works better if we talk in a way that makes us kind of like machines. When you live as if there’s nothing special, no mystical spark inside you, you gradually start to believe it.
The issues that are tearing the United States apart are all about whether people are special, about where the soul might be found, if it is there at all. Is abortion acceptable? Will people become obsolete, so that everyone but a few elite techies will have to be supported by a charitable basic income scheme? Should we treat all humans as being equally worthy, or are some humans more deserving of self-determination because they are good at nerdy tasks? These questions might all look different at first, but on closer inspection they are all versions of the same question: What is a person? Whatever a person might be, if you want to be one, delete your accounts.
(above clips taken from Lanier, Jaron. Ten Arguments for Deleting Your Social Media Accounts Right Now. Henry Holt and Co., Kindle Edition.)

Tuesday, June 12, 2018

What makes a man attractive?

In many species, female mate preference is influenced by signs indicating the health and robustness of the potential male partner. Rehm points to work by Versluys et al. examining the effects of male arm-to-body and intra-limb ratios on the preferences of heterosexual US women. The researchers showed the women computer-generated male images based on the average body proportions of 9,000 US military men, with the arms and legs made slightly longer or shorter. Previous research had shown that women prefer men with legs that are about half their height (legs that are too short have been linked to type 2 diabetes). How long a model's arms were relative to his height didn't seem to matter, and women cared only a little about how the elbow or knee divided a limb. But, as seen in previous work, women noticed if the legs made up more or less than half his height, and they didn't like it.


Monday, June 11, 2018

Socioeconomic status moderates age-related differences in brain organization throughout adulthood.

Work by Chan et al. (open access article) suggests that higher socioeconomic status (SES) may be a protective factor against age-related brain decline.

Significance
An individual’s socioeconomic status (SES) is a central feature of their environmental surroundings and has been shown to relate to the development and maturation of their brain in childhood. Here, we demonstrate that an individual’s present (adult) SES relates to their brain function and anatomy across a broad range of middle-age adulthood. In middle-aged adults (35–64 years), lower SES individuals exhibit less organized functional brain networks and reduced cortical thickness compared with higher SES individuals. These relationships cannot be fully explained by differences in health, demographics, or cognition. Additionally, childhood SES does not explain the relation between SES and brain network organization. These observations provide support for a powerful relationship between the environment and the brain that is evident in adult middle age.
Abstract
An individual’s environmental surroundings interact with the development and maturation of their brain. An important aspect of an individual’s environment is his or her socioeconomic status (SES), which estimates access to material resources and social prestige. Previous characterizations of the relation between SES and the brain have primarily focused on earlier or later epochs of the lifespan (i.e., childhood, older age). We broaden this work to examine the relationship between SES and the brain across a wide range of human adulthood (20–89 years), including individuals from the less studied middle-age range. SES, defined by education attainment and occupational socioeconomic characteristics, moderates previously reported age-related differences in the brain’s functional network organization and whole-brain cortical structure. Across middle age (35–64 years), lower SES is associated with reduced resting-state system segregation (a measure of effective functional network organization). A similar but less robust relationship exists between SES and age with respect to brain anatomy: Lower SES is associated with reduced cortical gray matter thickness in middle age. Conversely, younger and older adulthood do not exhibit consistent SES-related difference in the brain measures. The SES–brain relationships persist after controlling for measures of physical and mental health, cognitive ability, and participant demographics. Critically, an individual’s childhood SES cannot account for the relationship between their current SES and functional network organization. These findings provide evidence that SES relates to the brain’s functional network organization and anatomy across adult middle age, and that higher SES may be a protective factor against age-related brain decline.

Figure - (click to enlarge) Lower SES adults exhibit reduced segregation of their resting-state functional brain networks and lower mean cortical thickness in middle-age adulthood. For each age group, brain system segregation (A) and mean cortical thickness (B) are plotted for higher and lower SES (stratified using a median split across the entire participant sample; error bars depict standard error of the mean). Higher SES is associated with greater brain system segregation and mean cortical thickness in middle-age groups (ME, 35–49 y; ML, 50–64 y). Primary statistical models were completed using general linear modeling, where SES was modeled continuously.

Friday, June 08, 2018

Move into your virtual body avatar with just your hands and feet

Steph Yin points to work by Kitazaki's group at Toyohashi University of Technology in Japan, who try...
...to figure out the minimal amount of body we need to feel a sense of self, especially in digital environments where more and more people may find themselves for work or play...they show that animating virtual hands and feet alone is enough to make people feel their sense of body drift toward an invisible avatar...Using an Oculus Rift virtual reality headset and a motion sensor, Dr. Kitazaki’s team performed a series of experiments in which volunteers watched disembodied hands and feet move two meters in front of them in a virtual room. In one experiment, when the hands and feet mirrored the participants’ own movements, people reported feeling as if the space between the appendages were their own bodies.
The article abstract:
Body ownership can be modulated through illusory visual-tactile integration or visual-motor synchronicity/contingency. Recently, it has been reported that illusory ownership of an invisible body can be induced by illusory visual-tactile integration from a first-person view. We aimed to test whether a similar illusory ownership of the invisible body could be induced by the active method of visual-motor synchronicity and if the illusory invisible body could be experienced in front of and facing away from the observer. Participants observed left and right white gloves and socks in front of them, at a distance of 2 m, in a virtual room through a head-mounted display. The white gloves and socks were synchronized with the observers’ actions. In the experiments, we tested the effect of synchronization, and compared this to a whole-body avatar, measuring self-localization drift. We observed that visual hands and feet were sufficient to induce illusory body ownership, and this effect was as strong as using a whole-body avatar.

Here is their video, "Your body is transparentized in a virtual environment."

Thursday, June 07, 2018

Social contagion of ethnic hostility.

From Bauer et al.:

Significance
We provide experimental evidence on peer effects and show that behavior that harms members of a different ethnic group is twice as contagious as behavior that harms coethnics. The findings may help to explain why ethnic hostilities can spread quickly (even in societies with few visible signs of interethnic hatred) and why many countries have adopted hate crime laws, and illustrate the need to study not only the existence of discrimination, but also the stability of attitudes and behaviors toward outgroup members.
Abstract
Interethnic conflicts often escalate rapidly. Why does the behavior of masses easily change from cooperation to aggression? This paper provides an experimental test of whether ethnic hostility is contagious. Using incentivized tasks, we measured willingness to sacrifice one’s own resources to harm others among adolescents from a region with a history of animosities toward the Roma people, the largest ethnic minority in Europe. To identify the influence of peers, subjects made choices after observing either destructive or peaceful behavior of peers in the same task. We found that susceptibility to follow destructive behavior more than doubled when harm was targeted against Roma rather than against coethnics. When peers were peaceful, subjects did not discriminate. We observed very similar patterns in a norms-elicitation experiment: destructive behavior toward Roma was not generally rated as more socially appropriate than when directed at coethnics, but the ratings were more sensitive to social contexts. The findings may illuminate why ethnic hostilities can spread quickly, even in societies with few visible signs of interethnic hatred.

Wednesday, June 06, 2018

Placebo treatment facilitates social trust.

Nasal sprays containing oxytocin have been shown to facilitate pro-social behaviors. Yan et al. have now shown that this effect can be obtained using nasal sprays containing only saline, if subjects are told the spray contains oxytocin and are educated on its expected pro-social effects:
Placebo effect refers to beneficial changes induced by the use of inert treatment, such as placebo-induced relief of physical pain and attenuation of negative affect. To date, we know little about whether placebo treatment could facilitate social functioning, a crucial aspect for well-being of a social species. In the present study, we develop and validate a paradigm to induce placebo effects on social trust and approach behavior (social placebo effect), and show robust evidence that placebo treatment promotes trust in others and increases preference for a closer interpersonal distance. We further examine placebo effects in real-life social interaction and show that placebo treatment makes single, but not pair-bonded, males keep closer to an attractive first-met female and perceive less social anxiety in the female. Finally, we show evidence that the effects of placebo treatment on social trust and approach behavior can be as strong as the effect of intranasal administration of oxytocin, a neuropeptide known for its function in facilitating social cognition and behavior. The finding of the social placebo effect extends our understanding of placebo effects on improvement of physical, mental, and social well-being and suggests clinical potentials in the treatment of social dysfunction.

Tuesday, June 05, 2018

The End of Humanism - Homo Deus

I want to pass on a useful précis of two books by Yuval Harari prepared by my colleague Terry Allard for a meeting of the Chaos and Complexity Seminar at the University of Wisconsin, Madison (where I am an emeritus professor and still maintain an office during the summer months I spend in Madison, away from Austin, TX). Here is his summary of Harari's "Sapiens: A Brief History of Humankind" and "Homo Deus: A Brief History of Tomorrow."
In these two volumes, historian Yuval Harari reviews the successive transformations of humanity and human civilizations, from small bands of hunter-gatherers, through the agrarian and industrial revolutions, to today's scientific revolution, while reflecting on what it means to be human. Our collective belief in abstract stories like money, corporations, nations, and religions enables human cooperation on a large scale and differentiates us from all other animals. Today's discussion will focus on a possible transition from the humanist values of individual freedoms and "free will" to a disturbing dystopian future where individualism is devalued and people are managed by artificially intelligent systems. This transition is enabled by reductions in famine, plague, and war, which have historically motivated human behavior. Further advances in biotechnology, psychology, and computer science could produce a superhuman elite having the resources and opportunity to benefit directly from technological enhancements while leaving the majority of humankind behind.
Allard's suggested discussion questions:
1. Does technology, social stratification and empire enhance the human experience? Are we happier than hunter-gatherers?
2. What is humanism?
3. Are people really just the sum of their biological algorithms?
4. When will we trust artificial intelligence? Is AI the inevitable next evolutionary step?
5. What do we (humans) really want the future to be? What are our transcendent values?
Harari quotes from an interview in The Guardian (19 March 2017):

Humanity’s biggest myth? “gaining more power over the world, over the environment, we will be able to make ourselves happier and more satisfied with life. Looking again from a perspective of thousands of years, we have gained enormous power over the world and it doesn’t seem to make people significantly more satisfied than in the stone age.”
On Morality: “we are very close to really having divine powers of creation and destruction. The future of the entire ecological system and the future of the whole of life is really now in our hands. And what to do with it is an ethical question and also a scientific question.”
On Inequality: “With the new revolution in artificial intelligence and biotechnology, there is a danger that again all the power and benefits will be monopolised by a very small elite, and most people will end up worse off than before.”
On timing: “I think that Homo sapiens as we know them will probably disappear within a century or so, not destroyed by killer robots or things like that, but changed and upgraded with biotechnology and artificial intelligence into something else, into something different.”

Monday, June 04, 2018

More on the sociopathy of social media

I want to pass on to MindBlog readers some material from Michael Kaplan, who recently pointed me to his YouTube channel OneHandClap, and in particular his video "How Facebook Makes You Depressed." It notes a December 2016 Facebook article published on their official blog, "Hard Questions: Is Spending Time on Social Media Bad for Us?," and summarizes an assortment of recent research that links social media use to depression.  It also explores how social media sites like Facebook make use of addictive neurochemical mechanisms.



If you really want the details on how we are being screwed by social media, particularly Google and Facebook, read Jaron Lanier's "Ten Arguments for Deleting Your Social Media Accounts Right Now." I downloaded the Kindle version several days ago and am finding it incredibly sobering reading, given that the platform for mindblog.dericbownds.net is provided by Google (Blogger), posts like this one are automatically sent on from Blogger to my Facebook and Twitter feeds, my piano performances are on Google's YouTube, my email, my calendar, etc., etc. A few clips from Lanier:
We’re being tracked and measured constantly, and receiving engineered feedback all the time. We’re being hypnotized little by little by technicians we can’t see, for purposes we don’t know. We’re all lab animals now.
Now everyone who is on social media is getting individualized, continuously adjusted stimuli, without a break, so long as they use their smartphones. What might once have been called advertising must now be understood as continuous behavior modification on a titanic scale.
This book argues in ten ways that what has become suddenly normal— pervasive surveillance and constant, subtle manipulation— is unethical, cruel, dangerous, and inhumane. Dangerous? Oh, yes, because who knows who’s going to use that power, and for what?
The core process that allows social media to make money and that also does the damage to society is behavior modification. Behavior modification entails methodical techniques that change behavioral patterns in animals and people. It can be used to treat addictions, but it can also be used to create them.
(Lanier, Jaron. Ten Arguments for Deleting Your Social Media Accounts Right Now. Henry Holt and Co. Kindle Edition.)
Finally, check out "Hands off my data! 15 default privacy settings you should change right now"

Friday, June 01, 2018

How much should A.I. frighten us?

Continuing the artificial intelligence topic of yesterday's post, I want to pass on the concluding paragraphs of a fascinating New Yorker article by Tad Friend. Friend suggests that thinking about artificial intelligence can help clarify what makes us human, for better and for worse. He points to several recent books about the presumed inevitability of our developing an artificial general intelligence (A.G.I.) that far exceeds our current human capabilities. His final paragraphs:
The real risk of an A.G.I.... may stem not from malice, or emergent self-consciousness, but simply from autonomy. Intelligence entails control, and an A.G.I. will be the apex cogitator. From this perspective, an A.G.I., however well intentioned, would likely behave in a way as destructive to us as any Bond villain. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” Bostrom writes in his 2014 book, “Superintelligence,” a closely reasoned, cumulatively terrifying examination of all the ways in which we’re unprepared to make our masters. A recursive, self-improving A.G.I. won’t be smart like Einstein but “smart in the sense that an average human being is smart compared with a beetle or a worm.” How the machines take dominion is just a detail: Bostrom suggests that “at a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe.” That sounds screenplay-ready—but, ever the killjoy, he notes, “In particular, the AI does not adopt a plan so stupid that even we present-day humans can foresee how it would inevitably fail. This criterion rules out many science fiction scenarios that end in human triumph.”
If we can’t control an A.G.I., can we at least load it with beneficent values and insure that it retains them once it begins to modify itself? Max Tegmark observes that a woke A.G.I. may well find the goal of protecting us “as banal or misguided as we find compulsive reproduction.” He lays out twelve potential “AI Aftermath Scenarios,” including “Libertarian Utopia,” “Zookeeper,” “1984,” and “Self-Destruction.” Even the nominally preferable outcomes seem worse than the status quo. In “Benevolent Dictator,” the A.G.I. “uses quite a subtle and complex definition of human flourishing, and has turned Earth into a highly enriched zoo environment that’s really fun for humans to live in. As a result, most people find their lives highly fulfilling and meaningful.” And more or less indistinguishable from highly immersive video games or a simulation.
Trying to stay optimistic, by his lights—bear in mind that Tegmark is a physicist—he points out that an A.G.I. could explore and comprehend the universe at a level we can’t even imagine. He therefore encourages us to view ourselves as mere packets of information that A.I.s could beam to other galaxies as a colonizing force. “This could be done either rather low-tech by simply transmitting the two gigabytes of information needed to specify a person’s DNA and then incubating a baby to be raised by the AI, or the AI could nanoassemble quarks and electrons into full-grown people who would have all the memories scanned from their originals back on Earth.” Easy peasy. He notes that this colonization scenario should make us highly suspicious of any blueprints an alien species beams at us. It’s less clear why we ought to fear alien blueprints from another galaxy, yet embrace the ones we’re about to bequeath to our descendants (if any).
A.G.I. may be a recurrent evolutionary cul-de-sac that explains Fermi’s paradox: while conditions for intelligent life likely exist on billions of planets in our galaxy alone, we don’t see any. Tegmark concludes that “it appears that we humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us.” Therefore, “to program a friendly AI, we need to capture the meaning of life.” Uh-huh.
In the meantime, we need a Plan B. Bostrom’s starts with an effort to slow the race to create an A.G.I. in order to allow more time for precautionary trouble-shooting. Astoundingly, however, he advises that, once the A.G.I. arrives, we give it the utmost possible deference. Not only should we listen to the machine; we should ask it to figure out what we want. The misalignment-of-goals problem would seem to make that extremely risky, but Bostrom believes that trying to negotiate the terms of our surrender is better than the alternative, which is relying on ourselves, “foolish, ignorant, and narrow-minded that we are.” Tegmark also concludes that we should inch toward an A.G.I. It’s the only way to extend meaning in the universe that gave life to us: “Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty.” We are the analog prelude to the digital main event.
So the plan, after we create our own god, would be to bow to it and hope it doesn’t require a blood sacrifice. An autonomous-car engineer named Anthony Levandowski has set out to start a religion in Silicon Valley, called Way of the Future, that proposes to do just that. After “The Transition,” the church’s believers will venerate “a Godhead based on Artificial Intelligence.” Worship of the intelligence that will control us, Levandowski told a Wired reporter, is the only path to salvation; we should use such wits as we have to choose the manner of our submission. “Do you want to be a pet or livestock?” he asked. I’m thinking, I’m thinking . . . ♦