Monday, July 16, 2018

What is consciousness, and could machines have it?

I want to point to a lucid article by Dehaene, Lau, and Kouider that gives the clearest review I have seen of our current state of knowledge on the nature of human consciousness, which we need to define if we wish to consider the question of machines being conscious like us. Here is the abstract, followed by a few edited clips that attempt to communicate the core points (motivated readers can obtain the whole text by emailing me):
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
C1: Global availability
This corresponds to the transitive meaning of consciousness (as in “The driver is conscious of the light”)... We can recall it, act upon it, and speak about it. This sense is synonymous with “having the information in mind”; among the vast repertoire of thoughts that can become conscious at a given time, only that which is globally available constitutes the content of C1 consciousness.
C2: Self-monitoring
Another meaning of consciousness is reflexive. It refers to a self-referential relationship in which the cognitive system is able to monitor its own processing and obtain information about itself... This sense of consciousness corresponds to what is commonly called introspection, or what psychologists call “meta-cognition”—the ability to conceive and make use of internal representations of one’s own knowledge and abilities.
C0: Unconscious processing: Where most of our intelligence lies
...many computations involve neither C1 nor C2 and therefore are properly called “unconscious” ...Cognitive neuroscience confirms that complex computations such as face or speech recognition, chess-game evaluation, sentence parsing, and meaning extraction occur unconsciously in the human brain—under conditions that yield neither global reportability nor self-monitoring. The brain appears to operate, in part, as a juxtaposition of specialized processors or “modules” that operate nonconsciously and, we argue, correspond tightly to the operation of current feedforward deep-learning networks.
The phenomenon of priming illustrates the remarkable depth of unconscious processing...Subliminal digits, words, faces, or objects can be invariantly recognized and influence motor, semantic, and decision levels of processing. Neuroimaging methods reveal that the vast majority of brain areas can be activated nonconsciously...Subliminal priming generalizes across visual-auditory modalities...Even the semantic meaning of sensory input can be processed without awareness by the human brain.
...subliminal primes can influence prefrontal mechanisms of cognitive control involved in the selection of a task...Neural mechanisms of decision-making involve accumulating sensory evidence that affects the probability of the various choices until a threshold is attained. This accumulation of probabilistic knowledge continues to happen even with subliminal stimuli. Bayesian inference and evidence accumulation, which are cornerstone computations for AI, are basic unconscious mechanisms for humans.
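The evidence accumulation described here is often modeled as a drift-diffusion process. As a rough sketch (my own illustration, not from the paper), noisy samples of evidence are summed until a decision bound is reached; even a weak "subliminal" drift biases the eventual choice:

```python
import random

def accumulate_evidence(drift, threshold=10.0, noise=1.0, max_steps=10_000):
    """Drift-diffusion sketch: noisy evidence accumulates until a bound is hit.
    Returns (choice: +1 or -1, number of steps taken)."""
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        x += drift + random.gauss(0.0, noise)  # each sample nudges the running tally
        t += 1
    return (1 if x > 0 else -1), t

random.seed(0)
# A weak positive drift (analogous to a subliminal stimulus) still biases
# choices toward +1, even though each individual sample is dominated by noise.
choices = [accumulate_evidence(drift=0.2)[0] for _ in range(200)]
print(sum(c == 1 for c in choices) / len(choices))  # well above chance (0.5)
```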
Reinforcement learning algorithms, which capture how humans and animals shape their future actions on the basis of history of past rewards, have excelled in attaining supra-human AI performance in several applications, such as playing Go. Remarkably, in humans, such learning appears to proceed even when the cues, reward, or motivation signals are presented below the consciousness threshold.
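The core of such reward-based learning can be illustrated with the delta rule (a Rescorla-Wagner-style update, sketched here as a minimal illustration rather than the models the authors cite): a value estimate moves toward each observed reward in proportion to the prediction error.

```python
def rescorla_wagner(rewards, alpha=0.1):
    """Delta-rule value learning: v moves toward each reward by a fraction
    alpha of the prediction error (reward - current estimate)."""
    v = 0.0
    history = []
    for r in rewards:
        v += alpha * (r - v)  # prediction-error update
        history.append(v)
    return history

vals = rescorla_wagner([1.0] * 50)
print(round(vals[-1], 3))  # estimate converges toward the true reward value 1.0
```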
What additional computations are required for conscious processing?

C1: Global availability of relevant information
The need for integration and coordination. Integrating all of the available evidence to converge toward a single decision is a computational requirement that, we contend, must be faced by any animal or autonomous AI system and corresponds to our first functional definition of consciousness: global availability (C1)...Such decision-making requires a sophisticated architecture for (i) efficiently pooling over all available sources of information, including multisensory and memory cues; (ii) considering the available options and selecting the best one on the basis of this large information pool; (iii) sticking to this choice over time; and (iv) coordinating all internal and external processes toward the achievement of that goal.
Consciousness as access to an internal global workspace. We hypothesize that...On top of a deep hierarchy of specialized modules, a “global neuronal workspace,” with limited capacity, evolved to select a piece of information, hold it over time, and share it across modules. We call “conscious” whichever representation, at a given time, wins the competition for access to this mental arena and gets selected for global sharing and decision-making.
Relation between consciousness and attention. William James described attention as “the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought”. This definition is close to what we mean by C1: the selection of a single piece of information for entry into the global workspace. There is, however, a clear-cut distinction between this final step, which corresponds to conscious access, and the previous stages of attentional selection, which can operate unconsciously...What we call attention is a hierarchical system of sieves that operate unconsciously. Such unconscious systems compute with probability distributions, but only a single sample, drawn from this probabilistic distribution, becomes conscious at a given time. We may become aware of several alternative interpretations, but only by sampling their unconscious distributions over time.
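The claim that unconscious systems hold probability distributions while consciousness receives only one sample at a time can be sketched as follows (a toy illustration of the idea, using an ambiguous duck/rabbit figure as a hypothetical stimulus):

```python
import random

def conscious_sample(interpretations, weights):
    """Unconscious processing maintains a distribution over interpretations;
    only a single sample at a time 'wins' access to the workspace."""
    return random.choices(interpretations, weights=weights, k=1)[0]

random.seed(1)
# Repeatedly sampling an ambiguous stimulus over time: each moment yields
# exactly one interpretation, never a blend of both.
percepts = [conscious_sample(["duck", "rabbit"], [0.5, 0.5]) for _ in range(10)]
print(percepts)
```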
Evidence for all-or-none selection in a capacity-limited system. The primate brain comprises a conscious bottleneck and can only consciously access a single item at a time. For instance, rivaling pictures or ambiguous words are perceived in an all-or-none manner; at any given time, we subjectively perceive only a single interpretation out of many possible ones [even though the others continue to be processed unconsciously]...Brain imaging in humans and neuronal recordings in monkeys indicate that the conscious bottleneck is implemented by a network of neurons that is distributed through the cortex, but with a stronger emphasis on high-level associative areas. ... Single-cell recordings indicate that each specific conscious percept, such as a person’s face, is encoded by the all-or-none firing of a subset of neurons in high-level temporal and prefrontal cortices, whereas others remain silent...the stable, reproducible representation of high-quality information by a distributed activity pattern in higher cortical areas is a feature of conscious processing. Such transient “meta-stability” seems to be necessary for the nervous system to integrate information from a variety of modules and then broadcast it back to them, achieving flexible cross-module routing.
C1 consciousness in human and nonhuman animals. C1 consciousness is an elementary property that is present in human infants as well as in animals. Nonhuman primates exhibit similar visual illusions, attentional blink, and central capacity limits as human subjects.
C2: Self-monitoring
Whereas C1 consciousness reflects the capacity to access external information, consciousness in the second sense (C2) is characterized by the ability to reflexively represent oneself ("metacognition").
A probabilistic sense of confidence. Confidence can be assessed nonverbally, either retrospectively, by measuring whether humans persist in their initial choice, or prospectively, by allowing them to opt out from a task without even attempting it. Both measures have been used in nonhuman animals to show that they too possess metacognitive abilities. By contrast, most current neural networks lack them: Although they can learn, they generally lack meta-knowledge of the reliability and limits of what has been learned...Magnetic resonance imaging (MRI) studies in humans and physiological recordings in primates and even in rats have specifically linked such confidence processing to the prefrontal cortex. Inactivation of the prefrontal cortex can induce a specific deficit in second-order (metacognitive) judgements while sparing performance on the first-order task. Thus, circuits in the prefrontal cortex may have evolved to monitor the performance of other brain processes.
Error detection: Reflecting on one’s own mistakes ...just after responding, we sometimes realize that we made an error and change our mind. Error detection is reflected by two components of electroencephalography (EEG) activity: the error-related negativity (ERN) and the positivity upon error (Pe), which emerge in cingulate and medial prefrontal cortex just after a wrong response but before any feedback is received...A possibility compatible with the remarkable speed of error detection is that two parallel circuits, a low-level sensory-motor circuit and a higher-level intention circuit, operate on the same sensory data and signal an error whenever their conclusions diverge. Self-monitoring is such a basic ability that it is already present during infancy. The ERN, indicating error monitoring, is observed when 1-year-old infants make a wrong choice in a perceptual decision task.
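The two-circuit hypothesis has a simple computational shape, which I sketch below (the policies and stimulus format are hypothetical, purely for illustration): both circuits process the same input, and a mismatch between their outputs is itself the error signal, requiring no external feedback.

```python
def detect_error(stimulus, fast_policy, deliberate_policy):
    """Two parallel circuits process the same input; divergence between the
    fast sensory-motor response and the slower intended response flags an
    error before any feedback arrives."""
    return fast_policy(stimulus) != deliberate_policy(stimulus)

# Hypothetical example: a speeded response that ignores a "reversed" task rule.
fast = lambda s: s["target"]
deliberate = lambda s: s["target"] if not s["reversed"] else -s["target"]

print(detect_error({"target": 1, "reversed": True}, fast, deliberate))   # error flagged
print(detect_error({"target": 1, "reversed": False}, fast, deliberate))  # no error
```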
Meta-memory. The term “meta-memory” was coined to capture the fact that humans report feelings of knowing, confidence, and doubts on their memories. ...Meta-memory is associated with prefrontal structures whose pharmacological inactivation leads to a metacognitive impairment while sparing memory performance itself. Metamemory is crucial to human learning and education by allowing learners to develop strategies such as increasing the amount of study or adapting the time allocated to memory encoding and rehearsal.
Reality monitoring. In addition to monitoring the quality of sensory and memory representations, the human brain must also distinguish self-generated versus externally driven representations: we can perceive things, but we also conjure them from imagination or memory...Neuroimaging studies have linked this kind of reality monitoring to the anterior prefrontal cortex.
Dissociations between C1 and C2
According to our analysis, C1 and C2 are largely orthogonal and complementary dimensions of what we call consciousness. On one side of this double dissociation, self-monitoring can exist for unreportable stimuli (C2 without C1). Automatic typing provides a good example: Subjects slow down after a typing mistake, even when they fail to consciously notice the error. Similarly, at the neural level, an ERN can occur for subjectively undetected errors. On the other side of this dissociation, consciously reportable contents sometimes fail to be accompanied with an adequate sense of confidence (C1 without C2). For instance, when we retrieve a memory, it pops into consciousness (C1) but sometimes without any accurate evaluation of its confidence (C2), leading to false memories.
Synergies between C1 and C2 consciousness
Because C1 and C2 are orthogonal, their joint possession may have synergistic benefits to organisms. In one direction, bringing probabilistic metacognitive information (C2) into the global workspace (C1) allows it to be held over time, integrated into explicit long-term reflection, and shared with others...In the converse direction, the possession of an explicit repertoire of one’s own abilities (C2) improves the efficiency with which C1 information is processed. During mental arithmetic, children can perform a C2-level evaluation of their available competences (for example, counting, adding, multiplying, or memory retrieval) and use this information to evaluate how to best face a given arithmetic problem.
Endowing machines with C1 and C2
[Note: I am not abstracting this section as I did the above descriptions of consciousness. It describes numerous approaches rising above the level of most present day machines to making machines able to perform C1 and C2 operations.]
Most present-day machine-learning systems are devoid of any self-monitoring; they compute (C0) without representing the extent and limits of their knowledge or the fact that others may have a different viewpoint than their own. There are a few exceptions: Bayesian networks or programs compute with probability distributions and therefore keep track of how likely they are to be correct. Even when the primary computation is performed by a classical CNN, and is therefore opaque to introspection, it is possible to train a second, hierarchically higher neural network to predict the first one’s performance.
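The idea of a second system estimating the first one's reliability can be sketched numerically (my own toy construction, not the authors' architecture: a noisy first-order classifier, and a "C2" lookup that learns how accuracy varies with evidence strength):

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order "C0" system: a fixed, noisy classifier for 1-D stimuli.
def first_order(x):
    return (x + rng.normal(0.0, 0.5, size=x.shape)) > 0  # noisy sign judgment

# Run the first-order system and record when it was actually correct.
x = rng.normal(0.0, 1.0, size=5000)
decisions = first_order(x)
correct = decisions == (x > 0)

# Second-order "C2" system: estimates the first system's accuracy from the
# evidence strength |x| (easy, high-evidence items earn high confidence).
bins = np.digitize(np.abs(x), [0.25, 0.5, 1.0, 2.0])
confidence_table = np.array([correct[bins == b].mean() for b in range(5)])

print(confidence_table)  # estimated accuracy rises with evidence strength
```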
Concluding remarks
Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.
We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?
Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. For example, in humans, damage to the primary visual cortex may lead to a neurological condition called “blindsight,” in which the patients report being blind in the affected visual field. Remarkably, those patients can localize visual stimuli in their blind field but cannot report them (C1), nor can they effectively assess their likelihood of success (C2)—they believe that they are merely “guessing.” In this example, at least, subjective experience appears to cohere with possession of C1 and C2. Although centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions, the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.



Friday, July 13, 2018

Playing with proteins in virtual reality.

Much of my mental effort while I was doing laboratory research on the mechanisms of visual transduction (changing light into a nerve signal in our retinal rod and cone photoreceptor cells) was devoted to trying to visualize how proteins might interact with each other. I spent many hours using molecular model kits of color-coded plastic atoms one could plug together with flexible joints, like the Tinkertoys of my childhood. If only I had had the system now described by O'Connor et al.! Have a look at the video below showing the manipulation of molecular dynamics in a VR environment, and here is their abstract:
We describe a framework for interactive molecular dynamics in a multiuser virtual reality (VR) environment, combining rigorous cloud-mounted atomistic physics simulations with commodity VR hardware, which we have made accessible to readers (see isci.itch.io/nsb-imd). It allows users to visualize and sample, with atomic-level precision, the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment. A series of controlled studies, in which participants were tasked with a range of molecular manipulation goals (threading methane through a nanotube, changing helical screw sense, and tying a protein knot), quantitatively demonstrate that users within the interactive VR environment can complete sophisticated molecular modeling tasks more quickly than they can using conventional interfaces, especially for molecular pathways and structural transitions whose conformational choreographies are intrinsically three-dimensional. This framework should accelerate progress in nanoscale molecular engineering areas including conformational mapping, drug development, synthetic biology, and catalyst design. More broadly, our findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education.

Sampling molecular conformational dynamics in virtual reality from david glowacki on Vimeo.




Thursday, July 12, 2018

Increasing despair among poor Americans.

A survey by Goldman et al. of more than 4,600 American adults conducted in 1995-1996 and in 2011-2014 suggests that among individuals of low socioeconomic status, negative affect increased significantly between the two survey waves, and life satisfaction and psychological well-being decreased:

Significance
In the past few years, references to the opioid epidemic, drug poisonings, and associated feelings of despair among Americans, primarily working-class whites, have flooded the media, and related patterns of mortality have been of increasing interest to social scientists. Yet, despite recurring references to distress or despair in journalistic accounts and academic studies, there has been little analysis of whether psychological health among American adults has worsened over the past two decades. Here, we use data from national samples of adults in the mid-1990s and early 2010s to demonstrate increasing distress and declining well-being that was concentrated among low-socioeconomic-status individuals but spanned the age range from young to older adults.
Abstract
Although there is little dispute about the impact of the US opioid epidemic on recent mortality, there is less consensus about whether trends reflect increasing despair among American adults. The issue is complicated by the absence of established scales or definitions of despair as well as a paucity of studies examining changes in psychological health, especially well-being, since the 1990s. We contribute evidence using two cross-sectional waves of the Midlife in the United States (MIDUS) study to assess changes in measures of psychological distress and well-being. These measures capture negative emotions such as sadness, hopelessness, and worthlessness, and positive emotions such as happiness, fulfillment, and life satisfaction. Most of the measures reveal increasing distress and decreasing well-being across the age span for those of low relative socioeconomic position, in contrast to little decline or modest improvement for persons of high relative position.

Wednesday, July 11, 2018

A fundamental advance in brain imaging techniques.

I want to pass on the abstract, along with a bit of text and two figures, from an article by Coalson, Van Essen, and Glasser that argues for a fundamental change in how functional cortical areas of the brain are recorded and reported. They demonstrate that surface-based parcellation is threefold more accurate than traditional volume-based parcellations:

Significance
Most human brain-imaging studies have traditionally used low-resolution images, inaccurate methods of cross-subject alignment, and extensive blurring. Recently, a high-resolution approach with more accurate alignment and minimized blurring was used by the Human Connectome Project to generate a multimodal map of human cortical areas in hundreds of individuals. Starting from these data, we systematically compared these two approaches, showing that the traditional approach is nearly three times worse than the Human Connectome Project’s improved approach in two objective measures of spatial localization of cortical areas. Furthermore, we demonstrate considerable challenges in comparing data across the two approaches and, as a result, argue that there is an urgent need for the field to adopt more accurate methods of data acquisition and analysis.
Abstract
Localizing human brain functions is a long-standing goal in systems neuroscience. Toward this goal, neuroimaging studies have traditionally used volume-based smoothing, registered data to volume-based standard spaces, and reported results relative to volume-based parcellations. A novel 360-area surface-based cortical parcellation was recently generated using multimodal data from the Human Connectome Project, and a volume-based version of this parcellation has frequently been requested for use with traditional volume-based analyses. However, given the major methodological differences between traditional volumetric and Human Connectome Project-style processing, the utility and interpretability of such an altered parcellation must first be established. By starting from automatically generated individual-subject parcellations and processing them with different methodological approaches, we show that traditional processing steps, especially volume-based smoothing and registration, substantially degrade cortical area localization compared with surface-based approaches. We also show that surface-based registration using features closely tied to cortical areas, rather than to folding patterns alone, improves the alignment of areas, and that the benefits of high-resolution acquisitions are largely unexploited by traditional volume-based methods. Quantitatively, we show that the most common version of the traditional approach has spatial localization that is only 35% as good as the best surface-based method as assessed using two objective measures (peak areal probabilities and “captured area fraction” for maximum probability maps). Finally, we demonstrate that substantial challenges exist when attempting to accurately represent volume-based group analysis results on the surface, which has important implications for the interpretability of studies, both past and future, that use these volume-based methods.
Some context from their introduction:
The recently reported HCP-MMP1.0 multimodal cortical parcellation (https://balsa.wustl.edu/study/RVVG, see the graphic below) contains 180 distinct areas per hemisphere and was generated from hundreds of healthy young adult subjects from the Human Connectome Project (HCP) using data precisely aligned with the HCP’s surface-based neuroimaging analysis approach. Each cortical area is defined by multiple features, such as those representing architecture, function, connectivity, or topographic maps of visual space. This multimodal parcellation has generated widespread interest, with many investigators asking how to relate its cortical areas to data processed using the traditional neuroimaging approach. Because volume-registered analysis of the cortex in MNI space is still widely used, this has often translated into concrete requests, such as: “Please provide the HCP-MMP1.0 parcellation in standard MNI volume space.” Here, we investigate quantitatively the drawbacks of traditional volume-based analyses and document that much of the HCP-MMP1.0 parcellation cannot be faithfully represented when mapped to a traditional volume-based atlas.
Here is a graphic from the parcellation analysis


And here is Figure 1 and its explanation from the Coalson et al. paper.


The figure shows probabilistic maps of five exemplar areas spanning a range of peak probabilities. Each area is shown as localized by areal-feature–based surface registration (Lower, Center), and as localized by volume-based methods. One area (3b in Fig. 1) has a peak probability of 0.92 in the volume (orange, red), whereas the other four have volumetric peak probabilities in the range of 0.35–0.7 (blue, yellow). Notably, the peak probabilities of these five areas are all higher on the surface (Figure, Lower, Center) (range 0.90–1) than in the volume, indicating that MSMAll nonlinear surface-based registration provides substantially better functional alignment across subjects than does nonlinear volume-based registration.

Tuesday, July 10, 2018

Mindfulness training increases strength of right insula connections.

Sharp et al.(open source) suggest that:
The endeavor to understand how mindfulness works will likely be advanced by using recently developed tools and theory within the nascent field of brain connectomics. The connectomic framework conceives of the brain’s functional and structural architecture as a complex, dynamic network. This network view of brain function partly arose from the lack of support for highly selective, modular regions instantiating specialized functions. That is, large meta-analyses consisting mostly of univariate fMRI analyses disconfirm that, for example, the amygdala is exclusively selective for fear processing. Indeed, more fruitful mechanistic knowledge of how neural systems function may emerge from delineating how different regions communicate functionally across a range of environments, and by identifying the underlying structural connections that constrain such functional dynamics.
Their abstract, and a summary figure:
Although mindfulness meditation is known to provide a wealth of psychological benefits, the neural mechanisms involved in these effects remain to be well characterized. A central question is whether the observed benefits of mindfulness training derive from specific changes in the structural brain connectome that do not result from alternative forms of experimental intervention. Measures of whole-brain and node-level structural connectome changes induced by mindfulness training were compared with those induced by cognitive and physical fitness training within a large, multi-group intervention protocol (n = 86). Whole-brain analyses examined global graph-theoretical changes in structural network topology. A hypothesis-driven approach was taken to investigate connectivity changes within the insula, which was predicted here to mediate interoceptive awareness skills that have been shown to improve through mindfulness training. No global changes were observed in whole-brain network topology. However, node-level results confirmed a priori hypotheses, demonstrating significant increases in mean connection strength in right insula across all of its connections. Present findings suggest that mindfulness strengthens interoception, operationalized here as the mean insula connection strength within the overall connectome. This finding further elucidates the neural mechanisms of mindfulness meditation and motivates new perspectives about the unique benefits of mindfulness training compared to contemporary cognitive and physical fitness interventions.
Legend - Anatomical representation of tractography pathways between right insula and highly connected regions. Connections displayed (only corticocortical, here) comprised the top 80% connection strengths across all insula pathways. (A) Displays pre-training connections in right insula, which showed the greatest structural reorganization across mindfulness training. (B) Represents the same image as in (A) except at post-training.
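The node-level measure in the study, mean connection strength, is simple to compute from a connectome's weighted adjacency matrix. Here is a sketch with a toy 5-region matrix (the weights and the insula's index are invented for illustration):

```python
import numpy as np

# Toy symmetric structural connectome over 5 regions; real connectomes are
# much larger, but the node-level measure is the same.
conn = np.array([
    [0.0, 0.8, 0.3, 0.5, 0.2],
    [0.8, 0.0, 0.4, 0.1, 0.6],
    [0.3, 0.4, 0.0, 0.7, 0.2],
    [0.5, 0.1, 0.7, 0.0, 0.3],
    [0.2, 0.6, 0.2, 0.3, 0.0],
])

def mean_connection_strength(adj, node):
    """Average weight of all edges incident to `node`, excluding the self-loop."""
    row = np.delete(adj[node], node)
    return row.mean()

insula = 1  # hypothetical index standing in for right insula
print(mean_connection_strength(conn, insula))  # (0.8 + 0.4 + 0.1 + 0.6) / 4 = 0.475
```

An increase in this quantity from pre- to post-training scans is what the authors report for the right insula.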

Monday, July 09, 2018

Mortality rates level off at extreme age

Interesting work from Barbi et al. showing that human death rates increase exponentially up to about age 80, then decelerate, and plateau after age 105. At that point, the odds of someone dying from one birthday to the next are roughly 50:50. This implies that there might be no natural limit to how long humans can live, contrary to the view of most demographers and biologists:
Theories about biological limits to life span and evolutionary shaping of human longevity depend on facts about mortality at extreme ages, but these facts have remained a matter of debate. Do hazard curves typically level out into high plateaus eventually, as seen in other species, or do exponential increases persist? In this study, we estimated hazard rates from data on all inhabitants of Italy aged 105 and older between 2009 and 2015 (born 1896–1910), a total of 3836 documented cases. We observed level hazard curves, which were essentially constant beyond age 105. Our estimates are free from artifacts of aggregation that limited earlier studies and provide the best evidence to date for the existence of extreme-age mortality plateaus in humans.
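A constant hazard has a simple consequence worth making explicit: survival beyond the plateau decays geometrically, so extreme ages become ever rarer without any hard cutoff. A quick numerical sketch (taking the reported roughly 50:50 annual odds as the hazard):

```python
# Under a constant hazard h per year, survival for t further years is
# S(t) = (1 - h) ** t: no hazard growth means no hard "wall" on lifespan,
# only vanishingly small probabilities.
h = 0.5  # approximate annual death probability on the plateau after age 105

def survival(years, hazard=h):
    return (1 - hazard) ** years

print(survival(1))   # 0.5 — 50:50 odds of reaching the next birthday
print(survival(10))  # about 0.001 — rare, but never strictly impossible
```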

Friday, July 06, 2018

Brain imaging predicts who will be a good musical performer.

Fascinating observations from Zatorre's group:

Significance
In sophisticated auditory–motor learning such as musical instrument learning, little is understood about how brain plasticity develops over time and how the related individual variability is reflected in the neural architecture. In a longitudinal fMRI training study on cello learning, we reveal the integrative function of the dorsal cortical stream in auditory–motor information processing, which comes online quickly during learning. Additionally, our data show that better performers optimize the recruitment of regions involved in auditory encoding and motor control and reveal the critical role of the pre-supplementary motor area and its interaction with auditory areas as predictors of musical proficiency. The present study provides unprecedented understanding of the neural substrates of individual learning variability and therefore has implications for pedagogy and rehabilitation.
Abstract
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio–motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio–motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory–motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio–motor learning.

Thursday, July 05, 2018

Why are religious people trusted more?

A prevailing view is that religious behavior facilitates trust, primarily toward coreligionists, and particularly when it is diagnostic of belief in moralizing deities. Moon et al. suggest a further reason that religious people are viewed as more trustworthy than non-religious people: they follow 'slow life-history' strategies that tend to be sexually restricted, invested in family, nonimpulsive, and nonaggressive, all traits associated with cooperativeness and prosociality. They find that direct information about life-history reproductive strategy (i.e., a subject's “dating preferences”) tends to override the effects of religious information. Their abstract:
Religious people are more trusted than nonreligious people. Although most theorists attribute these perceptions to the beliefs of religious targets, religious individuals also differ in behavioral ways that might cue trust. We examined whether perceivers might trust religious targets more because they heuristically associate religion with slow life-history strategies. In three experiments, we found that religious targets are viewed as slow life-history strategists and that these findings are not the result of a universally positive halo effect; that the effect of target religion on trust is significantly mediated by the target’s life-history traits (i.e., perceived reproductive strategy); and that when perceivers have direct information about a target’s reproductive strategy, their ratings of trust are driven primarily by his or her reproductive strategy, rather than religion. These effects operate over and above targets’ belief in moralizing gods and offer a novel theoretical perspective on religion and trust.

Wednesday, July 04, 2018

Seven creepy Facebook patents

I'll follow yesterday's post with yet another post on creepy high-tech patents, this time from Facebook, showing their ongoing intention to invade our privacy as much as possible. From the article by Chinoy:

Reading your relationships
One patent application discusses predicting whether you’re in a romantic relationship using information such as how many times you visit another user’s page, the number of people in your profile picture and the percentage of your friends of a different gender.
Classifying your personality
Another proposes using your posts and messages to infer personality traits. It describes judging your degree of extroversion, openness or emotional stability, then using those characteristics to select which news stories or ads to display.
Predicting your future
This patent application describes using your posts and messages, in addition to your credit card transactions and location, to predict when a major life event, such as a birth, death or graduation, is likely to occur.
Identifying your camera
Another considers analyzing pictures to create a unique camera “signature” using faulty pixels or lens scratches. That signature could be used to figure out that you know someone who uploads pictures taken on your device, even if you weren’t previously connected. Or it might be used to guess the “affinity” between you and a friend based on how frequently you use the same camera.
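The patent's actual method is not public, but the basic idea can be illustrated with a toy sketch (all names and data here are invented): treat each photo's detected defects — hot pixels, lens scratches — as a set of coordinates, and score two photos by how much their defect sets overlap.

```python
# Hypothetical sketch of a camera "signature" comparison: each photo's
# detected defects (hot pixels, scratches) become a set of (x, y)
# coordinates, and two photos are scored by the Jaccard overlap of
# those sets. High overlap suggests the same physical camera.
def jaccard(sig_a, sig_b):
    """Similarity between two defect signatures (sets of (x, y) pixels)."""
    if not sig_a and not sig_b:
        return 0.0
    return len(sig_a & sig_b) / len(sig_a | sig_b)

photo1 = {(10, 42), (317, 5), (88, 201)}   # defects found in upload 1
photo2 = {(10, 42), (317, 5), (400, 12)}   # defects found in upload 2

print(jaccard(photo1, photo2))  # 0.5 -> plausibly the same camera
```

Two uploads sharing half their defect pixels would be a strong hint that they came from the same device, even if the uploaders have no visible connection on the platform.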
Listening to your environment
This patent application explores using your phone microphone to identify the television shows you watched and whether ads were muted. It also proposes using the electrical interference pattern created by your television power cable to guess which show is playing.
Tracking your routine
Another patent application discusses tracking your weekly routine and sending notifications to other users of deviations from the routine. In addition, it describes using your phone’s location in the middle of the night to establish where you live.
Inferring your habits
This patent proposes correlating the location of your phone to locations of your friends’ phones to deduce whom you socialize with most often. It also proposes monitoring when your phone is stationary to track how many hours you sleep.
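The co-location idea is simple enough to sketch (again purely hypothetical; the names, places, and time slots below are invented): count how often another phone reports the same place at the same time as yours, and rank friends by that count.

```python
from collections import Counter

# Hypothetical sketch of co-location inference: if two phones repeatedly
# report the same (time slot, place) pair, their owners probably
# socialize together. All data here is invented for illustration.
my_pings = {("mon 19:00", "cafe"), ("tue 21:00", "gym"), ("sat 20:00", "bar")}
friend_pings = {
    "alice": {("mon 19:00", "cafe"), ("sat 20:00", "bar")},
    "bob":   {("tue 21:00", "gym")},
}

# Count shared (time slot, place) pairs per friend.
overlap = Counter({name: len(my_pings & pings) for name, pings in friend_pings.items()})
print(overlap.most_common(1))  # [('alice', 2)] -> most frequent companion
```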

Tuesday, July 03, 2018

A patent for emotional robots?

I recently received an interesting email from Deepak Gupta, who is with a patent research services company and who, having seen my post on robots and emotion, pointed me to a description of a patent application filed by Samsung. The idea seems so straightforward that I would have thought such a patent would have been proposed years ago. It is an attempt to get past the "uncanny valley" issue, the feelings of eeriness and revulsion people can have when observing or interacting with robots. (It still seems a bit scary to me.)
...an electronic robot includes a motor mounted inside its head. The robot's head consists of a display unit that shows emotional expressions and a motor control unit that can rotate the head clockwise or counter-clockwise on its axis.
The electronic robot decides which emotional expression to display on its panel by sensing information from its various sensors, such as a camera sensor, a pressure sensor, a geomagnetic sensor, and a microphone, to sense the motion of a user. Accordingly, the electronic robot tracks the major feature points of the face, such as the eyes, nose, mouth, and eyebrows, from user images captured by the camera sensor and recognizes the user's emotional information based on basic facial expressions conveying happiness, surprise, anger, sadness, and sorrow. Once the information is received from the sensors, the data is analyzed by the processor, which sends an appropriate voltage signal to the display unit and the motor control unit in order to express the relevant emotion. The emotional states expressed by the electronic robot include anger, disgust, sadness, interest, happiness, impassiveness, surprise, agreement (i.e., “yes”), and rejection (i.e., “no”).
To express emotional states, the electronic robot stores motion information of the head predefined for each emotional state in a storage unit. The head motion information includes head motion type, angle, speed (or rotational force), and direction.
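The stored motion table described above can be pictured as a simple lookup structure. The sketch below is hypothetical (the patent does not publish code, and the parameter values here are invented), but it shows the shape of the idea: each emotional state maps to predefined head-motion parameters (type, angle, speed, direction).

```python
# Hypothetical sketch of the patent's "storage unit": each emotional
# state maps to predefined head-motion parameters. Values are invented
# for illustration only.
HEAD_MOTIONS = {
    "agreement": {"type": "nod",   "angle": 15, "speed": 2.0, "direction": "vertical"},
    "rejection": {"type": "shake", "angle": 20, "speed": 2.5, "direction": "horizontal"},
    "surprise":  {"type": "tilt",  "angle": 10, "speed": 3.0, "direction": "clockwise"},
}

def express(state):
    """Look up the stored motion for a recognized emotional state."""
    motion = HEAD_MOTIONS.get(state)
    if motion is None:
        return "impassive: no head motion stored"
    return f"{motion['type']} {motion['angle']} deg, speed {motion['speed']}, {motion['direction']}"

print(express("agreement"))  # nod 15 deg, speed 2.0, vertical
```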
The technology disclosed in the patent document allows robots to express emotions, so they can communicate and interact with humans more effectively. Such robots could be used in applications that require interaction with humans, for instance communicating with patients in a hospital in the absence of actual staff, or even interacting with pets such as dogs or cats that are left alone while their owners are at work.

Monday, July 02, 2018

An optimistic outlook creates a rosy past.

From Devitt and Schacter in the Harvard Psychology department, a study recruiting the usual gaggle of psychology undergraduate students as subjects:
People frequently engage in future thinking in everyday life, but it is unknown how simulating an event in advance changes how that event is remembered once it takes place. To initiate study of this important topic, we conducted two experiments in which participants simulated emotional events before learning the hypothetical outcome of each event via narratives. Memory was assessed for emotional details contained in those narratives. Positive simulation resulted in a liberal response bias for positive information and a conservative bias for negative information. Events preceded by positive simulation were considered more favorably in retrospect. In contrast, negative simulation had no impact on subsequent memory. Results were similar across an immediate and delayed memory test and for past and future simulation. These results provide novel insights into the cognitive consequences of episodic future simulation and build on the optimism-bias literature by showing that adopting a favorable outlook results in a rosy memory.