Thursday, October 05, 2017

Curtailing proactive policing can reduce major crime.

Weisburd points to counterintuitive work by Sullivan and O’Keeffe that "took advantage of a natural experiment in New York City that resulted from the strangling death of Eric Garner in Staten Island. Subsequent political events led to the New York City Police Department (NYPD) engaging in a ‘slowdown’ characterized by dramatic reductions in arrests and summonses. One would have expected crime to go up in this period if this type of proactivity was effective. Instead, analyzing several years of data obtained from the NYPD, they find that civilian complaints of major crimes decreased. Accordingly, they conclude that prior proactivity did not reduce crime, but led to increases in crime." Here is the Sullivan and O'Keeffe abstract:
Governments employ police to prevent criminal acts. But it remains in dispute whether high rates of police stops, criminal summonses and aggressive low-level arrests reduce serious crime. Police officers target their efforts at areas where crime is anticipated and/or where they expect enforcement will be most effective. Simultaneously, citizens decide to comply with the law or commit crime partly on the basis of police deployment and enforcement strategies. In other words, policing and crime are endogenous to unobservable strategic interaction, which frustrates causal analysis. Here, we resolve these challenges and present evidence that proactive policing—which involves systematic and aggressive enforcement of low-level violations—is positively related to reports of major crime. We examine a political shock that caused the New York Police Department (NYPD) to effectively halt proactive policing in late 2014 and early 2015. Analysing several years of unique data obtained from the NYPD, we find that civilian complaints of major crimes (such as burglary, felony assault and grand larceny) decreased during and shortly after sharp reductions in proactive policing. The results challenge prevailing scholarship as well as conventional wisdom on authority and legal compliance, as they imply that aggressively enforcing minor legal statutes incites more severe criminal acts.

Wednesday, October 04, 2017

Brain circuits that modulate sociability.

The social bonding neuropeptide oxytocin can be traced over 500 million years, with analogous peptides found in birds, reptiles, fish, amphibians, and some invertebrates. Hung et al. have found that release of oxytocin in the ventral tegmental area of the brain increases prosocial behaviors in mice. Optogenetic manipulation of oxytocin release influences sociability in a context-dependent manner. Oxytocin increases activity in dopamine cells that project to the nucleus accumbens, another key node of reward circuitry in the brain. Here is their abstract, followed by a nice graphic of the relevant systems in the human brain.
The reward generated by social interactions is critical for promoting prosocial behaviors. Here we present evidence that oxytocin (OXT) release in the ventral tegmental area (VTA), a key node of the brain’s reward circuitry, is necessary to elicit social reward. During social interactions, activity in paraventricular nucleus (PVN) OXT neurons increased. Direct activation of these neurons in the PVN or their terminals in the VTA enhanced prosocial behaviors. Conversely, inhibition of PVN OXT axon terminals in the VTA decreased social interactions. OXT increased excitatory drive onto reward-specific VTA dopamine (DA) neurons. These results demonstrate that OXT promotes prosocial behavior through direct effects on VTA DA neurons, thus providing mechanistic insight into how social interactions can generate rewarding experiences.


Tuesday, October 03, 2017

You want younger or older?

Interesting piece from Mona Chalabi:

(According to the Census Bureau, the average age difference between men and their wives is 2.3 years.)

Monday, October 02, 2017

This year's Ig Nobel prizes.

If you want a few chuckles, have a look at this link. The prize-winning work this year shows that cats can be simultaneously solid and liquid because of their ability to adopt the shape of their container.


Friday, September 29, 2017

Does it matter whether we believe in free will or not?

From Genschow et al.:

Significance
The question whether free will exists or not has been a matter of debate in philosophy for centuries. Recently, researchers claimed that free will is nothing more than a myth. Although the validity of this claim is debatable, it attracted much attention in the general public. This raises the crucial question whether it matters if people believe in free will or not. In six studies, we tested whether believing in free will is related to the correspondence bias—that is, people’s automatic tendency to overestimate the influence of internal as compared to external factors when interpreting others’ behavior. Overall, we demonstrate that believing in free will increases the correspondence bias and predicts prescribed punishment and reward behavior.
Abstract
Free will is a cornerstone of our society, and psychological research demonstrates that questioning its existence impacts social behavior. In six studies, we tested whether believing in free will is related to the correspondence bias, which reflects people’s automatic tendency to overestimate the influence of internal as compared to external factors when interpreting others’ behavior. All studies demonstrate a positive relationship between the strength of the belief in free will and the correspondence bias. Moreover, in two experimental studies, we showed that weakening participants’ belief in free will leads to a reduction of the correspondence bias. Finally, the last study demonstrates that believing in free will predicts prescribed punishment and reward behavior, and that this relation is mediated by the correspondence bias. Overall, these studies show that believing in free will impacts fundamental social-cognitive processes that are involved in the understanding of others’ behavior.
Also, you should have a look at Frith's essay on how our illusory sense of agency has a deeply important social purpose. Belief in free will and agency is important if a distinction critical to all legal systems is to be made: between intentional and accidental wrongs. Further,
Responsibility... is the real currency of conscious experience. In turn, it is also the bedrock of culture. Humans are social animals, but we’d be unable to cooperate or get along in communities if we couldn’t agree on the kinds of creatures we are and the sort of world we inhabit. It’s only by reflecting, sharing and accounting for our experiences that we can find such common ground. To date, the scientific method is the most advanced cognitive technology we’ve developed for honing the accuracy of our consensus – a method involving continuous experimentation, discussion and replication.

Thursday, September 28, 2017

Greater internet use does not correlate with faster growth of political polarization.

Continuing a topic from MindBlog's April 20 post...Most writing on the increase in political polarization over the past decades argues that it is facilitated by more extensive use of the internet, which makes it easier for like-minded people to gather in isolated 'echo chambers.' Boxell et al. find, to the contrary, that polarization has increased the most among the demographic groups least likely to use the Internet and social media.
We combine eight previously proposed measures to construct an index of political polarization among US adults. We find that polarization has increased the most among the demographic groups least likely to use the Internet and social media. Our overall index and all but one of the individual measures show greater increases for those older than 65 than for those aged 18–39. A linear model estimated at the age-group level implies that the Internet explains a small share of the recent growth in polarization.

Wednesday, September 27, 2017

“No problem” vs “you’re welcome”

My daughter pointed me to this piece by Gretchen McCulloch, which gives me some insight into what I have considered the annoying habit of younger people to always say 'no problem' instead of 'you're welcome.' A clip:
Speaking of linguistics, there’s one particular linguistic tic that I think clearly separates Baby Boomers from Millennials: how we reply when someone says “thank you.”
You almost never hear a Millennial say “you’re welcome.” At least not when someone thanks them. It just isn’t done. Not because Millennials are ingrates lacking all manners, but because the polite response is “No problem.” Millennials only use “you’re welcome” sarcastically when they haven’t been thanked or when something has been taken from/done to them without their consent. It’s a phrase that’s used to point out someone else’s rudeness. A Millennial would typically be fairly uncomfortable saying “you’re welcome” as an acknowledgement of genuine thanks because the phrase is only ever used disingenuously.
Baby Boomers, however, get really miffed if someone says “no problem” in response to being thanked. From their perspective, saying “no problem” means that whatever they’re thanking someone for was in fact a problem, but the other person did it anyway as a personal favor. To them “You’re welcome” is the standard polite response.
“You’re welcome” means to Millennials what “no problem” means to Baby Boomers, and vice versa. The two phrases have converse meanings to the different age sets. I’m not sure exactly where this line gets drawn, but it’s somewhere in the middle of Gen X. This is a real pain in the ass if you work in customer service because everyone thinks that everyone else is being rude when they’re really being polite in their own language.

Tuesday, September 26, 2017

The science of emotion - now at least 27 categories of emotional states.

So...I guess we knew emotions are complicated. There has been intense debate on how to describe them in semantic and geometric dimensions such as valence and arousal. Cowen and Keltner use a natural history approach to gather and analyze self-descriptions of emotional states elicited by 2,185 emotionally evocative short videos (check out the geometrical space of their results in the link below):

Significance
Claims about how reported emotional experiences are geometrically organized within a semantic space have shaped the study of emotion. Using statistical methods to analyze reports of emotional states elicited by 2,185 emotionally evocative short videos with richly varying situational content, we uncovered 27 varieties of reported emotional experience. Reported experience is better captured by categories such as “amusement” than by ratings of widely measured affective dimensions such as valence and arousal. Although categories are found to organize dimensional appraisals in a coherent and powerful fashion, many categories are linked by smooth gradients, contrary to discrete theories. Our results comprise an approximation of a geometric structure of reported emotional experience.
Abstract
Emotions are centered in subjective experiences that people represent, in part, with hundreds, if not thousands, of semantic terms. Claims about the distribution of reported emotional states and the boundaries between emotion categories—that is, the geometric organization of the semantic space of emotion—have sparked intense debate. Here we introduce a conceptual framework to analyze reported emotional states elicited by 2,185 short videos, examining the richest array of reported emotional experiences studied to date and the extent to which reported experiences of emotion are structured by discrete and dimensional geometries. Across self-report methods, we find that the videos reliably elicit 27 distinct varieties of reported emotional experience. Further analyses revealed that categorical labels such as amusement better capture reports of subjective experience than commonly measured affective dimensions (e.g., valence and arousal). Although reported emotional experiences are represented within a semantic space best captured by categorical labels, the boundaries between categories of emotion are fuzzy rather than discrete. By analyzing the distribution of reported emotional states we uncover gradients of emotion—from anxiety to fear to horror to disgust, calmness to aesthetic appreciation to awe, and others—that correspond to smooth variation in affective dimensions such as valence and dominance. Reported emotional states occupy a complex, high-dimensional categorical space. In addition, our library of videos and an interactive map of the emotional states they elicit (https://s3-us-west-1.amazonaws.com/emogifs/map.html) are made available to advance the science of emotion.

Monday, September 25, 2017

Will you be above or below “the API” in the emerging economy?

An application programming interface (API) is a set of subroutine definitions, protocols, and tools that are building blocks for application software...software of the sort that Uber uses to connect taxi drivers to customers without other human intervention. From Anthony Wing Kosner:
Customers use an app interface to enter their data into the system. The app sends a request that includes account data, pickup and dropoff locations via API to Uber's servers, which poll available drivers nearby and dispatch one to the customer to fulfill the request. The only two humans involved are the customer and the driver. Danny DeVito has been furloughed!
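The request-and-dispatch flow Kosner describes can be sketched in a few lines of Python. The field names and the distance-based dispatch rule below are invented for illustration; Uber's real API and matching logic are far more involved:

```python
import json

# A hypothetical ride request of the kind the app might send via the API.
ride_request = {
    "account_id": "rider-123",                     # invented field names
    "pickup":  {"lat": 40.74, "lng": -73.99},
    "dropoff": {"lat": 40.65, "lng": -73.78},
}
payload = json.dumps(ride_request)  # serialized for the HTTP request

def dispatch(drivers, pickup):
    """Toy server-side step: pick the available driver nearest the pickup."""
    return min(
        drivers,
        key=lambda d: (d["lat"] - pickup["lat"]) ** 2
                    + (d["lng"] - pickup["lng"]) ** 2,
    )

nearby = [{"id": "d1", "lat": 40.73, "lng": -73.98},
          {"id": "d2", "lat": 40.80, "lng": -73.90}]
print(dispatch(nearby, ride_request["pickup"])["id"])  # d1
```

No human sits between the serialized request and the dispatch decision, which is the point of the "software layer" discussed below.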
From Peter Reinhardt:
Drivers are opting into a dichotomous workforce: the worker bees below the software layer have no opportunity for on-the-job training that advances their career, and compassionate social connections don’t pierce the software layer either. The skills they develop in driving are not an investment in their future. Once you introduce the software layer between ‘management’ (Uber’s full-time employees building the app and computer systems) and the human workers below the software layer (Uber’s drivers, Instacart’s delivery people), there’s no obvious path upwards. In fact, there’s a massive gap and no systems in place to bridge it.
Kosner notes some of the longer term implication of such software:
Uber drivers, Amazon Mechanical Turk workers, 99design contestants, TaskRabbit taskers and HomeJoy cleaners are all targets for further automation...Yes, self-driving cars are on the way, and it is likely that automated taxi fleets will be the first commercial application of this technology... Uberization of work may soon be coming to your chosen profession, affecting not just cab drivers and house cleaners, but extending to lawyers, doctors and even (some day) venture capitalists.

Friday, September 22, 2017

Color naming across languages reflects color use

Gibson et al. do a study showing that warm colors are communicated more efficiently than cool colors, and that this cross-linguistic pattern reflects the color statistics of the world:

Significance
The number of color terms varies drastically across languages. Yet despite these differences, certain terms (e.g., red) are prevalent, which has been attributed to perceptual salience. This work provides evidence for an alternative hypothesis: The use of color terms depends on communicative needs. Across languages, from the hunter-gatherer Tsimane' people of the Amazon to students in Boston, warm colors are communicated more efficiently than cool colors. This cross-linguistic pattern reflects the color statistics of the world: Objects (what we talk about) are typically warm-colored, and backgrounds are cool-colored. Communicative needs also explain why the number of color terms varies across languages: Cultures vary in how useful color is. Industrialization, which creates objects distinguishable solely based on color, increases color usefulness.
Abstract
What determines how languages categorize colors? We analyzed results of the World Color Survey (WCS) of 110 languages to show that despite gross differences across languages, communication of chromatic chips is always better for warm colors (yellows/reds) than cool colors (blues/greens). We present an analysis of color statistics in a large databank of natural images curated by human observers for salient objects and show that objects tend to have warm rather than cool colors. These results suggest that the cross-linguistic similarity in color-naming efficiency reflects colors of universal usefulness and provide an account of a principle (color use) that governs how color categories come about. We show that potential methodological issues with the WCS do not corrupt information-theoretic analyses, by collecting original data using two extreme versions of the color-naming task, in three groups: the Tsimane', a remote Amazonian hunter-gatherer isolate; Bolivian-Spanish speakers; and English speakers. These data also enabled us to test another prediction of the color-usefulness hypothesis: that differences in color categorization between languages are caused by differences in overall usefulness of color to a culture. In support, we found that color naming among Tsimane' had relatively low communicative efficiency, and the Tsimane' were less likely to use color terms when describing familiar objects. Color-naming among Tsimane' was boosted when naming artificially colored objects compared with natural objects, suggesting that industrialization promotes color usefulness.
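The "communication efficiency" behind these information-theoretic analyses can be illustrated with a toy calculation. The distributions below are invented, not WCS data: a color term serves a listener well when it concentrates the guess on few chips, i.e., when the average surprisal −log2 p(chip | term) is low.

```python
import math

# Invented listener distributions p(chip | term) for illustration only.
p_chip_given_term = {
    "red":  {"r1": 0.7, "r2": 0.3},                        # concentrated guess
    "blue": {"b1": 0.3, "b2": 0.3, "b3": 0.2, "b4": 0.2},  # diffuse guess
}

def avg_surprisal(dist):
    # Expected surprisal (entropy) of the listener's guess, in bits.
    return sum(p * -math.log2(p) for p in dist.values())

warm = avg_surprisal(p_chip_given_term["red"])   # ~0.88 bits
cool = avg_surprisal(p_chip_given_term["blue"])  # ~1.97 bits
print(warm < cool)  # True: the warm term communicates more efficiently here
```

In the toy numbers the warm term pins down the chip in fewer bits, mirroring the paper's finding that warm colors are communicated more efficiently across languages.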

Thursday, September 21, 2017

Separate prefrontal areas code desirability versus availability.

When we make a decision, we calculate its “expected value” by multiplying the value of something (how much we want or need it) by the probability that we might be able to obtain it, a concept first introduced by the 17th-century mathematician Blaise Pascal. Rudebeck et al. show that this value determination involves two separate prefrontal areas:
Advantageous foraging choices benefit from an estimation of two aspects of a resource’s value: its current desirability and availability. Both orbitofrontal and ventrolateral prefrontal areas contribute to updating these valuations, but their precise roles remain unclear. To explore their specializations, we trained macaque monkeys on two tasks: one required updating representations of a predicted outcome’s desirability, as adjusted by selective satiation, and the other required updating representations of an outcome’s availability, as indexed by its probability. We evaluated performance on both tasks in three groups of monkeys: unoperated controls and those with selective, fiber-sparing lesions of either the OFC or VLPFC. Representations that depend on the VLPFC but not the OFC play a necessary role in choices based on outcome availability; in contrast, representations that depend on the OFC but not the VLPFC play a necessary role in choices based on outcome desirability.
Both OFC and VLPFC send connections to the ventromedial prefrontal cortex (VMPFC), and functional magnetic resonance imaging suggests that the VMPFC may be where choices ultimately get made.
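Pascal's expected value is just desirability times availability. A toy sketch (the numbers are invented) shows how two options can trade off the two quantities the OFC and VLPFC are argued to track separately:

```python
# Toy expected-value calculation: desirability (OFC-dependent) times
# availability (VLPFC-dependent). Values are invented for illustration.
def expected_value(desirability, availability):
    return desirability * availability

juice = expected_value(desirability=8.0, availability=0.25)  # prized but rarely obtainable
water = expected_value(desirability=3.0, availability=0.90)  # ordinary but reliable
best = max([("juice", juice), ("water", water)], key=lambda kv: kv[1])
print(best[0])  # water
```

The less desirable but more available option wins here, which is why a forager needs accurate updates on both quantities, not just one.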

Wednesday, September 20, 2017

How we see what we expect to see.

Kok et al. show that expectations can induce the preactivation of stimulus templates in our brain that resemble the neural signals actually generated when the stimulus is presented:

Significance
The way that we perceive the world is partly shaped by what we expect to see at any given moment. However, it is unclear how this process is neurally implemented. Recently, it has been proposed that the brain generates stimulus templates in sensory cortex to preempt expected inputs. Here, we provide evidence that a representation of the expected stimulus is present in the neural signal shortly before it is presented, showing that expectations can indeed induce the preactivation of stimulus templates. Importantly, these expectation signals resembled the neural signal evoked by an actually presented stimulus, suggesting that expectations induce similar patterns of activations in visual cortex as sensory stimuli.
Abstract
Perception can be described as a process of inference, integrating bottom-up sensory inputs and top-down expectations. However, it is unclear how this process is neurally implemented. It has been proposed that expectations lead to prestimulus baseline increases in sensory neurons tuned to the expected stimulus, which in turn, affect the processing of subsequent stimuli. Recent fMRI studies have revealed stimulus-specific patterns of activation in sensory cortex as a result of expectation, but this method lacks the temporal resolution necessary to distinguish pre- from poststimulus processes. Here, we combined human magnetoencephalography (MEG) with multivariate decoding techniques to probe the representational content of neural signals in a time-resolved manner. We observed a representation of expected stimuli in the neural signal shortly before they were presented, showing that expectations indeed induce a preactivation of stimulus templates. The strength of these prestimulus expectation templates correlated with participants’ behavioral improvement when the expected feature was task-relevant. These results suggest a mechanism for how predictive perception can be neurally implemented.

Tuesday, September 19, 2017

Computer design cues taken from human brains

Metz does an interesting article on the waning of do-it-all chips, central processing units of the sort that are running my MacBook Air as I type this, in favor of distributed systems that offload specialized tasks, like hearing and seeing, to A.I. (artificial intelligence) chips specialized for those tasks, much as the human brain stem oversees the system and sends different jobs to different specialized parts of the surrounding cortex (auditory, visual, somatosensory, motor, executive, motivational, etc.):
...machines that spread computations across vast numbers of tiny, low-power chips can operate more like the human brain, which efficiently uses the energy at its disposal.
…the leading internet companies are now training their neural networks with help from another type of chip called a graphics processing unit, or G.P.U. These low-power chips — usually made by Nvidia — were originally designed to render images for games and other software, and they worked hand-in-hand with the chip — usually made by Intel — at the center of a computer. G.P.U.s can process the math required by neural networks far more efficiently than C.P.U.s.
G.P.U.s are the primary vehicles that companies use to teach their neural networks a particular task, but that is only part of the process. Once a neural network is trained for a task, it must perform it, and that requires a different kind of computing power.
After training a speech-recognition algorithm, for example, Microsoft offers it up as an online service, and it actually starts identifying commands that people speak into their smartphones. G.P.U.s are not quite as efficient during this stage of the process. So, many companies are now building chips specifically to do what the other chips have learned.
Google built its own specialty chip, a Tensor Processing Unit, or T.P.U. Nvidia is building a similar chip. And Microsoft has reprogrammed specialized chips from Altera, which was acquired by Intel, so that it too can run neural networks more easily.
The hope is that this new breed of mobile chip can help devices handle more, and more complex, tasks on their own, without calling back to distant data centers: phones recognizing spoken commands without accessing the internet; driverless cars recognizing the world around them with a speed and accuracy that is not possible now.
In other words, a driverless car needs cameras and radar and lasers. But it also needs a brain.
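Why GPUs and TPUs fit this work: a neural network's forward pass is mostly large matrix multiplications, exactly the arithmetic these chips parallelize far better than a general-purpose CPU. A minimal NumPy sketch (shapes and weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # input -> hidden weights (arbitrary shapes)
W2 = rng.standard_normal((8, 3))   # hidden -> output weights

def forward(x):
    # Both training and inference are dominated by these matrix multiplies,
    # which is what GPUs (and, for trained networks, TPUs) accelerate.
    h = np.maximum(x @ W1, 0.0)    # ReLU hidden layer
    return h @ W2                  # output scores

out = forward(rng.standard_normal((2, 4)))  # a batch of 2 inputs
print(out.shape)  # (2, 3)
```

Scale those two small matrices up to millions of weights and the case for specialized parallel hardware, at training time and at inference time, follows directly.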

Monday, September 18, 2017

It’s all about tribes - not ideas, morals, or principles.

Thomas Edsall does another excellent piece on what is happening in our politics. I suggest you read it...here are a few clips:

Since the advent of Trump,
...white evangelicals went from being the least likely to the most likely group to agree that a candidate’s personal immorality has no bearing on his performance in public office.


Christopher Achen and Larry Bartels, political scientists at Princeton and Vanderbilt:
In the conventional view, democracy begins with the voters. Ordinary people have preferences about what their government should do. They choose leaders who will do those things, or they enact their preferences directly in referendums. In either case, what the majority wants becomes government policy... the more realistic view is that citizens’ perceptions of parties’ policy stands and their own policy views are significantly colored by their party preferences. Even on purely factual questions with clear right answers, citizens are sometimes willing to believe the opposite if it makes them feel better about their partisanship and vote choices... group and partisan loyalties, not policy preferences or ideologies, are fundamental in democratic politics.
Edsall cites further work showing that those with strongest Republican identification are most likely to embrace Trump's swings in political stance to either the right or the left.

Friday, September 15, 2017

The intractability of racial discrimination

Sobering findings from Quillian et al.:

Significance
Many scholars have argued that discrimination in American society has decreased over time, while others point to persisting race and ethnic gaps and subtle forms of prejudice. The question has remained unsettled due to the indirect methods often used to assess levels of discrimination. We assess trends in hiring discrimination against African Americans and Latinos over time by analyzing callback rates from all available field experiments of hiring, capitalizing on the direct measure of discrimination and strong causal validity of these studies. We find no change in the levels of discrimination against African Americans since 1989, although we do find some indication of declining discrimination against Latinos. The results document a striking persistence of racial discrimination in US labor markets.
Abstract
This study investigates change over time in the level of hiring discrimination in US labor markets. We perform a meta-analysis of every available field experiment of hiring discrimination against African Americans or Latinos (n = 28). Together, these studies represent 55,842 applications submitted for 26,326 positions. We focus on trends since 1989 (n = 24 studies), when field experiments became more common and improved methodologically. Since 1989, whites receive on average 36% more callbacks than African Americans, and 24% more callbacks than Latinos. We observe no change in the level of hiring discrimination against African Americans over the past 25 years, although we find modest evidence of a decline in discrimination against Latinos. Accounting for applicant education, applicant gender, study method, occupational groups, and local labor market conditions does little to alter this result. Contrary to claims of declining discrimination in American society, our estimates suggest that levels of discrimination remain largely unchanged, at least at the point of hire.

Thursday, September 14, 2017

Neuroforecasting crowd funding outcomes

Genevsky et al. find that measuring brain activity in the nucleus accumbens of individuals as they decide whether to fund projects described on an Internet crowdfunding website proves to be a better predictor of crowdfunding outcomes (weeks later) than direct behavioral measurements on the same individuals:

Abstract
Although traditional economic and psychological theories imply that individual choice best scales to aggregate choice, primary components of choice reflected in neural activity may support even more generalizable forecasts. Crowdfunding represents a significant and growing platform for funding new and unique projects, causes, and products. To test whether neural activity could forecast market-level crowdfunding outcomes weeks later, 30 human subjects (14 female) decided whether to fund proposed projects described on an Internet crowdfunding website while undergoing scanning with functional magnetic resonance imaging. Although activity in both the nucleus accumbens (NAcc) and medial prefrontal cortex predicted individual choices to fund on a trial-to-trial basis in the neuroimaging sample, only NAcc activity generalized to forecast market funding outcomes weeks later on the Internet. Behavioral measures from the neuroimaging sample, however, did not forecast market funding outcomes. This pattern of associations was replicated in a second study. These findings demonstrate that a subset of the neural predictors of individual choice can generalize to forecast market-level crowdfunding outcomes—even better than choice itself.
SIGNIFICANCE
Forecasting aggregate behavior with individual neural data has proven elusive; even when successful, neural forecasts have not historically supplanted behavioral forecasts. In the current research, we find that neural responses can forecast market-level choice and outperform behavioral measures in a novel Internet crowdfunding context. Targeted as well as model-free analyses convergently indicated that nucleus accumbens activity can support aggregate forecasts. Beyond providing initial evidence for neuropsychological processes implicated in crowdfunding choices, these findings highlight the ability of neural features to forecast aggregate choice, which could inform applications relevant to business and policy.

Wednesday, September 13, 2017

Do Americans care about rising inequality?

Interesting ideas from McCall et al.:

Significance
Although rising economic inequality in the United States has alarmed many, research across the social sciences repeatedly concludes that Americans are largely unconcerned about it. We argue that this conclusion may be premature. Here, we present the results of three experiments that test a different perspective—the opportunity model of beliefs about inequality. Tempering the conclusions of past work, the findings suggest that perceptions of rising economic inequality spark skepticism about the existence of economic opportunity in society that, in turn, may motivate support for equity-enhancing policies. Hence, this work calls for new theoretical and methodological approaches to the study of rising economic inequality, especially those that bridge disciplinary boundaries, as well as the largely separate experimental and correlational literatures.
Abstract
Economic inequality has been on the rise in the United States since the 1980s and by some measures stands at levels not seen since before the Great Depression. Although the strikingly high and rising level of economic inequality in the nation has alarmed scholars, pundits, and elected officials alike, research across the social sciences repeatedly concludes that Americans are largely unconcerned about it. Considerable research has documented, for instance, the important role of psychological processes, such as system justification and American Dream ideology, in engendering Americans’ relative insensitivity to economic inequality. The present work offers, and reports experimental tests of, a different perspective—the opportunity model of beliefs about economic inequality. Specifically, two convenience samples (study 1, n = 480; and study 2, n = 1,305) and one representative sample (study 3, n = 1,501) of American adults were exposed to information about rising economic inequality in the United States (or control information) and then asked about their beliefs regarding the roles of structural (e.g., being born wealthy) and individual (e.g., hard work) factors in getting ahead in society (i.e., opportunity beliefs). They then responded to policy questions regarding the roles of business and government actors in reducing economic inequality. Rather than revealing insensitivity to rising inequality, the results suggest that rising economic inequality in contemporary society can spark skepticism about the existence of economic opportunity in society that, in turn, may motivate support for policies designed to redress economic inequality.

Tuesday, September 12, 2017

How to regulate Artificial Intelligence

As a postscript to MindBlog's Aug. 23 post on Artificial Intelligence (AI), I pass on chunks from Oren Etzioni's more recent piece on how artificial intelligence might be regulated, written in response to the apocalyptic fears being voiced by Elon Musk, Stephen Hawking, and others. While caution is in order, he doesn't think progress in AI should be slowed by concerns that it will run amok, because competition with China for primacy is intense. He suggests amending:
...the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.
Pointing out their ambiguity, he suggests an alternative set of rules, as a starting point for further discussion:
1...an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.
2...an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford.
3...an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyze information, A.I. systems are in a prime position to acquire confidential information. Think of all the conversations that Amazon Echo — a “smart speaker” present in an increasing number of homes — is privy to, or the information that your child may inadvertently divulge to a toy such as an A.I. Barbie.

Monday, September 11, 2017

How “ought” exceeds but implies “can”

From John Turri:
This paper tests a theory about the relationship between two important topics in moral philosophy and psychology. One topic is the function of normative language, specifically claims that one “ought” to do something. Do these claims function to describe moral responsibilities, encourage specific behavior, or both? The other topic is the relationship between saying that one “ought” to do something and one’s ability to do it. In what respect, if any, does what one “ought” to do exceed what one “can” do? The theory tested here has two parts: (1) “ought” claims function to both describe responsibilities and encourage people to fulfill them (the dual-function hypothesis); (2) the two functions relate differently to ability, because the encouragement function is limited by the person’s ability, but the descriptive function is not (the interaction hypothesis). If this theory is correct, then in one respect “ought implies can” is false because people have responsibilities that exceed their abilities. But in another respect “ought implies can” is legitimate because it is not worthwhile to encourage people to do things that exceed their ability. Results from two behavioral experiments support the theory that “ought” exceeds but implies “can.” Results from a third experiment provide further evidence regarding an “ought” claim’s primary function and how contextual features can affect the interpretation of its functions.

Friday, September 08, 2017

Debate over a scientific wellness study.

Ryan Cross discusses reactions to a new "scientific wellness" pilot study set up by distinguished biologist Lee Hood, and to his group's recent report on the effort in Nature Biotechnology:
Leroy “Lee” Hood is one of biology's living legends. Now 78 years old, he played an influential role in the development of the first automated DNA sequencer, pioneered systems biology, and still leads an institute devoted to it in Seattle, Washington. But his latest venture may not burnish his reputation: a company promoting “scientific wellness,” the notion that intensive, costly monitoring and coaching of apparently healthy people can head off disease.
In a pilot study of the concept, Hood and colleagues compiled what he calls “personal, dense, dynamic data clouds” for 108 people: full genome sequences; blood, saliva, urine, and stool samples taken three times at 3-month intervals and analyzed for 643 metabolites and 262 proteins; and physical activity and sleep monitoring. The team reports in the August issue of Nature Biotechnology that dozens of the participants turned out to have undiscovered health risks, including prediabetes and low vitamin D, which the coaching helped them address...nearly every participant had something to worry about: Ninety-five had low vitamin D levels, 81 had high mercury levels, and 52 were considered prediabetic. One person had high blood levels of the iron-containing protein ferritin and a genetic risk for developing hemochromatosis.
Hood says the findings justify commercializing the monitoring, in a service costing thousands of dollars a year. But some colleagues disagree. The effort takes health monitoring “to new heights, or depths, depending on how you look at it,” says Eric Topol, director of the Scripps Translational Science Institute in San Diego, California....many of the problems the monitoring uncovered could be detected with simpler and cheaper tests, he adds.
Clayton Lewis, one of the subjects in the first study,
...joined with study leaders Nathan Price and Hood to launch the new company, called Arivale, with Lewis as CEO. Now 2 years old, the Seattle-based company has already enrolled 2500 people. They pay a first-year $3499 subscription fee for tracking and analysis similar to the pilot study, and nearly all have opted to let their data be used in research by Hood's Institute of Systems Biology.
From Jonathan Berg, a physician scientist who studies cancer and genetics at the University of North Carolina School of Medicine in Chapel Hill:
The problem is that we don't have any idea at all how this information should be used clinically. Topol agrees, noting that he had comparable concerns about a similar barrage of tests on presumably healthy people, including genome sequencing and a full-body MRI scan, from a company launched by another genome legend, J. Craig Venter.