Friday, April 21, 2017

A.I. better at predicting heart attacks, learns implicit racial and gender bias.

Lohr notes a study suggesting we need to develop an "A.I. index," analogous to the Consumer Price Index, to track the pace and spread of artificial intelligence technology. Two striking recent findings in this field:

Weng et al. show that AI is better at predicting heart attacks from routine clinical data on risk factors than human doctors are. Hutson notes that the best of the four A.I. algorithms tried (neural networks) correctly predicted 7.6% more events than the American College of Cardiology/American Heart Association (ACC/AHA) method, which is based on eight risk factors, including age, cholesterol level, and blood pressure, that physicians effectively add up; it also raised 1.6% fewer false alarms.
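To make the idea concrete, here is a minimal sketch of the general approach: train a small neural network classifier on routine risk factors to predict whether a cardiovascular event occurs. This is not the Weng et al. model or their data; the features and labels below are synthetic stand-ins, and the scikit-learn setup is just one plausible way to do it.

```python
# Hypothetical sketch: a small neural network trained on routine risk factors
# (stand-ins for age, cholesterol, blood pressure, etc.) to predict events.
# NOT the Weng et al. model; the data here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: 8 routine risk factors per patient, binary event label.
n_patients = 5000
X = rng.normal(size=(n_patients, 8))
logits = X @ rng.normal(size=8) + 0.3 * X[:, 0] * X[:, 1]  # mild interaction
y = (logits + rng.normal(scale=1.0, size=n_patients) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_train)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

probs = model.predict_proba(scaler.transform(X_test))[:, 1]
print("AUC on held-out patients:", round(roc_auc_score(y_test, probs), 3))
```

The appeal of this kind of model over an additive risk score is that it can pick up interactions among risk factors that a simple weighted sum misses.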

Caliskan et al. show that machines can learn word associations from written texts and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT). In large bodies of English-language text, the machines decipher content corresponding to human attitudes (likes and dislikes) and stereotypes. In addition to revealing a new comprehension skill for machines, the work raises the specter that this machine ability may become an instrument of unintended discrimination based on gender, race, age, or ethnicity. A toy sketch of the underlying association measure follows their abstract below. Their abstract:
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
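As promised above, here is an illustrative sketch of the kind of association measure involved: a word's bias score is its mean cosine similarity to one attribute set minus its mean similarity to the other. The vectors below are tiny hand-made toy values, not the word embeddings Caliskan et al. actually analyzed.

```python
# Toy illustration of an embedding-association score: mean cosine similarity
# of a target word to one attribute set minus its similarity to the other.
# The 3-d "embeddings" are made-up values for illustration only.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attrs_a, attrs_b, vec):
    """Mean similarity of `word` to attrs_a minus its mean similarity to attrs_b."""
    return (np.mean([cosine(vec[word], vec[a]) for a in attrs_a])
            - np.mean([cosine(vec[word], vec[b]) for b in attrs_b]))

vec = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

A, B = ["pleasant"], ["unpleasant"]
print("flower:", round(association("flower", A, B, vec), 3))  # positive: leans pleasant
print("insect:", round(association("insect", A, B, vec), 3))  # negative: leans unpleasant
```

With real embeddings trained on web text, the same arithmetic recovers the flower/insect result the abstract mentions, along with the more troubling gender and race associations.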
