Here are some clips from an interesting Op-Ed piece by Quentin Hardy on artificial intelligence. (And, by the way, a recent issue of Science magazine has a special section on A.I. with a series of related articles.)
...the real worry...is a computer program rapidly overdoing a single task, with no context. A machine that makes paper clips proceeds unfettered, one example goes, and becomes so proficient that overnight we are drowning in paper clips.
There is little sense among practitioners in the field of artificial intelligence that machines are anywhere close to acquiring the kind of consciousness where they could form lethal opinions about their makers...doomsday scenarios confuse the science with remote philosophical problems about the mind and consciousness...If more people learned how to write software, they’d see how literal-minded these overgrown pencils we call computers actually are.
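An aside on that "literal-minded" point: it is easy to show in a few lines of code. Here is a minimal, purely hypothetical sketch of the paper-clip scenario, with invented names like make_paper_clip. The program's only objective is "more paper clips," and since nothing in the code represents context, it never decides it has made enough.

```python
# A toy "paper-clip maximizer", a purely hypothetical illustration.
# The program's entire world is one integer; nothing in it represents
# demand, storage, or "enough," so the loop never stops on its own.

paper_clips = 0

def make_paper_clip() -> int:
    """Produce one paper clip. The machine is very good at this."""
    return 1

while True:  # the objective, taken literally: maximize paper clips
    paper_clips += make_paper_clip()
    if paper_clips % 1_000_000 == 0:
        print(f"{paper_clips:,} paper clips and counting...")
    # No "that's enough" branch exists: context is exactly what's missing.
```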
Deep Learning relies on a hierarchical reasoning technique called neural networks, suggesting the neurons of a brain. Comparing a node in a neural network to a neuron, though, is at best like comparing a toaster to the space shuttle....But machine learning is automation, a better version of what computers have always done. The “learning” is not stored and generalized in the ways that make people smart.
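To see why comparing a network node to a neuron is like comparing a toaster to the space shuttle, here is a minimal sketch of what one "neuron" in a neural network actually computes: a weighted sum of its inputs, passed through a simple squashing function, and nothing more. The inputs, weights, and bias below are arbitrary placeholder numbers.

```python
import math

def node(inputs: list[float], weights: list[float], bias: float) -> float:
    """One 'neuron': a weighted sum of inputs pushed through a
    squashing function. That is the whole mechanism."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic (sigmoid) activation

# Everything this node "knows" lives in a handful of numbers.
print(node(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```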
DeepMind made a program that mastered simple video games, but it never took the learning from one game into another. The 22 rungs of a neural net it climbs to figure out what is in a picture do not operate much like human image recognition and are still easily defeated...Moving out of that stupidity to a broader humanlike capability is called “transfer learning.” It is at best in the research phase.
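A crude way to see why that learning doesn't carry over: if what a program "learned" amounts to a lookup table keyed by one game's states, then a second game's states never match any entry. The sketch below is a deliberately simplified, hypothetical stand-in, not DeepMind's actual method; the games and state tuples are invented.

```python
# Toy illustration of learning that does not transfer. The "Q-table"
# is keyed by states from one game, so states from a second game never
# match anything it memorized.

q_table: dict[tuple, float] = {}

def learn(state: tuple, value: float) -> None:
    q_table[state] = value

def recall(state: tuple) -> float:
    # Unseen states fall back to 0.0: the agent knows nothing about them.
    return q_table.get(state, 0.0)

# "Master" game A by memorizing values for its states.
learn(("breakout", "ball_left", "paddle_right"), 0.9)
learn(("breakout", "ball_right", "paddle_left"), 0.8)

# Ask about game B: every lookup misses, so game A's training is useless here.
print(recall(("pong", "ball_left", "paddle_right")))  # prints 0.0
```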
“People in A.I. know that a chess-playing computer still doesn’t yearn to capture a queen,” said Stuart Russell, a professor of computer science at the University of California, Berkeley... He seeks mathematical ways to ensure dumb programs don’t conflict with our complex human values.