Two recent NYTimes pieces -
one by Sherry Turkle (professor in the program in Science, Technology and Society at M.I.T.) and
the other by Andy Clark (professor of logic and metaphysics at the University of Edinburgh) - lay out starkly opposing views of the desirability of humans moving toward increased interactions with, and possible enhancements by, robots. You should read both. Turkle sees a potential diminution of our humanity:
The narrative begins with the idea that companionate robots would be “better than nothing,” better because there aren’t enough people to teach, love and tend to people. But that idea quickly shifts into another: robots would be better than most anything. Unlike people, they would not abandon you or get sick and die. They might not be capable of love, but they won’t break your heart.
From better than nothing to better than anything. These are stations on our voyage to forgetting what it means to be human. But the forgetting begins long before we have a robot companion in place; it begins when we even think of putting one in place. To build the robots, we must first rebuild ourselves as people ready to be their companions.
Clark looks towards a glorious enhancement of what it means to be human. He begins with a list that includes improving normal mental functioning and generating a wide spectrum of ways of being:
We now glimpse the next steps in human cultural and cognitive evolution, continuing the trend that started with the arrival of human language and the (much later) invention of writing and the external storage and transmission of ideas. The new steps herald an age of fluidity and demand answers to a host of questions… The two most important such questions are simply: How should we negotiate this dauntingly large space of human possibility? And what costs are we willing to tolerate along the way?
The first is a question of practice, the second of ethics. Practically speaking, it will not be easy to decide in a world of so many possible ways of being, so many enhancements and augmentations, and so many social practices, which ones are for us.
Ethically speaking, we need to ask what new costs and inequalities the freedoms and augmentations of some may mean for others. We need to ask if we are willing to tolerate some inequality as part of the rollout process for a more fluid and interconnected world. Issues of privacy and the right to control (including to trade or sell) our personal information are vividly with us. Not knowing quite where we as protected selves stop and the world around us begins, law and policy struggle to decide if (for example) information stored on our phones is enough like information stored in our heads to warrant the same protections. Law, education and social policy currently lag behind many interacting waves of change. What is up for grabs is what we humans are, and what we will become.
(Note,