This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.

Monday, March 12, 2007

Odor cues during sleep stimulate memory.

The March 9 issue of Science has an interesting report by Rasch et al. and commentary by Miller on experiments demonstrating that pulses of an odor (rose scent) given during a learning task improve consolidation of the memory of that task if the odor is given again during slow-wave sleep. The abstract:

Sleep facilitates memory consolidation. A widely held model assumes that this is because newly encoded memories undergo covert reactivation during sleep. We cued new memories in humans during sleep by presenting an odor that had been presented as context during prior learning, and so showed that reactivation indeed causes memory consolidation during sleep. Re-exposure to the odor during slow-wave sleep (SWS) improved the retention of hippocampus-dependent declarative memories but not of hippocampus-independent procedural memories. Odor re-exposure was ineffective during rapid eye movement sleep or wakefulness or when the odor had been omitted during prior learning. Concurring with these findings, functional magnetic resonance imaging revealed significant hippocampal activation in response to odor re-exposure during SWS.
Blog Categories:
attention/perception,
memory/learning
Robot Dreams
There is a very interesting exchange in the Letters section of the March 2 issue of Science Magazine. R. Conduit comments on a Perspectives article "What do robots dream of?" (17 Nov. 2006, p. 1093) by C. Adami, which provides an interesting interpretation of the Report "Resilient machines through continuous self-modeling" by J. Bongard et al. (17 Nov. 2006, p. 1118).
Bongard et al. designed a robot with an algorithm that used its stored sensory data to indirectly infer its physical structure. The robot was able to generate forward motion more adaptively by manipulating its gait to compensate for simulated injuries. Adami equates this algorithm to "dreams" of prior actions and asks whether such modeling could extend to environmental mapping algorithms. If this were possible, then a robot could explore a landscape until it is challenged by an obstacle; overnight, it could replay its actions against its model of the environment and generate (or synthesize) new actions to overcome the obstacle (i.e., "dream up" alternative strategies). It could then return the next day with a new approach to the obstacle...
This work in robotics complements current findings regarding sleep and dreaming in humans. There is now strong evidence in human sleep research showing that performance on motor and visual tasks is strongly dependent on sleep, with improvements consistently greater when sleep occurs between test and retest. This is generally believed to be related to neural recoding processes that are possibly connected to dreaming during sleep. However, when one considers human dreaming, it is not a simple replay of daily scenarios. It has complex, distorted images from a vast variety of times and places in our memory, arranged in a random, bizarre fashion. If we are to model such activity in robots, we would need to have some form of "sleep" algorithm that randomizes memory and combines it in unique arrays. This could be a way to generate unique approaches to scenarios that could be simulated. Otherwise, how else would scenario replay be an improvement over repeated trials in the environment?
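Conduit's suggested "sleep" algorithm, randomizing stored memory and recombining it into novel candidate strategies tested against a simulated environment, is concrete enough to sketch in a few lines of Python. Everything below (the action vocabulary, the toy environment model, the splicing scheme) is an invented illustration, not anything from Bongard et al.'s actual code:

```python
import random

# A toy "sleep" pass in the spirit of Conduit's suggestion: stored action
# sequences are cut up, spliced together, and replayed against a cheap
# internal model of the environment to "dream up" novel strategies.
# All details (actions, simulator, scoring) are invented for illustration.

def simulate(plan):
    """Hypothetical environment model: score a plan for progress past an
    obstacle. A stand-in that rewards a climb preceded by forward motion."""
    score = 0
    for prev, curr in zip(plan, plan[1:]):
        if prev == "forward" and curr == "climb":
            score += 1
    return score

def dream(memories, n_dreams=1000, rng=random.Random(0)):
    """Recombine fragments of remembered plans into novel candidates."""
    best_plan, best_score = None, float("-inf")
    for _ in range(n_dreams):
        # Splice a random prefix of one memory onto a random suffix of another.
        a, b = rng.choice(memories), rng.choice(memories)
        candidate = a[:rng.randrange(len(a))] + b[rng.randrange(len(b)):]
        score = simulate(candidate)
        if score > best_score:
            best_plan, best_score = candidate, score
    return best_plan

# Daytime experience: plans that each failed at the obstacle.
memories = [["forward", "forward", "back"],
            ["turn_left", "forward", "turn_right"],
            ["climb", "back", "forward"]]
print(dream(memories))  # a recombined plan containing forward -> climb
```

The point is Conduit's: splicing fragments from different memories can produce action sequences the robot never actually tried in the environment.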
After a further comment letter from C. Adami, Lipson, Zykov and Bongard (the original authors) comment:
The analogy between machine and human cognition may suggest that reported bizarre, random dreams may not be entirely random. The robot we described did not just replay its experiences to build consistent internal self-models and then "dream up" an action based on those models. Instead, it synthesized new brief actions that deliberately caused its competing internal models to disagree in their predictions, thus challenging them to falsify less plausible theories and, as a result, improving its overall knowledge of self. It is possible that the mangled experiences that people report as bizarre dreams correspond to this unconscious search for actions able to clarify their self-perceptions. Many of the intermediate candidate models and actions developed by the robot (as seen in Movie S1 in our Supporting Online Material) were indeed very contorted, but were optimized nonetheless to elucidate uncertainties. Edelman (1), Calvin (2), and others have suggested the existence of competitive processes in the brain. Perhaps the fact that human dreams appear mangled and brief is exactly because they are (as in the robot) "optimized" to challenge and improve these competing internal models?
Indeed, analogies between machines learning from past experiences and human dreaming are potentially very fruitful and may be applicable in both directions. Although robots and their onboard algorithms are clearly simpler and may bear little or no direct relation to humans and their minds, it may be much easier to test hypotheses about humans in robots. Conversely, ideas from human cognition research may help direct robotic research beyond merely serving as inspiration. Specifically, it is likely that as robots become more complex and their internal models are formed indirectly rather than being explicitly engineered and represented, indirect probing techniques developed for studying humans may become essential for analyzing machines too.
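The action-selection idea in the authors' reply (probe with whatever action makes your competing self-models disagree most, then cull the losers) is essentially what machine-learning people call query by committee. A minimal sketch, assuming toy linear self-models and a made-up "body," not anything from the actual robot:

```python
import numpy as np

# Minimal "query by committee" sketch: keep several competing internal
# models, and pick the test action on which their predictions disagree
# most. The models and dynamics are invented toy examples.

rng = np.random.default_rng(0)

def true_response(action):
    """The robot's real (unknown to it) body: response to a test action."""
    return 2.0 * action + rng.normal(0.0, 0.05)

# Committee of candidate self-models, each predicting response = gain * action.
candidate_gains = np.array([0.5, 1.0, 2.0, 3.0])

for step in range(5):
    # Choose the action where the surviving models disagree most.
    actions = np.linspace(-1.0, 1.0, 21)
    preds = np.outer(candidate_gains, actions)   # model x action predictions
    disagreement = preds.var(axis=0)             # variance across models
    a = actions[disagreement.argmax()]

    # Execute the action, observe the body, discard the worst-fitting model.
    y = true_response(a)
    errors = np.abs(candidate_gains * a - y)
    if len(candidate_gains) > 1:
        candidate_gains = np.delete(candidate_gains, errors.argmax())

print("surviving self-model gain:", candidate_gains)  # approaches 2.0
```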
Friday, March 09, 2007
Ultimately, monopolies fail...
An essay by Barry Smith argues that attempts to dictate our tastes, our preferences, our culture, our media, our political policies, or our moral choices are bound in the end to fail because of the basic nature of our human cognition.
...Restless creatures that we are, we seek out variety and difference, opportunities to extend the scope of our thinking and to exercise discrimination and taste. This may make us hard to satisfy, but, ultimately, it is this lack of satisfaction that leads to progress and spells the end of hegemonies in ideology, religion, or science...I am optimistic that people who are fed a constant diet of the same ideas, the same foods, the same TV programmes, the same religious or political dogmas will eventually come to consider other possibilities...The lesson is already being learned in the corporate world where monopolies try to cope with this by diversifying their range of services. Their chance of survival will depend on how cynically or sincerely they respond to this restless aspect of the human mind.
Human cognition depends on change and movement in order to function. Evolution has built us this way. Try staring at a blank wall for several seconds without blinking and you will find the image eventually bleaching until you can see nothing. The eye’s visual workings respond to movement and change. So too do the other parts of our cognitive systems. Feed them the same inputs successively and they cease to produce very much worth having as output. Like the shark in water, we need to keep moving or, cognitively, we die.
...there is a paradox in our nature and our restless search for change. For unless we countenance change for change's sake, or the relativist doctrine that anything goes (and I don't), how do we preserve the very best of our thinking, select better quality experiences, and maintain our purposes, directions and values? How do we avoid losing sight of older wisdom while rushing towards something new? It is here, perhaps, that our need for variation and discrimination serves us best. For the quick and gimmicky, the superficially appealing but weakest objects of our thinking or targets of desire will also be the least substantial and have an essential blandness that can tire us quickly. Besides, the more experience we have, the larger the background against which to compare and judge the worth or quality of what is newly encountered, and to decide if it will be ultimately rewarding. Certainly, people can be fickle or stubborn, but they are seldom fickle or stubborn for long. They will seek out better, according to what they are presently capable of responding to, and they will be dissatisfied by something not worthy of the attention they are capable of. For this reason attempts to dictate their tastes, cultural goods, ideologies or ideas are bound in the end to fail, and about that, and despite many dark forces around us, I am optimistic.
Blog Categories:
culture/politics,
futures,
psychology
Losing a night's sleep makes you less able to form new memories.
Yoo et al. report that:
..a single night of sleep deprivation produces a significant deficit in hippocampal activity during episodic memory encoding, resulting in worse subsequent retention. Furthermore, these hippocampal impairments instantiate a different pattern of functional connectivity in basic alertness networks of the brainstem and thalamus. We also find that unique prefrontal regions predict the success of encoding for sleep-deprived individuals relative to those who have slept normally. These results demonstrate that an absence of prior sleep substantially compromises the neural and behavioral capacity for committing new experiences to memory. It therefore appears that sleep before learning is critical in preparing the human brain for next-day memory formation—a worrying finding considering society's increasing erosion of sleep time.
Thursday, March 08, 2007
Sad News....
A friend has emailed me that his beloved iMac was laid to rest today in Chicago Heights...it was an open-pallet service.
Why stronger sniffing catches weak odors...
Grosmaitre et al. report in Nature Neuroscience that up to half of mammalian olfactory sensory neurons respond to mechanical stimulation through air-pressure changes, as well as to specific smells. The responses seem to share the same cellular pathway, with increased air pressure raising the firing rate of neurons that have been weakly stimulated by odorants. This mechanism may help to synchronize the firing of neurons in the olfactory bulb with breathing.
Why do we believe - Darwin’s God
The New York Times Sunday Magazine of 3/4/07 contains an interesting article by Robin Marantz Henig on why:
...there seems an inherent human drive to believe in something transcendent, unfathomable and otherworldly, something beyond the reach or understanding of science...The debate over why belief evolved is between byproduct theorists and adaptationists.

Byproduct Theorists:
Darwinians who study physical evolution distinguish between traits that are themselves adaptive, like having blood cells that can transport oxygen, and traits that are byproducts of adaptations, like the redness of blood. There is no survival advantage to blood's being red instead of turquoise; it is just a byproduct of the trait that is adaptive, having blood that contains hemoglobin.
Something similar explains aspects of brain evolution, too, say the byproduct theorists...Hardships of early human life favored the evolution of certain cognitive tools, among them the ability to infer the presence of organisms that might do harm, to come up with causal narratives for natural events and to recognize that other people have minds of their own with their own beliefs, desires and intentions. Psychologists call these tools, respectively, agent detection, causal reasoning and theory of mind (or folk psychology). [See Atran, “In Gods We Trust: The Evolutionary Landscape of Religion,” 2002.]
Folk psychology, as Atran and his colleagues see it, is essential to getting along in the contemporary world, just as it has been since prehistoric times. It allows us to anticipate the actions of others and to lead others to believe what we want them to believe; it is at the heart of everything from marriage to office politics to poker...The process begins with positing the existence of minds, our own and others', that we cannot see or feel. This leaves us open, almost instinctively, to belief in the separation of the body (the visible) and the mind (the invisible). If you can posit minds in other people that you cannot verify empirically, suggests Paul Bloom, a psychologist and the author of "Descartes' Baby," published in 2004, it is a short step to positing minds that do not have to be anchored to a body. And from there, he said, it is another short step to positing an immaterial soul and a transcendent God.
The bottom line, according to byproduct theorists, is that children are born with a tendency to believe in omniscience, invisible minds, immaterial souls — and then they grow up in cultures that fill their minds, hard-wired for belief, with specifics. It is a little like language acquisition, Paul Bloom says, with the essential difference that language is a biological adaptation and religion, in his view, is not. We are born with an innate facility for language but the specific language we learn depends on the environment in which we are raised. In much the same way, he says, we are born with an innate tendency for belief, but the specifics of what we grow up believing — whether there is one God or many, whether the soul goes to heaven or occupies another animal after death — are culturally shaped...
The Adaptationists:

Trying to explain the adaptiveness of religion means looking for how it might have helped early humans survive and reproduce. As some adaptationists see it, this could have worked on two levels, individual and group. Religion made people feel better, less tormented by thoughts about death, more focused on the future, more willing to take care of themselves. As William James put it, religion filled people with "a new zest which adds itself like a gift to life . . . an assurance of safety and a temper of peace and, in relation to others, a preponderance of loving affections."
Such sentiments, some adaptationists say, made the faithful better at finding and storing food, for instance, and helped them attract better mates because of their reputations for morality, obedience and sober living. The advantage might have worked at the group level too, with religious groups outlasting others because they were more cohesive, more likely to contain individuals willing to make sacrifices for the group and more adept at sharing resources and preparing for warfare.
One of the most vocal adaptationists is David Sloan Wilson, an occasional thorn in the side of both Scott Atran and Richard Dawkins. Wilson, an evolutionary biologist at the State University of New York at Binghamton, focuses much of his argument at the group level. "Organisms are a product of natural selection," he wrote in "Darwin's Cathedral: Evolution, Religion, and the Nature of Society," which came out in 2002. "...Through countless generations of variation and selection, [organisms] acquire properties that enable them to survive and reproduce in their environments. My purpose is to see if human groups in general, and religious groups in particular, qualify as organismic in this sense."
Dawkins once called Wilson’s defense of group selection “sheer, wanton, head-in-bag perversity.” Atran, too, has been dismissive of this approach, calling it “mind blind” for essentially ignoring the role of the brain’s mental machinery. The adaptationists “cannot in principle distinguish Marxism from monotheism, ideology from religious belief,” Atran wrote. “They cannot explain why people can be more steadfast in their commitment to admittedly counterfactual and counterintuitive beliefs — that Mary is both a mother and a virgin, and God is sentient but bodiless — than to the most politically, economically or scientifically persuasive account of the way things are or should be.”
So,
What can be made of atheists, then? If the evolutionary view of religion is true, they have to work hard at being atheists, to resist slipping into intrinsic habits of mind that make it easier to believe than not to believe. Atran says he faces an emotional and intellectual struggle to live without God in a nonatheist world, and he suspects that is where his little superstitions come from, his passing thought about crossing his fingers during turbulence or knocking on wood just in case. It is like an atavistic theism erupting when his guard is down. The comforts and consolations of belief are alluring even to him, he says, and probably will become more so as he gets closer to the end of his life. He fights it because he is a scientist and holds the values of rationalism higher than the values of spiritualism.
This internal push and pull between the spiritual and the rational reflects what used to be called the “God of the gaps” view of religion. The presumption was that as science was able to answer more questions about the natural world, God would be invoked to answer fewer, and religion would eventually recede. Research about the evolution of religion suggests otherwise. No matter how much science can explain, it seems, the real gap that God fills is an emptiness that our big-brained mental architecture interprets as a yearning for the supernatural. The drive to satisfy that yearning, according to both adaptationists and byproduct theorists, might be an inevitable and eternal part of what Atran calls the tragedy of human cognition.
Blog Categories:
evolutionary psychology,
human evolution,
religion
Wednesday, March 07, 2007
A membrane protein controlling social memory and maternal care in mice.
Oxytocin is gaining increasing recognition as a master regulator of affiliative behaviors in mice as well as humans. Duo Jin et al. now show that genetically knocking out CD38, a transmembrane glycoprotein required for oxytocin secretion by axon terminals in the hypothalamus, causes defective maternal nurturing and social behavior in male and female mice. Replacement of oxytocin by subcutaneous injection or lentiviral-vector-mediated delivery of human CD38 in the hypothalamus rescues social memory and maternal care.
Yet another molecule the genetic engineers might one day dink with to make us more kind and gentle people??
Getting past "mind bugs"
From Mahzarin Banaji, Psychology Department at Harvard:
I am bullish about the mind's ability to unravel the beliefs contained within it—including beliefs about its own nature...the ability of humans everywhere to go against the grain of their own beliefs that are familiar, that feel natural and right, and that appear to be fundamentally true...
We've done this sort of unraveling many times before, whether it is about the relationship of the sun to the earth, or the relationship of other species to us. We've put aside what seemed natural, what felt right, and what came easily in favor of the opposite. I am optimistic that we are now ready to do the same with questions about the nature of our own minds. From the work of pioneers such as Herb Simon, Amos Tversky, and Danny Kahneman we know that the beliefs about our own minds that come naturally, feel right, and are easy to accept aren't necessarily true. That the bounds on rationality keep us from making decisions that are in our own interest, in the interest of those we love, in the long-term interest of our societies, even the planet, even perhaps the universe, with which we will surely have greater opportunity to interact in this century.
Here are some examples of what seems natural, feels right, and is easy to believe in—even though it isn't rational or true.
We irrationally anchor: ask people to generate their social security number and then an estimate of the number of doctors in their city, and the correlation between the two numbers will be significantly positive, when in fact it ought to be zero—there's no relation between the two variables. That's because we can't put the first one aside as we generate the second.
We irrationally endow: give somebody a cheap mug, and once it's "my mug" through ownership (and nothing else) it becomes, in our minds, a somewhat less cheap mug. Endowed with higher value, we are likely to demand a higher price for it than it is worth or is in our interest to demand.
We irrationally see patterns where none exist: Try to persuade a basketball player, fan, or statistician that there isn't anything to the idea of streak shooting; that chance is lumpy and that that's all there is to Michael Jordan's "hot hand".
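That "chance is lumpy" claim is easy to check numerically. A minimal simulation (shot count and hit rate chosen arbitrarily): a pure coin-flip shooter still produces impressive runs, no hot hand required.

```python
import random

# "Chance is lumpy": even a coin-flip shooter produces streaks.
# Simulate many 20-shot games at a constant 50% hit rate and record the
# longest run of consecutive makes in each game.

def longest_streak(shots):
    best = run = 0
    for hit in shots:
        run = run + 1 if hit else 0
        best = max(best, run)
    return best

rng = random.Random(0)
games = [[rng.random() < 0.5 for _ in range(20)] for _ in range(100_000)]
streaks = [longest_streak(g) for g in games]

print("mean longest streak:", sum(streaks) / len(streaks))        # close to 4
print("fraction of games with a streak of 5 or more:",
      sum(s >= 5 for s in streaks) / len(streaks))                # roughly 1 in 4
```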
...such "mind bugs" extend to the beliefs and preferences we have about ourselves, members of our own social groups, and those who sit farther away on a scale of social distance....We don't intend to discriminate or treat unfairly, but we do....The ability to think about one's own long range interest, to self-regulate and delay gratification, to consider the well-being of the collective, especially to view the collective as unbounded by religion, language, or nationality requires a mental leap that isn't natural or easy. And yet each new generation seems to be able to do it more successfully than the previous one...old beliefs come unraveled because such unraveling is in our self-interest...we unravel existing beliefs and preferences because we wish them to be in line with our intentions and aspirations and recognize that they are not. I see evidence of this everywhere—small acts to be the person one wishes to be rather than the person one is—and it is the constant attempt at this alignment that gives me optimism.
Tuesday, March 06, 2007
Why are primate brains smarter than rodent brains of the same size?
Herculano-Houzel et al. ask whether a difference in the cellular composition of rodent and primate brains might underlie the better cognitive abilities of primates. They show that in primates:
...brain size increases approximately isometrically as a function of cell numbers, such that an 11x larger brain is built with 10x more neurons and ≈12x more nonneuronal cells of relatively constant average size. This isometric function is in contrast to rodent brains, which increase faster in size than in numbers of neurons. As a consequence of the linear cellular scaling rules, primate brains have a larger number of neurons than rodent brains of similar size, presumably endowing them with greater computational power and cognitive abilities.

If the same rules relating numbers of neurons to brain size in rodents also applied to primates, a brain comparable to ours, with ≈100 billion neurons, would weigh >45 kg and belong to a body of 109 tons, about the mass of the heaviest living mammal, the blue whale. This simple calculation indicates quite dramatically that cellular scaling rules differ between rodents and primates, not surprising given the different cognitive abilities of rodents and primates of similar brain size (e.g., between agoutis and owl monkeys or between capybaras and macaque monkeys).
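The arithmetic behind that comparison can be replayed in a few lines. The reference values and the rodent exponent below are illustrative assumptions chosen to be consistent with the quoted claim, not the paper's fitted numbers:

```python
# Back-of-envelope replay of the scaling argument. The reference values and
# the rodent exponent are illustrative assumptions, not the paper's figures.

RAT_NEURONS = 2e8        # ~200 million neurons (assumed reference point)
RAT_BRAIN_G = 2.0        # ~2 g rat brain (assumed reference point)
RODENT_EXPONENT = 1.6    # brain mass ~ neurons**1.6 (assumed; >1 means
                         # size grows faster than neuron number)
HUMAN_NEURONS = 1e11     # ~100 billion neurons

# Rodent rule: mass grows as a power of neuron number.
rodent_mass_g = RAT_BRAIN_G * (HUMAN_NEURONS / RAT_NEURONS) ** RODENT_EXPONENT

# Primate rule: roughly isometric, mass grows linearly with neuron number.
primate_mass_g = RAT_BRAIN_G * (HUMAN_NEURONS / RAT_NEURONS)

print(f"rodent-rule brain:  {rodent_mass_g / 1000:.0f} kg")   # ~40+ kg, in the
                                                              # ballpark of >45 kg
print(f"primate-rule brain: {primate_mass_g / 1000:.1f} kg")  # ~1 kg, close to
                                                              # an actual human brain
```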
Understanding the brain - an inductive leap?
This clip from a brief essay by Steve Grand, A.I. researcher:
"...it seems to me that almost everything we think we understand about the brain is wrong. We know an enormous amount about it now and just about none of it makes the slightest bit of sense. That's a good sign, I think. It shows us we've been looking at the wrong page of the map.
Let me try to illustrate this with a thought experiment: Suppose I give you a very complex system to study – not a brain but something equally perplexing. You discover quite quickly that one part of the system is composed of an array of elements, of three types. These elements emit signals that vary rapidly in intensity, so you name these the alpha, beta and gamma elements, and set out eagerly to study them. Placing a sensor onto examples of each type you find that their actual signal patterns are distressingly random and unpredictable, but with effort you discover that there are statistical regularities in their behaviour: beta and gamma elements are slightly more active than alpha elements; when betas are active, gammas in the same region tend to be suppressed; if one element changes in activity, its neighbours tend to change soon after; gammas at the top of the array are more active than those at the bottom, and so on. Eventually you amass an awful lot of information about these elements, but still none of it makes sense. You're baffled.
So allow me to reveal that the system you've been studying is a television set, and the alpha, beta and gamma elements are the red, green and blue phosphor dots on the screen. Does the evidence start to fit together now? Skies are blue and tend to be at the top, while fields are green and tend to be at the bottom; objects tend to move coherently across the picture. If you know what the entire TV image represents at any one moment, you'll be able to make valid predictions about which elements are likely to light up next. By looking at the entire array of dots at once, in the context of a good system-level theory of what's actually happening, all those seemingly random signals suddenly make sense. "Aha!"
The single-electrode recordings of the equivalent elements in the brain have largely been replaced by system-wide recordings made by fMRI now, but at the moment we still don't know what any of it means because we have the wrong model in our heads. We need an "aha" moment akin to learning that the phosphor dots above belong to a TV set, upon which images of natural scenes are being projected. Once we know what the fundamental operating principles are, everything will start to make sense very quickly. Painstaking deduction won't reveal this to us; I think it will be the result of a lucky hunch. But the circumstances are in place for that inductive leap to happen soon, and I find that tremendously exciting."
Monday, March 05, 2007
Fascinating Rhythm
This is the title of a review in Nature Magazine by Mayank Mehta of "Rhythms of the Brain" (György Buzsáki, Oxford University Press: 2006. 464 pp. £42, $69.95).
Brain waves are chaotic during an epileptic attack, as this electroencephalogram shows. (Credit, Nature Magazine).
Some clips from the review:
...neurons not only respond to stimuli, but often do so in a rhythmic fashion. The strength of neural rhythms can predict a subject's performance on a task. Even when we sleep, neurons in most parts of the brain are active in a highly rhythmic fashion. By contrast, epileptic fits and Parkinson's disease are accompanied by an abnormal increase in certain brain rhythms...Buzsáki describes a wide range of brain rhythms, ranging from very slow rhythms of the order of 1 cycle per second up to several hundred cycles per second. The frequency of rhythms changes as a function of development, ageing and disease. The frequency of oscillations often changes dramatically within a few seconds, as a function of the animal's behaviour...Buzsáki then moves on to describe possible functions of brain rhythms, such as resonance, synchronization of neural circuits, and improvement of signal-to-noise ratio by stochastic resonance...Buzsáki's book describes the amazing influence of oscillations on information encoding in the hippocampus and how this may be critical for learning facts and events. It ends with a discussion of some of the toughest problems in the field, such as what consciousness is, and how to irrefutably demonstrate the role of oscillations in brain function.
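One of those proposed functions, stochastic resonance, is concrete enough to demonstrate: a signal too weak ever to cross a neuron's firing threshold becomes visible in the output once a moderate amount of noise is added, while too much noise drowns it again. A minimal sketch with arbitrary parameters:

```python
import numpy as np

# Stochastic resonance in a threshold "neuron": a subthreshold sine wave
# plus noise is passed through a hard threshold, and we measure how well
# the spike output tracks the signal at three noise levels.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)   # peak 0.8: never crosses threshold alone
THRESHOLD = 1.0

for sigma in (0.1, 1.0, 3.0):          # too little, moderate, too much noise
    spikes = (signal + rng.normal(0.0, sigma, t.size)) > THRESHOLD
    corr = np.corrcoef(spikes.astype(float), signal)[0, 1]
    print(f"noise sigma={sigma:3.1f}  output/signal correlation={corr:.2f}")

# Moderate noise yields the highest correlation: the inverted-U signature
# of stochastic resonance.
```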
Jaron Lanier on transforming communication...
Some clips from his essay:
One extravagant idea is that the nature of communication itself might transform in the future as much as it did when language appeared.
Suppose you're enjoying an advanced future implementation of Virtual Reality and you can cause spontaneously designed things to appear and act and interact with the ease of sentences pouring forth during an ordinary conversation today.
Whether this is accomplished by measuring what your body does from the outside or by interacting via interior states of your brain is nothing more than an instrumentation question. Either way, we already have some clues about how the human organism might be able to improvise the content of a Virtual World.
That aspect of the brain which is optimized to control the many degrees of freedom of body motion is also well suited to controlling the many degrees of freedom of a superlative programming and design environment of the future.
Imagine a means of expression that is a cross between the three great new art forms of the 20th century: jazz improvisation, computer programming, and cinema. Suppose you could improvise anything that could be seen in a movie with the speed and facility of a jazz improviser.
A finite game is like a single game of baseball, with an end. An infinite game is like the overall phenomenon of baseball, which has no end. It is always a frontier.
So many utopian ideas are about Finite Games: End disease, solve global warming, get people to be more rational, reduce violence, and so on. As wonderful as all those achievements would (will!) be, there is something missing from them. Finite Game optimism suggests a circumscribed utopia, without frontier or mystery. The result isn't sufficiently inspiring for me, and apparently it doesn't quite grab the imaginations of a lot of other people who are endlessly fascinated by dubious religious and political utopias. The problem is heightened at the moment because there's a trope floating around in the sciences, probably unfounded, that we have already seen the outline of all the science we'll ever know, and we're just in the process of filling in the details.
The most valuable optimisms are Infinite Games, and imagining that new innovations as profound as language will come about in the future of human interaction is an example of one.
Blog Categories:
culture/politics,
futures,
language,
technology
Sunday, March 04, 2007
Would you like to be an experimental subject?
Check out the Visual Cognition Online Laboratory.
Why I am in Fort Lauderdale...
From my porch...
A friend of mine has suggested that I might increase the ratio of visual images to text in this blog, as well as lighten up its tone by throwing in more random bits from my local environment. As a nudge in that direction, here is a gentleman (or lady?) who cruised down the Middle River past the porch of my Fort Lauderdale condo the other day.
Friday, March 02, 2007
Attentional deficit overcome by fearful body language stimulus
Tamietto et al. report in J. Cog. Neurosci. the interesting observation that patients with right parietal lobe damage that makes them inattentive to their left visual field nevertheless notice fearful body-language stimuli in that field much more readily than neutral or happy body language. This demonstrates that despite pathological inattention and parietal damage, emotion- and action-related information in fearful body language may be extracted automatically, biasing attentional selection and visual awareness. Apparently a neural network in intact fronto-limbic and visual areas still mediates reorienting of attention and preparation for action upon perceiving fear in others.
Blog Categories:
attention/perception,
emotion,
fear/anxiety/stress,
unconscious
A neuroethics website
'Neuroethics' is the ethics of neuroscience, analogous to the term 'bioethics' which denotes the ethics of biomedical science more generally.
Neuroethics encompasses a wide array of ethical issues emerging from different branches of clinical neuroscience (neurology, psychiatry, psychopharmacology) and basic neuroscience (cognitive neuroscience, affective neuroscience).
These include ethical problems raised by advances in functional neuroimaging, brain implants and brain-machine interfaces and psychopharmacology as well as by our growing understanding of the neural bases of behavior, personality, consciousness, and states of spiritual transcendence.
Neuroethics.upenn.edu is a source of information on neuroethics, provided by Martha Farah of the Center for Cognitive Neuroscience at the University of Pennsylvania.
Thursday, March 01, 2007
Virtual Living - Is this scary, or what???
Here is a follow-up to a New York Times article by David Pogue on the Second Life phenomenon. I downloaded and tried the game, and soon fled in bored confusion (and fear). Because we can't hack it in real life, are we going to retreat to a virtual world?
From Pogue:
Second Life, as about 2 million people have already discovered, is a virtual world on the Internet. You're represented by a computer-generated character (an avatar) that can walk around, fly, teleport, or exchange typed comments with other people's characters. You can make yourself young and beautiful, equip yourself with fancy clothes, build a dream house by the water, or make the sun set on command. The average member spends four hours a day in Second Life.
One thing that makes Second Life different from other online 3-D games is its economy. People make stuff and sell it to each other: clothes, rockets, cars, new hairstyles. Second Life itself is free, but members nonetheless pay real money ($220 million a year) to buy these imaginary accessories.
From Pogue's interview with Phillip Rosedale, the CEO of Linden Lab, the company behind Second Life:
DP: Is there any worry about the whole isolation thing? First iPod earbuds, and now people substituting virtual interactions for real ones?
PR: Well I'll tell ya, the history of technology has, in the past 50 years, been to increasingly isolate us. We've gone from watching movies in a movie theater, to watching them as a family at home, to watching them alone on our iPod.
But actually I think there's a next wave of technology, of which Second Life is certainly a great example, where we are bringing people back together again into the same place to have these experiences.
The thing about Second Life that is so fascinating and different is not just that it's 3-D. There are always people to share that experience with, or to ask for help. Or to laugh at something with. And that experience is an innately human one that technology has deprived us of. I think many people use Second Life to have more friends, and more human contact, than they do in the real world.
DP: What's the hard part for the next phase?
PR: Well, we need to grow Second Life as fast as people want it to grow. And right now, that seems to be awfully fast. If you look at the number of people online at one time, that number has doubled in the last 90 days. Right now, the challenge is just scaling up the services, and the computers, and even the policies, and customer support.
Looking farther out, we have to really open it up so that a lot of people can work on it with us. [Linden Lab recently "open-sourced" the code of the Second Life program, in hopes that volunteers worldwide will comb through it for improvements.]
When you look at Second Life today, you may say, "I don't like the graphics." Or, you know, "It's clunky. It runs too slow." But you have to bear in mind that in just a few years, this is gonna look like walking into a movie screen. And that's just gonna be such an amazing thing.
Blog Categories:
futures,
social cognition,
technology