This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)

Wednesday, February 16, 2011

Skeuomorphs - why innovation is also a throwback

Joshua Brustein writes a nice piece on how innovation usually doffs an old hat, maintaining superfluous marks of its evolution (skeuomorphs - from the Greek words for tool and form) to ease people's comfort with the transition to the new.

Digital cameras produce a reassuringly retro but artificial shutter snap when you push the button to take a photograph; cellphones have keyboards with layouts originally meant to keep typewriters from jamming; and blue jeans have pockets that are a throwback to a time when watches dangled from chains...Designers in all fields are regularly confronted with versions of this choice: whether to incorporate cues to keep people grounded in what has come before, or scrap convention completely. In transportation, for instance, the power of steam engines was initially described in relation to that of horses, a practice that has continued to the present day. Automobile designers have incorporated visual cues suggesting carriages; for example, adding nonfunctional spokes on wheels...This tension is palpable in many efforts to create new digital media experiences. The Daily, Rupert Murdoch’s publication designed specifically for tablet computers, incorporates video and interactivity into what is essentially a newspaper. At the same time, it is designed to show up on a reader’s digital doorstep once a day, a concept that seems as old-fashioned as pocket watches when compared with Web sites that are updated continually...Apple, probably the best symbol of the march into a new digital era, also encourages designers to incorporate analog references in its devices. On the iPad, users enter appointments into a calendar that is encased in an on-screen leather ledger, scrawl notes on what looks like a legal pad and advance through digital books by swiping their fingers across the screen, prompting an animation that actually looks like a page being turned.
Sleep enhances memories relevant to the future.
Here is a fascinating bit of work from Wilhelm et al., which possibly explains why, in my first moments of starting to awaken, I notice that finger sequences of piano pieces I am studying to perform are playing in my head...
The brain encodes huge amounts of information, but only a small fraction is stored for a longer time. There is now compelling evidence that the long-term storage of memories preferentially occurs during sleep. However, the factors mediating the selectivity of sleep-associated memory consolidation are poorly understood. Here, we show that the mere expectancy that a memory will be used in a future test determines whether or not sleep significantly benefits consolidation of this memory. Human subjects learned declarative memories (word paired associates) before retention periods of sleep or wakefulness. Postlearning sleep compared with wakefulness produced a strong improvement at delayed retrieval only if the subjects had been informed about the retrieval test after the learning period. If they had not been informed, retrieval after retention sleep did not differ from that after the wake retention interval. Retention during the wake intervals was not affected by retrieval expectancy. Retrieval expectancy also enhanced sleep-associated consolidation of visuospatial (two-dimensional object location task) and procedural motor memories (finger sequence tapping). Subjects expecting the retrieval displayed a robust increase in slow oscillation activity and sleep spindle count during postlearning slow-wave sleep (SWS). Sleep-associated consolidation of declarative memory was strongly correlated to slow oscillation activity and spindle count, but only if the subjects expected the retrieval test. In conclusion, our work shows that sleep preferentially benefits consolidation of memories that are relevant for future behavior, presumably through a SWS-dependent reprocessing of these memories.
Tuesday, February 15, 2011
Foundations of religious belief
Judith Shulevitz reviews James Kugel's "In the Valley of the Shadow - on the Foundations of Religious Belief." The book arose from the author's experience of still being alive seven years after being told he would die of cancer within a few years. His points on the utility of religious belief (even if it is a cognitive error) remind me of last Friday's MindBlog post on the utility of the size-weight illusion in throwing. Here are a few clips from the review:
...the recent debates about religion — is it a force for good or for evil, intrinsically violent or intrinsically peaceful? — have on the whole been a bit “narrow.” Too many pundits, anthropologists and evolutionary biologists fail to imagine their way into the rich, elusive mental condition called “believing in God” or “being religious.” They dismiss it as a neurosis, a superstition or a mistake. An otherwise appealing evolutionary theory of religion, for instance, holds that God and the gods are ghostlike entities created by a “hyperactive agent detection device” in the brain — that is, a hair-trigger response to unusual stimuli that evolved to protect us from danger, but wound up making us mistakenly attribute intention and even divinity to things that have none.
Kugel asks whether it’s the skeptics who are being willfully blind to the ancient truths bundled into these apparent errors. Consider the band of prehistoric hunter-gatherers made aware of their fragility by the magnitude of what they were up against. “This little group was endlessly overshadowed by all that was outside of them, forever on the receiving end of whatever You — immanent in the great Outside all around — happened to be dishing out,” he reminds us. To call their brains “hyperactive” because they identify that “You” as a mindful agent, Kugel says, is “ludicrous.” The “great Outside” was nearly all-powerful: why shouldn’t it mean to make things happen? “On the contrary,” Kugel writes, “it would require some sort of extraordinarily twisted spirit to look up and not see You, Your hand gloved in cloud and sky, Your voice mingling with cricket song and crashing waves, doing all the things that impinged on the little band’s existence. You were practically everything, and You completely overwhelmed their own little reality.”
Believing in God, Kugel suggests — possibly being a tad ahistorical — originally meant aligning yourself with the force of the universe, of humbly opening yourself up to its grandeur, more than it meant asserting faith in a particular deity. Kugel reviews the literature on epilepsy and the “God spot,” the “verbal conceptual association area” where various lobes of the brain come together. When stimulated, as in epileptic seizures, it has been shown to lead to visions of God or at least a sense of what one researcher called “connection with an overwhelmingly powerful being.” You could say the God spot proves that religion is a matter of brain malfunction, Kugel observes. Or you could call the epileptic’s aura “a privileged moment, an opening of the mind to something it cannot normally perceive.”
To the religious — or at least to Kugel and his sources — religion is an experience more than a cosmology. “It is not God’s sovereignty over the entire universe that is at issue so much as his sovereignty over the cubic centimeter of space that sits just in front of our own noses,” he writes. “That is to say, religion is first of all about fitting into the world and fitting into one’s borders. There may indeed be something ‘mythic’ about it, but it pales before the mythic quality of our own clumsy, modern selves.”
Improving your cognitive toolkit - part III
Continuation of my abstracting of a few of the answers to the annual question at edge.org, "What scientific concept would improve everybody's cognitive toolkit?":
Sean Carroll - The Pointless Universe
Rudy Rucker - The World is Unpredictable
Sean Carroll - The Pointless Universe
Things happen because the laws of nature say they will — because they are the consequences of the state of the universe and the path of its evolution. Life on Earth doesn't arise in fulfillment of a grand scheme, but rather as a byproduct of the increase of entropy in an environment very far from equilibrium. Our impressive brains don't develop because life is guided toward greater levels of complexity and intelligence, but from the mechanical interactions between genes, organisms, and their surroundings.
None of which is to say that life is devoid of purpose and meaning. Only that these are things we create, not things we discover out there in the fundamental architecture of the world. The world keeps happening, in accordance with its rules; it's up to us to make sense of it and give it value.
Rudy Rucker - The World is Unpredictable
The media cast about for the proximate causes of life's windfalls and disasters. The public demands blocks against the bad and pipelines to the good. Legislators propose new regulations, fruitlessly dousing last year's fires, forever betting on yesterday's winning horses...A little-known truth: Every aspect of the world is fundamentally unpredictable. Computer scientists have long since proved this.
At a personal level, even if the world is as deterministic as a computer program, you still can't predict what you're going to do. This is because your prediction method would involve a mental simulation of you that produces its results slower than you. You can't think faster than you think. You can't stand on your own shoulders...It's a waste to chase the pipedream of a magical tiny theory that allows us to make quick and detailed calculations about the future. We can't predict and we can't control. To accept this can be a source of liberation and inner peace. We're part of the unfolding world, surfing the chaotic waves.
Monday, February 14, 2011
Anniversary of Deric's MindBlog
I just realized that Feb. 6 marked the start of year 6 of this blog. We've now clocked ~2,500 posts, and there are roughly 2,000 subscribers to the blog's RSS feed.
Last year's birthday notice indulged in a writing identity crisis which I will spare you from this year.
The Net Delusion
I've been meaning to point to Lee Siegel's review of Evgeny Morozov's new book "The Net Delusion - The Dark Side of Internet Freedom." A few clips:
Contrary to the “cyberutopians,” as he calls them, who consider the Internet a powerful tool of political emancipation, Morozov convincingly argues that, in freedom’s name, the Internet more often than not constricts or even abolishes freedom...He quotes the political blogger Andrew Sullivan, who proclaimed after protesters took to the streets in Tehran that “the revolution will be Twittered.” The revolution never happened, and the futilely tweeting protesters were broken with an iron hand... What was broadcast on Twitter and elsewhere was repression of the revolution. The Iranian regime used the Web to identify photographs of protesters; to find out their personal information and whereabouts (through Facebook, naturally); to distribute propagandistic videos; and to text the population into counterrevolutionary paranoia.
As Morozov points out, don’t expect corporations like Google to liberate anyone anytime soon. Google did business in China for four years before economic conditions and censorship demands — not human rights concerns — forced it out. And it is telling that both Twitter and Facebook have refused to join the Global Network Initiative, a pact that Morozov describes as “an industrywide pledge . . . to behave in accordance with the laws and standards covering the right to freedom of expression and privacy embedded in internationally recognized documents like the Universal Declaration of Human Rights.”
Morozov urges the cyberutopians to open their eyes to the fact that the asocial pursuit of profit is what drives social media. “Not surprisingly,” he writes, “the dangerous fascination with solving previously intractable social problems with the help of technology allows vested interests to disguise what essentially amounts to advertising for their commercial products in the language of freedom and liberation.” In 2007, when he was at the State Department, Jared Cohen wrote with tragic wrongheadedness that “the Internet is a place where Iranian youth can . . . say anything they want as they operate free from the grips of the police-state apparatus.” Thanks to the exciting new technology, many of those freely texting Iranian youths are in prison or dead. Cohen himself now works for Google as the director of “Google Ideas.”
Friday, February 11, 2011
The size weight illusion is not an illusion for throwing.
Zhu and Bingham offer an interesting bit of work that suggests that human throwing and speaking abilities developed in a manner that is consistent with their evolutionary history (Accurate long distance throwing ability is unique to humans):
Long-distance throwing is uniquely human and enabled Homo sapiens to survive and even thrive during the ice ages. The precise motoric timing required relates throwing and speech abilities as dependent on the same uniquely human brain structures. Evidence from studies of brain evolution is consistent with this understanding of the evolution and success of H. sapiens. Recent theories of language development find readiness to develop language capabilities in perceptual biases that help generate ability to detect relevant higher order acoustic units that underlie speech. Might human throwing capabilities exhibit similar forms of readiness? Recently, human perception of optimal objects for long-distance throwing was found to exhibit a size–weight relation similar to the size–weight illusion; greater weights were picked for larger objects and were thrown the farthest. The size–weight illusion is: lift two objects of equal mass but different size, the larger is misperceived to be less heavy than the smaller. The illusion is reliable and robust. It persists when people know the masses are equal and handle objects properly. Children less than 2 years of age exhibit it. These findings suggest the illusion is intrinsic to humans. Here we show that perception of heaviness (including the illusion) and perception of optimal objects for throwing are equivalent. Thus, the illusion is functional, not a misperception: optimal objects for throwing are picked as having a particular heaviness. The best heaviness is learned while acquiring throwing skill. We suggest that the illusion is a perceptual bias that reflects readiness to acquire fully functional throwing ability. This unites human throwing and speaking abilities in development in a manner that is consistent with the evolutionary history.
Thursday, February 10, 2011
Preverbal infants mentally represent social dominance.
Interesting observations from Thomsen et al:
Human infants face the formidable challenge of learning the structure of their social environment. Previous research indicates that infants have early-developing representations of intentional agents, and of cooperative social interactions, that help meet that challenge. Here we report five studies with 144 infant participants showing that 10- to 13-month-old, but not 8-month-old, infants recognize when two novel agents have conflicting goals, and that they use the agents’ relative size to predict the outcome of the very first dominance contests between them. These results suggest that preverbal infants mentally represent social dominance and use a cue that covaries with it phylogenetically, and marks it metaphorically across human cultures and languages, to predict which of two agents is likely to prevail in a conflict of goals.
Wednesday, February 09, 2011
Bias within - politics of the professoriat
In the Tuesday Science section of the NY Times, Tierney writes a fascinating article on social psychologists, the folks who do research on racial prejudice, homophobia, sexism, stereotype threat and unconscious bias against minorities. He discusses a talk given by Jonathan Haidt at their national conference. Haidt:
...polled his audience at the San Antonio Convention Center, starting by asking how many considered themselves politically liberal. A sea of hands appeared, and Dr. Haidt estimated that liberals made up 80 percent of the 1,000 psychologists in the ballroom. When he asked for centrists and libertarians, he spotted fewer than three dozen hands. And then, when he asked for conservatives, he counted a grand total of three.

And one further clip from Tierney's article (which you should read):
“This is a statistically impossible lack of diversity,” Dr. Haidt concluded, noting polls showing that 40 percent of Americans are conservative and 20 percent are liberal. In his speech and in an interview, Dr. Haidt argued that social psychologists are a “tribal-moral community” united by “sacred values” that hinder research and damage their credibility — and blind them to the hostile climate they’ve created for non-liberals.
“Anywhere in the world that social psychologists see women or minorities underrepresented by a factor of two or three, our minds jump to discrimination as the explanation,” said Dr. Haidt, who called himself a longtime liberal turned centrist. “But when we find out that conservatives are underrepresented among us by a factor of more than 100, suddenly everyone finds it quite easy to generate alternate explanations.”
“Moynihan was shunned by many of his colleagues at Harvard as racist,” Dr. Haidt said. “Open-minded inquiry into the problems of the black family was shut down for decades, precisely the decades in which it was most urgently needed. Only in the last few years have liberal sociologists begun to acknowledge that Moynihan was right all along.”
Similarly, Larry Summers, then president of Harvard, was ostracized in 2005 for wondering publicly whether the preponderance of male professors in some top math and science departments might be due partly to the larger variance in I.Q. scores among men (meaning there are more men at the very high and very low ends). “This was not a permissible hypothesis,” Dr. Haidt said. “It blamed the victims rather than the powerful. The outrage ultimately led to his resignation. We psychologists should have been outraged by the outrage. We should have defended his right to think freely.”
Instead, the taboo against discussing sex differences was reinforced, so universities and the National Science Foundation went on spending tens of millions of dollars on research and programs based on the assumption that female scientists faced discrimination and various forms of unconscious bias. But that assumption has been repeatedly contradicted, most recently in a study published Monday in the Proceedings of the National Academy of Sciences by two Cornell psychologists, Stephen J. Ceci and Wendy M. Williams. After reviewing two decades of research, they report that a woman in academic science typically fares as well as, if not better than, a comparable man when it comes to being interviewed, hired, promoted, financed and published.
“Thus,” they conclude, “the ongoing focus on sex discrimination in reviewing, interviewing and hiring represents costly, misplaced effort. Society is engaged in the present in solving problems of the past.” Instead of presuming discrimination in science or expecting the sexes to show equal interest in every discipline, the Cornell researchers say, universities should make it easier for women in any field to combine scholarship with family responsibilities.
Walking improves your memory.
Erickson et al. show that exercise training increases the size of our hippocampus and improves memory. They divided 120 sedentary healthy adults in their mid-60s into two groups. From a summary:
One group walked around a track three times a week, building up to 40 minutes at a stretch; the other did a variety of less aerobic exercises, including yoga and resistance training with bands. After a year, brain scans showed that among the walkers, the hippocampus had increased in volume by about 2 percent on average; in the others, it had declined by about 1.4 percent. Such a decline is normal in older adults.

Here is the abstract:
The hippocampus shrinks in late adulthood, leading to impaired memory and increased risk for dementia. Hippocampal and medial temporal lobe volumes are larger in higher-fit adults, and physical activity training increases hippocampal perfusion, but the extent to which aerobic exercise training can modify hippocampal volume in late adulthood remains unknown. Here we show, in a randomized controlled trial with 120 older adults, that aerobic exercise training increases the size of the anterior hippocampus, leading to improvements in spatial memory. Exercise training increased hippocampal volume by 2%, effectively reversing age-related loss in volume by 1 to 2 y. We also demonstrate that increased hippocampal volume is associated with greater serum levels of BDNF, a mediator of neurogenesis in the dentate gyrus. Hippocampal volume declined in the control group, but higher preintervention fitness partially attenuated the decline, suggesting that fitness protects against volume loss. Caudate nucleus and thalamus volumes were unaffected by the intervention. These theoretically important findings indicate that aerobic exercise training is effective at reversing hippocampal volume loss in late adulthood, which is accompanied by improved memory function.
Tuesday, February 08, 2011
Did Frédéric Chopin have temporal lobe epilepsy?
I'm working up the incredible Chopin Fantasy in F minor for a spring concert, and am always eager to learn more about this remarkable composer (the technical requirements of his music conform to the natural musculature of the hands and arms in a way that no previous composer's music had...Bach and Beethoven sometimes make very unnatural and contortionistic demands). Chopin was viewed as a tortured artist because at several performances he suddenly stopped in the middle of a piece and left the stage:
"I was about to play the [Funeral] March when, suddenly, I saw emerging from the half-open case of my piano those cursed creatures that had appeared to me on a lugubrious night at the Carthusian monastery. I had to leave for a while in order to recover myself, and after that I continued playing without saying a word."An article by by Sara Reardon points to a paper by radiologist Manuel Vásquez Caruncho of Xeral-Calde Hospital in Lugo, Spain and neurologist Francisco Brañas Fernández that
...draws heavily from descriptions of Chopin's behavior by his friends and pupils and from his own writings. Their vivid recollections report finding the composer late at night, "pale in front of the piano, with wild eyes and his hair on end," unable to recognize them for short periods. He spoke often of a "cohort of phantoms" that haunted him, of seeing his friends as the walking dead, and feeling "like steam."
Only a handful of neurological disorders produce the phantasmagoria that tormented Chopin, who didn't abuse drugs or alcohol. The visions he described, such as demons crawling out of his piano, are now known as Lilliputian hallucinations: detailed visions of people or objects that are much smaller than they are in life. The authors rule out schizophrenia and other common psychoses because Chopin's hallucinations were visual, not auditory, and because he lacked other telltale symptoms such as eye problems or migraines. His short hallucinatory episodes are a hallmark of temporal lobe epilepsy...
Monday, February 07, 2011
Improving your cognitive toolkit - part II
This post continues my abstracting of some of my favorite responses to the Edge.org annual question "What scientific concept would improve everybody's cognitive toolkit?"
Martin Seligman - PERMA
Is global well being possible?...The elements of well being must be exclusive, measurable independently of each other, and ideally, exhaustive. I believe there are five such elements and they have a handy acronym, PERMA, a shorthand abstraction for the enabling conditions of life:
P Positive Emotion
E Engagement
R Positive Relationships
M Meaning and Purpose
A Accomplishment
There has been forward movement in the measurement of these over the last decade. Taken together PERMA forms a more comprehensive index of well being than "life satisfaction" and it allows for the combining of objective and subjective indicators. PERMA can index the well being of individuals, of corporations, and of cities. The United Kingdom has now undertaken the measurement of well being for the nation and as one criterion — in addition to Gross Domestic Product — of the success of its public policy.
Steven Pinker - Positive-Sum Games

...when people become consciously aware of the game-theoretic structure of their interaction (that is, whether it is positive-, negative-, or zero-sum), they can make choices that bring them valuable outcomes — like safety, harmony, and prosperity — without their having to become more virtuous, noble, or pure...Some examples. Squabbling colleagues or relatives agree to swallow their pride, take their losses, or lump it to enjoy the resulting comity rather than absorbing the costs of continuous bickering in hopes of prevailing in a battle of wills. Two parties in a negotiation split the difference in their initial bargaining positions to "get to yes."
Has an increasing awareness of the zero- or nonzero-sumness of interactions in the decades since 1950 (whether referred to in those terms or not) actually led to increased peace and prosperity in the world? It's not implausible. International trade and membership in international organizations has soared in the decades that game-theoretic thinking has infiltrated popular discourse. And perhaps not coincidentally, the developed world has seen both spectacular economic growth and a historically unprecedented decline in several forms of institutionalized violence, such as war between great powers, war between wealthy states, genocides, and deadly ethnic riots. Since the 1990s these gifts have started to accrue to the developing world as well, in part because they have switched their foundational ideologies from ones that glorify zero-sum class and national struggle to ones that glorify positive-sum market cooperation. (All these claims can be documented from the literature in international studies.)
The enriching and pacifying effects of participation in positive-sum games long antedate the contemporary awareness of the concept. The biologists John Maynard Smith and Eörs Szathmáry have argued that an evolutionary dynamic which creates positive-sum games drove the major transitions in the history of life: the emergence of genes, chromosomes, bacteria, cells with nuclei, organisms, sexually reproducing organisms, and animal societies. In each transition, biological agents entered into larger wholes in which they specialized, exchanged benefits, and developed safeguards to prevent one from exploiting the rest to the detriment of the whole. The journalist Robert Wright sketched a similar arc in his book Nonzero and extended it to the deep history of human societies. An explicit recognition among literate people of the shorthand abstraction "positive-sum game" and its relatives may be extending a process in the world of human choices that has been operating in the natural world for billions of years.
Friday, February 04, 2011
Skilled object recognition uses both our left and right hemispheres
Bilalić et al. make the interesting observation that skilled chess players, while no faster or better than amateurs at geometric object recognition (which mainly engages the left hemisphere), are more rapid than amateurs at identifying chess positions, engaging additional areas of their right hemisphere as they do so. This expanded use of brain areas requires extensive training. (When the subjects were shown the chess diagrams, the novices looked directly at the pieces to recognize them, while the experts looked at the middle of the boards and took everything in with their peripheral vision.) (Wan et al. report a similar study in Japan examining experts in shogi, a game similar to chess. It highlights further brain areas involved in expertise.) Here is the Bilalić et al. abstract (dorsal means along the upper part of the brain, ventral is lower):
Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action related stream, as well as left infero-temporal areas along the ventral, object related stream are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. Experts' advantage was domain-specific as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) related exclusively the areas along the dorsal stream to chess specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts homologous areas on the right hemisphere were also engaged in chess specific object recognition. Based on these results, we discuss whether skilled object recognition does not only involve a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes which engage additional brain areas.
Thursday, February 03, 2011
The biology of morality
I have already pointed to a TED talk by Sam Harris, and thought I would pass on a few clips from a review of his related book, "The Moral Landscape - How Science Can Determine Human Values." On Harris:
...his dispensation is that “Faith, if it is ever right about anything, is right by accident.” In applying reason to questions of morality, Harris claims that we can define morality only as it relates to the well-being of conscious organisms and that such well-being is completely measurable using the methods of neurobiology. This suggests to him that any action can be clearly classified as moral (increasing well-being) or immoral (decreasing well-being) without ambiguity. However, it doesn't mean that there is only one answer to a question of morality. He contends that “the existence of multiple peaks on the moral landscape does not make them any less real or worthy of discovery. Nor would it make the difference between being on a peak and being stuck deep in a valley any less clear or consequential.” But Harris firmly disagrees with the moral relativist views that there is no clearly defined morality that cuts across different societies and that therefore all views of morality are equally meritorious. He writes, “Multiculturalism, moral relativism, political correctness, tolerance even of intolerance—these are the familiar consequences of separating facts and values on the left.” “My goal,” he states, “is to convince you that human knowledge and human values can no longer be kept apart.”
Harris isn't choosy when it comes to vilifying religions. He notes the willingness of many to ignore genocide or cases of sexual abuse within their churches while taking strong actions against individuals who perform abortions (or refuse to prohibit them). He also draws from history examples of undeniably immoral choices in the name of religion. Harris criticizes scientists for persisting in their faith and for failing to confront head-on a society that he thinks is mired in superstition.
Harris thinks too many scientists have compromised on principles. “Many of our secular critics worry that if we oblige people to choose between reason and faith, they will choose faith and cease to support scientific research.” Even the journal Nature upholds the idea of nonoverlapping magisteria of Gould. Harris complains, “It is one thing to be told that the pope is a peerless champion of reason and that his opposition to embryonic stem-cell research is both morally principled and completely uncontaminated by religious dogmatism; it is quite another to be told this by a Stanford physician who sits on the President's Council on Bioethics.”
One might conclude that although at one time the best way to define and enforce moral behavior was through revealed faith, as science and reason advance, we can chip away at the old edifice and build anew. Stories of a young-Earth creation now look rather untenable, but in the past they might have been the only way to instill awe and teach a new and meaningful moral code. Rather than nonoverlapping magisteria, the domains of science and religion are intermingling all the time. The Moral Landscape may represent a new beach-head in this quest.
Wednesday, February 02, 2011
Meanwhile...back in Wisconsin
As I sit at the keyboard in my Fort Lauderdale snowbird condo looking out the open patio door, gentle breeze, 72 degrees Fahrenheit...
My younger partner, still in the working world, emails me an iPhone picture he just took of our rural snowbound Middleton, Wisconsin home. The blizzard has shut down all commercial and educational facilities.
What would improve your cognitive toolkit?
My first MindBlog post in 2006 was a description of answers given to the annual question that Edge.org poses each year to prominent public intellectuals. The question for 2011 is "What scientific concept would improve everybody's cognitive toolkit?"
Howard Gardner - Try to disprove your viewpoint.
Christian Keysers - Avoid the mirror fallacy
George Lakoff - Be aware of the conceptual metaphors you are using.
A "scientific concept" may come from philosophy, logic, economics, jurisprudence, or other analytic enterprises, as long as it is a rigorous conceptual tool that may be summed up succinctly (or "in a phrase") but has broad application to understanding the world...James Flynn has defined "shorthand abstractions" (or "SHA's") as concepts drawn from science that have become part of the language and make people smarter by providing widely applicable templates ("market", "placebo", "random sample," "naturalistic fallacy," are a few of his examples). His idea is that the abstraction is available as a single cognitive chunk which can be used as an element in thinking and debate.I'm going to give brief sketches of a few responses that I found most interesting. I try to edit the author's point to a single declarative phrase, the 'single cognitive chunk' requirement suggested above (I'm surprised that in most cases the authors didn't do this more effectively). I'll list a few in this post, and as I have time to continue reading through the 164 contributions, perhaps do some further posts...
Howard Gardner - Try to disprove your viewpoint.
"If American citizens, or, for that matter, citizens anywhere were motivated to decribe the conditions under which they would relinquish their beliefs, they would begin to think scientifically. And if they admitted that empirical evidence would not change their minds, then at least they'd have indicated that their views have a religious or an ideological, rather than a scientific basis.
Christian Keysers - Avoid the mirror fallacy
...our brain mirrors the states of the people we observe...When the person we see has the exact same body and brain as we do, mirroring would tell us what the other feels. Whenever the other person is different in some relevant way, however, mirroring will mislead us...The world is full of such fallacies: we feel dolphins are happy just because their face resembles ours while we smile or we attribute pain to robots in sci-fi movies.
George Lakoff - Be aware of the conceptual metaphors you are using.
All concepts are physical brain circuits deriving their meaning via neural cascades that terminate in linkage to the body. That is how embodied cognition arises...Primary metaphors are brain mappings linking disparate brain regions, each tied to the body in a different way. For example, More Is Up (as in "prices rose") links a region coordinating quantity to another coordinating verticality...Complex conceptual metaphors arise via neural bindings, both across metaphors and from a given metaphor to a conceptual frame circuit. Metaphorical reasoning arises when source domain inference structures are used for target domain reasoning via neural mappings... A central consequence is the huge range of concepts that use metaphor cannot be defined relative to the outside world, but are instead embodied via interactions of the body and brain with the world...Every time you think of paying moral debts, or getting bogged down on a project, or losing time, or being at a crossroads in a relationship, you are unconsciously activating a conceptual metaphor circuit in your brain, reasoning using it, and quite possibly making decisions and living your life on the basis of your metaphors. And that's just normal. There's no way around it!..But it can do harm if you are unaware of it.
Tuesday, February 01, 2011
Modern virtue - the religion of physical fitness
As I have morphed during my life from a devout teenage Christian church organist into a crusty old materialistic atheist, I have found a new church in the cult of physical exercise and fitness. Virtue and badness can be simply measured by whether I have worked out today. Until I read this fascinating tribute to Jack LaLanne, who recently died at the age of 96, I had not realized what a modern invention my church is, growing from the opening of his first gym in Oakland, CA, in 1936:
With “The Jack LaLanne Show,” he also had a hand in the spread — a contagion, really — of television programs exhorting viewers to rise up from their La-Z-Boys...An army of spandex missionaries was unleashed....What he left behind when he died last week...was not only a sweaty culture of relentless crunching and spinning but also the notion that fitness equals character, and that self-actualization begins with the self-discipline to get and stay in shape. In the post-LaLanne landscape, it’s not the eyes but the abdominals that are windows to the soul...A “new you” usually means a trimmer, tauter version, not someone who has learned to speak Mandarin or picked up woodworking skills...There’s a bullying strain to the modern fitness ethos, a blurred line between cheerleading and hectoring...When exercise comes wrapped in value judgments, does it wind up entangled in an anxiety that threatens the very resolve to get fit? As Mr. LaLanne was siring new methods for shaping up, he was fathering something else, too: a potent, and in some cases immobilizing, strain of contemporary guilt.
Monday, January 31, 2011
Modern conversation.
As relevant to this morning's previous post, I had to pass on today's Doonesbury cartoon, having noted during my kid's New Year's visit how our conversations transitioned seamlessly from periods of actual talking to tapping on our iPhones and back to talking again, with no requirement that one excuse oneself from actual talk; the accepted procedure was just to suddenly divert attention and start tapping on the new prosthetic device. This clearly is how the world of 20- and 30-somethings now works.
Is our technology replacing our identity?
After publishing an optimistic book about the internet in 1995 ("Life on the Screen"), MIT social science professor Sherry Turkle has now written a darker tome, "Alone Together," worrying that we are moving more of our lives online and away from real physical human contacts. The first half of the book is about social robots. From Jonah Lehrer's review:
“Dependence on a robot presents itself as risk free,” Turkle writes. “But when one becomes accustomed to ‘companionship’ without demands, life with people may seem overwhelming.” A blind date can be a fraught proposition when there’s a robot at home that knows exactly what we need. And all she needs is a power outlet...The reason robots are such a slippery slope, according to Turkle, is that they take advantage of a deeply human instinct. When it comes to the perception of other minds, we are extremely gullible, bestowing agency on even the most inanimate of objects.

The second part of the book deals with Turkle's concern that the internet is becoming our way of being with other people, in a style that turns them into objects. (Why did I just text my friend instead of actually calling and talking with him?)
...the online world is no longer a space of freedom and reinvention. Instead, we have been trapped by Facebook profiles and Google cache, in which verbs like “delete” and “erase” are mostly metaphorical...We aren’t “happy” anymore: we’re simply a semicolon followed by a parenthesis. Instead of talking on the phone, we send a text; instead of writing wistful letters, we edit our Tumblr blog...these obvious objections shouldn’t obscure the real mystery: If the Internet is such an alienating force, then why can’t we escape it? If Facebook is so insufferable, then why do hundreds of millions of people check their page every day?

My own experience is that my participation in social web sites has broadened my world of real-world contacts and friends, as noted by Lehrer:
...despite our misgivings about the Internet, its effects on real-life relationships seem mostly positive, if minor. A 2007 study at Michigan State University involving 800 undergraduates, for instance, found that Facebook users had more social capital than abstainers, and that the site increased measures of “psychological well-being,” especially in those suffering from low self-esteem. Other studies have found that frequent blogging leads to increased levels of social support and integration and may serve as “the core of building intimate relationships.” One recurring theme to emerge from much of this research is that most people, at least so far, are primarily using the online world to enhance their offline relationships, not supplant them.
Friday, January 28, 2011
Our social brain - what are smiles for?
In this past Tuesday's NYTimes science section, Carl Zimmer offers a brief review of some of the many social functions served by our smiling at each other (signaling happiness, social bonding, embarrassment, dominance, etc.). He focuses on the work of Paula Niedenthal. I found this particular bit interesting:
In one study, she and her colleagues are testing the idea that mimicry lets people recognize authentic smiles. They showed pictures of smiling people to a group of students. Some of the smiles were genuine and others were fake. The students could readily tell the difference between them....Then Dr. Niedenthal and her colleagues asked the students to place a pencil between their lips. This simple action engaged muscles that could otherwise produce a smile. Unable to mimic the faces they saw, the students had a much harder time telling which smiles were real and which were fake.
The scientists then ran a variation on the experiment on another group of students. They showed the same faces to the second group, but had them imagine the smiling faces belonged to salesclerks in a shoe store. In some cases the salesclerks had just sold the students a pair of shoes — in which they might well have a genuine smile of satisfaction. In other trials, they imagined that the salesclerks were trying to sell them a pair of shoes — in which case they might be trying to woo the customer with a fake smile...In reality, the scientists use a combination of real and fake smiles for both groups of salesclerks. When the students were free to mimic the smiles, their judgments were not affected by what the salesclerk was doing...But if the students put a pencil in their mouth, they could no longer rely on their mimicry. Instead, they tended to believe that the salesclerks who were trying to sell them shoes were faking their smiles — even when their smiles were genuine. Likewise, they tended to say that the salesclerks who had finished the sale were smiling for real, even when they weren’t. In other words, they were forced to rely on the circumstances of the smile, rather than the smile itself.