Friday, March 06, 2009

Why pay university tuition?

...when you can get an array of astounding courses from places like Academic Earth, with the videos of the lectures shown in your web browser. I recommend the introductory Psychology course offered by Paul Bloom at Yale.

Thursday, March 05, 2009

Our genes influence our social networks

Jackson reviews an analysis by Fowler et al. that suggests that genetic traits influence the social behavior of individuals:
...Fowler et al. examined the social network characteristics of 1,110 twins from an Adolescent Health Dataset, which is based on interviews of high school students. Presuming that the social environment that twins share is not influenced by whether they are monozygotic or dizygotic, if network characteristics are significantly more correlated among monozygotic twins than among dizygotic twins, then there is evidence for a genetic role in network formation.

The network characteristics that Fowler et al. investigate are: in-degree (how many students name a given student as a friend), out-degree (how many students a given student names as friends), transitivity (if A and B are friends, and B and C are friends, what is the likelihood that A and C are friends), and betweenness centrality (the fraction of shortest paths between other pairs of students that a given student lies on). Their statistical analysis assumes that the variation in a network characteristic can be additively separated into a component that is genetic, a component caused by the environment shared with a twin, and a component caused by the environment not shared with a twin. The covariance between monozygotic twins is then the variance caused by the common environment plus the variance caused by genetic factors, whereas the covariance between dizygotic twins is the variance caused by the common environment plus half of the variance caused by genetic factors. This formulation allows one to solve for the percentage of variation in a given network characteristic that is attributable to each of the genetic, common environment, and unshared environment components. The figure shows that almost half of the variation in transitivity and in-degree is genetically attributable, and more than a quarter of the variation in betweenness centrality is genetically attributable, but the genetic component of the out-degree variation is too small to be statistically significant. The common environment is statistically insignificant in all cases.
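For readers who want the arithmetic, here is a minimal sketch of that variance decomposition (essentially Falconer's method) in Python. The correlations plugged in at the bottom are invented for illustration, not Fowler et al.'s actual estimates.

```python
# A minimal sketch of the twin variance decomposition described above,
# assuming the additive model in the text. Example numbers are hypothetical.

def ace_decomposition(r_mz, r_dz):
    """Split trait variance into additive genetic (A), common environment (C),
    and unshared environment (E) shares from twin correlations.

    Assumptions from the text:
        r_mz = A + C        (MZ twins share all genes plus the common environment)
        r_dz = A / 2 + C    (DZ twins share half their genes plus the common environment)
    """
    a = 2 * (r_mz - r_dz)   # heritable share of the variance
    c = 2 * r_dz - r_mz     # common-environment share
    e = 1 - r_mz            # remainder: unshared environment (plus measurement error)
    return {"genetic": a, "common_env": c, "unshared_env": e}

# Hypothetical example: if MZ twins correlated 0.45 and DZ twins 0.25 on in-degree,
# roughly 40% of the variance would be attributed to genes.
print(ace_decomposition(r_mz=0.45, r_dz=0.25))
```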


Fowler et al. tried a number of network models to fit the data and found that the only one that generated a relationship between genetics and transitivity was an “Attract and Introduce” model built on two assumptions. First, some individuals are inherently more attractive than others, whether physically or otherwise, so they receive more friendship nominations. Second, some individuals are inherently more inclined to introduce new friends to existing friends (and hence such individuals will indirectly enhance their own transitivity).
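Here is a toy Python sketch of how an "Attract and Introduce" process of this general kind might be simulated. It is only meant to convey the two assumptions above; the traits, weights, and parameters are invented placeholders, not Fowler et al.'s model specification.

```python
# A heavily simplified, hypothetical simulation in the spirit of "Attract and
# Introduce": people inherit an attractiveness trait (drives incoming
# nominations) and an introducer trait (drives triangle closure / transitivity).
import random

def attract_and_introduce(n_people=100, n_nominations=5, seed=0):
    rng = random.Random(seed)
    attract = [rng.random() for _ in range(n_people)]    # how likely others are to pick you
    introduce = [rng.random() for _ in range(n_people)]  # how likely you are to introduce friends
    friends = {i: set() for i in range(n_people)}

    # Step 1 ("attract"): each person nominates friends, weighted by attractiveness.
    for i in range(n_people):
        candidates = [j for j in range(n_people) if j != i]
        chosen = rng.choices(candidates,
                             weights=[attract[j] for j in candidates],
                             k=n_nominations)
        for j in chosen:
            friends[i].add(j)
            friends[j].add(i)

    # Step 2 ("introduce"): introducers close triangles among their own friends.
    for i in range(n_people):
        flist = list(friends[i])
        for a in flist:
            for b in flist:
                if a < b and rng.random() < introduce[i]:
                    friends[a].add(b)
                    friends[b].add(a)
    return friends

network = attract_and_introduce()
print(sum(len(v) for v in network.values()) // 2, "ties formed")
```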

Brain correlates of musical improvisation.

Berkowitz and Ansari report an fMRI study of the brains of trained pianists while they are improvising. To get control conditions for comparisons they designed a series of four activities. In the two general types of tasks, they had subjects either improvise melodies or play pre-learned patterns. Comparing brain activity in these two situations allowed them to focus on melodic improvisation. Subjects did each of these two general tasks either with or without a metronome. When there was no metronome marking time, subjects improvised their own rhythms. Comparing conditions with and without metronome allowed them to look at rhythmic improvisation. A key point is that when the subjects played patterns (instead of improvised melodies), they could choose to play them in any order. Thus there was still some spontaneity in decision making, but the choices were more limited than during improvisation.

The authors observed an overlap between melodic improvisation and rhythmic improvisation in three areas of the brain: the dorsal premotor cortex (dPMC), the anterior cingulate (ACC), and the inferior frontal gyrus/ventral premotor cortex (IFG/vPMC). From a summary of the work by Bannatyne:
“The dPMC takes information about where the body is in space, makes a motor plan, and sends it to the motor cortex to execute the plan. The fact [that] it’s involved in improvisation is not surprising, since it is a motor activity. The ACC is a part of the brain that appears to be involved in conflict monitoring — when you’re trying to sort out two conflicting possibilities, like when you try to read the word BLUE when it’s printed in the color red. It’s involved with decision making, which also makes sense — improvisation is decision making, deciding what to play and how to play it.” The IFG/vPMC is perhaps one of the most interesting findings of their study. “This area is known to be involved when people speak and understand language. It’s also active when people hear and understand music. What we’ve shown is that it’s involved when people create music.”

Improvising, from a neurobiological perspective, involves generating, selecting, and executing musical-motor sequences, something that wouldn’t surprise musicians. But in terms of brain research, it’s a new piece of information.

Wednesday, March 04, 2009

Erasing fear responses and preventing the return of fear.

Kindt et al. demonstrate an interesting effect of a beta-blocker, one that might become part of clinical practice before long. They found that a conditioned fear response can be weakened by disrupting the reconsolidation of the fear memory with propranolol, and that this disruption prevents the return of fear. While propranolol disrupts the reconsolidation of the fear memory, it does not disrupt declarative memory (recall of the facts of the fear-inducing event). The abstract:
Animal studies have shown that fear memories can change when recalled, a process referred to as reconsolidation. We found that oral administration of the beta-adrenergic receptor antagonist propranolol before memory reactivation in humans erased the behavioral expression of the fear memory 24 h later and prevented the return of fear. Disrupting the reconsolidation of fear memory opens up new avenues for providing a long-term cure for patients with emotional disorders.
Some details:
The conditioned fear response was measured as potentiation of the eyeblink startle reflex to a loud noise (40 ms, 104 dB) by electromyography of the right orbicularis oculi muscle. Stronger startle responses to the loud noise during the fear-conditioned stimulus (CS1+), as compared with the control stimulus (CS2-), reflect the fearful state of the participant elicited by CS1+. Startle potentiation taps directly into the amygdala, and fear-conditioning procedures yield highly reliable and robust startle potentiation.


Figure. (a–f) Mean startle potentiation to the fear-conditioned stimulus (CS1), the control stimulus (CS2) and noise-alone (NA) trials (left) and mean expectancy scores of the unconditioned stimulus to CS1 and CS2 trials (right) during acquisition (trials 1–8), extinction (trials 1–10) and test (trials 1–5) for the placebo (n = 20; a,b), propranolol reactivation (n = 20; c,d) and propranolol without reactivation (n = 20; e,f) groups. CS1+ refers to the fear-conditioned stimulus during acquisition, CS1- refers to the fear-conditioned stimulus during extinction and test, CS1-R refers to the reactivation of the fear-conditioned stimulus, and CS2- refers to the control stimulus during all phases of the experiment. Error bars represent s.e.m.

Transcendence from Neuroscience

Clip from a brief essay by Garreau:
....the new vision of transcendence coming out of neuroscience. It’s long been observed that intelligent organisms require love to develop or even just to survive. Not coincidentally, we can readily identify brain functions that allow and require us to be deeply relational with others. There are also aspects of the brain that can be shown to equip us to experience elevated moments when we transcend boundaries of self. What happens as the implications of all this research start suggesting that particular religions are just cultural artifacts built on top of universal human physical traits?

Genetic determinants of financial risk taking

Here is an interesting bit from Kuhnen and Chiao, although I'm surprised that the reviewers let them get away with using the word 'determinants' rather than 'correlates':
Individuals vary in their willingness to take financial risks. Here we show that variants of two genes that regulate dopamine and serotonin neurotransmission and have been previously linked to emotional behavior, anxiety and addiction (5-HTTLPR and DRD4) are significant determinants of risk taking in investment decisions. We find that the 5-HTTLPR s/s allele carriers take 28% less risk than those carrying the s/l or l/l alleles of the gene. DRD4 7-repeat allele carriers take 25% more risk than individuals without the 7-repeat allele. These findings contribute to the emerging literature on the genetic determinants of economic behavior.

Tuesday, March 03, 2009

The gourmet palate - an exercise in hedonistic psychology

John Bohannon does a humorous piece in the Feb. 20 issue of Science:
What did you do on New Year's Eve? I watched my friends eat dog food. Throughout the last night of 2008, I stood in a makeshift laboratory in the corner of a packed Brooklyn house party. I presented people with bowls of paté--labeled A through E--and a pile of crackers. I explained that four of the bowls contained human food, including expensive luxury patés. One was canned dog food that had been pulsed in a food processor, giving it the same consistency as that of paté. My open-minded friends looked thoughtfully into the middle distance as they munched on mouthfuls of each, jotted down their assessment on data sheets, and then drifted back into the party. As the data rolled in, my eyes grew wide with amazement. Nobody was guessing correctly which was the dog food.

...The five samples covered a wide price range: two expensive liver patés (duck and chicken), two cheap imitation patés (puréed liverwurst and Spam), and the ultimate bargain (dog food). My subjects were hopeless at guessing which paté was dog food. But the answer was literally on the tip of their tongues. Although only one in six people correctly guessed that dish C contained the dog food, almost 75% rated it last in terms of taste. People significantly loathed the dog food (Newell and MacFarlane multiple comparison, p less than 0.1), and that did not correlate with relative sobriety. To cap it off, the average taste rankings of the five spreads exactly matched their relative prices.
Perhaps this result is not surprising, given that numerous blind taste tests involving hundreds of people have shown no correlation between the price of wines costing from $1.50 to $150 and their reported taste.

Thought for the day - the Twitter Bubble

I am incredulous that so many people seem to want to share the ongoing details of their lives via Twitter and Facebook. Do I really care to know that friend X is about to brush his teeth and go to bed? Alessandra Stanley writes a humorous piece on this phenomenon. Some clips:
Left alone in a cage with a mountain of cocaine, a lab rat will gorge itself to death. Caught up in a housing bubble, bankers will keep selling mortgage-backed securities — and amassing bonuses — until credit markets seize, companies collapse, and millions of investors lose their jobs and homes....And news anchors and television personalities who have their own shows, Web sites, blogs and pages on Facebook.com and MySpace.com will send Twitter messages until the last follower falls into a coma.

At the height of the subprime folly, there was not enough outside regulation or inner compunction to restrain heedless excess. It’s too late for traders, but that economic mess should be a lesson for those who traffic in information. Like bankers who never feel they’ve earned enough, television anchors and correspondents apparently never feel that they have communicated enough....It’s not just television, of course. Ordinary people, bloggers and even columnists and book authors, who all already have platforms for their views, feel compelled to share their split-second aperçus, no matter how mundane.

Those who say Twitter is a harmless pastime, which skeptics are free to ignore, are ignoring the corrosive secondary effects. We already live in an era of me-first journalism, autobiographical blogs and first-person reportage. Even daytime cable news is clotted with Lou Dobbsian anchors who ooze self-regard and intemperate opinion...On-air meltdowns are the new scoops. The CNBC correspondent Rick Santelli, a former trader, delivered a rant last week on the floor of the Chicago Mercantile Exchange about the Obama administration’s mortgage bailout proposal.

Mr. Santelli, it should be noted, has not lost all restraint: he does not yet have his own Twitter account. Fans created one for him, in case he changes his mind. “Just to let everyone know,” one follower explained. “This is NOT Rick’s account, but it is a place holder for him as soon as WE can convince him to join Twitter. :)”

And that space has, as of 4:20 on Friday afternoon, 158 followers. Twitterers who maintain that their messages must have meaning since they have an audience should keep Mr. Santelli’s void in mind. There are always some people who, given the chance, will respond to anything, even nothing.

How early abuse in humans changes the adult brain.

Studies on rat models have shown that affectionate mothering alters gene expression to dampen physiological responses to stress, while early abuse has the opposite effect. Now these basic results have been extended to humans by McGowan et al., who carried out a study of people who had committed suicide. They found that people who were abused or neglected as children showed epigenetic alterations that likely made them more biologically sensitive to stress. Regulation of the glucocorticoid receptor gene NR3C1 was altered in individuals who had been abused as children, in a manner consistent with predictions derived from a rodent model in which early postnatal experience influences adult responses to stress. (Decreases in the expression of this receptor increase reactivity to stress.) I pass on their abstract, and here is a nice explanation of what epigenetic changes are (see also the review by Benedict Carey).
Maternal care influences hypothalamic-pituitary-adrenal (HPA) function in the rat through epigenetic programming of glucocorticoid receptor expression. In humans, childhood abuse alters HPA stress responses and increases the risk of suicide. We examined epigenetic differences in a neuron-specific glucocorticoid receptor (NR3C1) promoter between postmortem hippocampus obtained from suicide victims with a history of childhood abuse and those from either suicide victims with no childhood abuse or controls. We found decreased levels of glucocorticoid receptor mRNA, as well as mRNA transcripts bearing the glucocorticoid receptor 1F splice variant and increased cytosine methylation of an NR3C1 promoter. Patch-methylated NR3C1 promoter constructs that mimicked the methylation state in samples from abused suicide victims showed decreased NGFI-A transcription factor binding and NGFI-A–inducible gene transcription. These findings translate previous results from rat to humans and suggest a common effect of parental care on the epigenetic regulation of hippocampal glucocorticoid receptor expression.

Monday, March 02, 2009

For a tranquil start to your week, Debussy with flowers

I got an email from the fellow who made this video asking if he could use my YouTube video recording of the Debussy Reverie. I said 'sure, go ahead'.... I'm not too keen on the electronic 'enhancements' he added to my basic piano track to make the first half of the video, but here it is...

Biased minds make better inferences.

Here is an interesting open-access article, "Homo Heuristicus: Why Biased Minds Make Better Inferences," from the first issue of a new journal from Wiley Interscience, Topics in Cognitive Science. (Check out this free online first issue; there are a number of other fascinating articles.) It makes the point that a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies. Its abstract:
Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: (a) the discovery of less-is-more effects; (b) the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; (c) an advancement from vague labels to computational models of heuristics; (d) the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an "adaptive toolbox;" and (e) the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people's adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.
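To give a concrete flavor of what a "computational model of a heuristic" can look like, here is a minimal Python sketch of the classic take-the-best heuristic from this literature; the cues, their ordering, and the example values are invented for illustration, not taken from the article.

```python
# A minimal sketch of the take-the-best heuristic: compare two options on cues
# ordered by validity and decide on the first cue that discriminates, ignoring
# all remaining information. Cue names and values below are hypothetical.

def take_the_best(option_a, option_b, cues_by_validity):
    """Return 'A' or 'B' based on the first discriminating cue, or 'guess'."""
    for cue in cues_by_validity:          # cues ordered from most to least valid
        a, b = option_a.get(cue), option_b.get(cue)
        if a != b:                        # first cue that discriminates decides
            return "A" if a else "B"
    return "guess"                        # no cue discriminates: pick at random

# Hypothetical example: which of two cities is larger?
city_a = {"has_airport": True, "is_capital": False, "has_university": True}
city_b = {"has_airport": True, "is_capital": True, "has_university": True}
print(take_the_best(city_a, city_b, ["has_airport", "is_capital", "has_university"]))
```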

A common brain substrate for evaluating physical and social space.

From Yamakawa et al., work that is consonant with models of embodied cognition (cf. George Lakoff and Mark Johnson):
Across cultures, social relationships are often thought of, described, and acted out in terms of physical space (e.g. “close friends” “high lord”). Does this cognitive mapping of social concepts arise from shared brain resources for processing social and physical relationships? Using fMRI, we found that the tasks of evaluating social compatibility and of evaluating physical distances engage a common brain substrate in the parietal cortex. The present study shows the possibility of an analytic brain mechanism to process and represent complex networks of social relationships. Given parietal cortex's known role in constructing egocentric maps of physical space, our present findings may help to explain the linguistic, psychological and behavioural links between social and physical space.

Friday, February 27, 2009

Gesture and language acquisition

Gestures precede speech development and, after speech develops, continue to enrich the communication process. Comparing how young children and their parents used gesture in their communications with analyses of socioeconomic status and of the child's vocabulary at age 54 months, Rowe and Goldin-Meadow find disparities in gesture use that precede vocabulary disparities (children from lower socioeconomic brackets tend to have smaller vocabularies than children from higher socioeconomic brackets). Their abstract:
Children from low–socioeconomic status (SES) families, on average, arrive at school with smaller vocabularies than children from high-SES families. In an effort to identify precursors to, and possible remedies for, this inequality, we videotaped 50 children from families with a range of different SES interacting with parents at 14 months and assessed their vocabulary skills at 54 months. We found that children from high-SES families frequently used gesture to communicate at 14 months, a relation that was explained by parent gesture use (with speech controlled). In turn, the fact that children from high-SES families have large vocabularies at 54 months was explained by children's gesture use at 14 months. Thus, differences in early gesture help to explain the disparities in vocabulary that children bring with them to school.
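As an aside for the statistically inclined, the "explained by" language in the abstract is mediation logic: the SES-vocabulary gap shrinks once early gesture is taken into account. Here is a toy Python illustration with simulated numbers; it is not the authors' data or their exact statistical model.

```python
# A toy mediation illustration: SES -> child gesture at 14 months -> vocabulary
# at 54 months, using simulated data and plain least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 50
ses = rng.normal(size=n)                                        # family SES (standardized)
gesture_14m = 0.6 * ses + rng.normal(scale=0.8, size=n)         # child gesture use at 14 months
vocab_54m = 0.7 * gesture_14m + rng.normal(scale=0.8, size=n)   # vocabulary at 54 months

def slope(x, y):
    """Ordinary least-squares slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = slope(ses, vocab_54m)                    # SES -> vocabulary, ignoring gesture
X = np.column_stack([np.ones(n), ses, gesture_14m])
direct = np.linalg.lstsq(X, vocab_54m, rcond=None)[0][1]   # SES effect with gesture controlled

print(f"total SES effect: {total:.2f}, direct effect controlling gesture: {direct:.2f}")
# If gesture mediates the relation, the direct effect shrinks toward zero.
```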

Followup on genes and language

I wanted to pass on some summary clips from a review by Berwick of the Chater et al. paper ("Language evolved to fit the human brain...") featured in a Feb. 12 post.
Is language more like fashion hemlines or more like the number of fingers on each hand? On the one hand, we know that all normal people, unlike any cats or fish, uniformly grow up speaking some language, just like having 5 fingers on each hand, so language must be part of what is unique to the human genome. However, if one is born in Beijing one winds up speaking a very different language than if one is born in Mumbai, so the number-of-fingers analogy is not quite correct.
The Chater et al. article:
...maintains that the linguistic particulars distinguishing Mandarin from Hindi cannot have arisen as genetically encoded and selected-for adaptations via at least one common route linking evolution and learning, the Baldwin–Simpson effect.

In the Baldwin–Simpson model, rather than direct selection for a trait, in this case a particular external behavior, there is selection for learning it. However, as is well known, this entrainment linking learning to genomic encoding works only if there is a close match between the pace of external change and genetic change, even though gene frequencies change only relatively slowly, plodding generation by generation. Applied to language evolution, the basic idea of Chater et al. is to use computer simulations to show that in general the linguistic regularities learners must acquire, such as whether sentences get packaged into verb–object order, e.g., eat apples, as in Mandarin, or object-verb order, e.g., apples eat, as in Hindi, can fluctuate too rapidly across generations to be captured and then encoded by the human genome as some kind of specialized “language instinct.” This finding runs counter to one popular view that these properties of human language were explicitly selected for, instead pointing to human language as largely adventitious, an exaptation, with many, perhaps most, details driven by culture. If this finding is correct, then the portion of the human genome devoted to language alone becomes correspondingly greatly reduced. There is no need, and more critically no informational space, for the genome to blueprint some intricate set of highly-modular, interrelated components for language, just as the genome does not spell out the precise neuron-to-neuron wiring of the developing brain.
Matters boil down to recursion, which I have mentioned in several previous posts.
Chater et al.'s report also points to a rare convergence between the results from 2 quite different fields and methodologies that have often been at odds: the simulation-based, culturally-oriented approach of the PNAS study and a recent, still controversial trend in one strand of modern theoretical linguistics. Both arrive at the same conclusion: a minimal human genome for language. The purely linguistic effort strips away all of the special properties of language, down to the bare-bones necessities distinguishing us from all other species, relegating such previously linguistic matters such as verb–object order vs. object–verb order to extralinguistic factors, such as a general nonhuman cognitive ability to process ordered sequences aligned like beads on a string. What remains? If this recent linguistic program is on the right track, there is in effect just one component left particular to human language, a special combinatorial competence: the ability to take individual items like 2 words, the and apple, and then “glue” them together, outputting a larger, structured whole, the apple, that itself can be manipulated as if it were a single object. This operation runs beyond mere concatenation, because the new object itself still has 2 parts, like water compounded from hydrogen and oxygen, along with the ability to participate in further chemical combinations. Thus this combinatorial operation can apply over and over again to its own output, recursively, yielding an infinity of ever more structurally complicated objects, ate the apple, John ate the apple, Mary knows John ate the apple, a property we immediately recognize as the hallmark of human language, an infinity of possible meaningful signs integrated with the human conceptual system, the algebraic closure of a recursive operator over our dictionary.

This open-ended quality is quite unlike the frozen 10- to 20-word vocalization repertoire that marks the maximum for any other animal species. If it is simply this combinatorial promiscuity that lies at the heart of human language, making “infinite use of finite means,” then Chater et al.'s claim that human language is an exaptation rather than a selected-for adaptation becomes not only much more likely but very nearly inescapable.
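To make that "glue" operation concrete, here is a toy Python sketch of a recursive combinatorial operation of the kind Berwick describes. It illustrates the general idea only (a binary operation whose output can re-enter the operation), not any particular linguistic formalism.

```python
# A toy sketch of the single combinatorial operation described above: glue two
# items into a new structured object that can itself be glued again.

def merge(left, right):
    """Combine two items into a structured whole that can re-enter merge."""
    return (left, right)

# Build ever larger structures by reapplying merge to its own output:
dp = merge("the", "apple")                 # ('the', 'apple')
vp = merge("ate", dp)                      # ('ate', ('the', 'apple'))
s1 = merge("John", vp)                     # ('John', ('ate', ('the', 'apple')))
s2 = merge("Mary", merge("knows", s1))     # recursion yields unbounded structures
print(s2)
```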

Thursday, February 26, 2009

Envy and Schadenfreude in the brain.

Takahashi et al. show that experiencing envy at another person's success activates pain-related neural circuitry, whereas experiencing schadenfreude--delight at someone else's misfortune--activates reward-related neural circuitry. A graphic from the Perspectives article by Lieberman and Eisenberger:


The pain and pleasure systems. The pain network consists of the dorsal anterior cingulate cortex (dACC), insula (Ins), somatosensory cortex (SSC), thalamus (Thal), and periaqueductal gray (PAG). This network is implicated in physical and social pain processes. The reward or pleasure network consists of the ventral tegmental area (VTA), ventral striatum (VS), ventromedial prefrontal cortex (VMPFC), and the amygdala (Amyg). This network is implicated in physical and social rewards.

Fetal testosterone predicts male-typical play.

In a sample of 212 children (112 male, 100 female), Auyeung et al. have found a significant relationship between fetal testosterone and sexually differentiated play behavior in both boys and girls.
Mammals, including humans, show sex differences in juvenile play behavior. In rodents and nonhuman primates, these behavioral sex differences result, in part, from sex differences in androgens during early development. Girls exposed to high levels of androgen prenatally, because of the genetic disorder congenital adrenal hyperplasia, show increased male-typical play, suggesting similar hormonal influences on human development, at least in females. Here, we report that fetal testosterone measured from amniotic fluid relates positively to male-typical scores on a standardized questionnaire measure of sex-typical play in both boys and girls. These results show, for the first time, a link between fetal testosterone and the development of sex-typical play in children from the general population, and are the first data linking high levels of prenatal testosterone to increased male-typical play behavior in boys.

Wednesday, February 25, 2009

Monoamine oxidase A gene predicts aggression following provocation

From McDermott et al. :
Monoamine oxidase A gene (MAOA) has earned the nickname “warrior gene” because it has been linked to aggression in observational and survey-based studies. However, no controlled experimental studies have tested whether the warrior gene actually drives behavioral manifestations of these tendencies. We report an experiment, synthesizing work in psychology and behavioral economics, which demonstrates that aggression occurs with greater intensity and frequency as provocation is experimentally manipulated upwards, especially among low activity MAOA (MAOA-L) subjects. In this study, subjects paid to punish those they believed had taken money from them by administering varying amounts of unpleasantly hot (spicy) sauce to their opponent. There is some evidence of a main effect for genotype and some evidence for a gene by environment interaction, such that MAOA is less associated with the occurrence of aggression in a low provocation condition, but significantly predicts such behavior in a high provocation situation. This new evidence for genetic influences on aggression and punishment behavior complicates characterizations of humans as “altruistic” punishers and supports theories of cooperation that propose mixed strategies in the population. It also suggests important implications for the role of individual variance in genetic factors contributing to everyday behaviors and decisions.

Musical training enhances linguistic abilities in children

An interesting report from Moreno et al. in the journal Cerebral Cortex. They:
...conducted a longitudinal study with 32 nonmusician children over 9 months to determine 1) whether functional differences between musician and nonmusician children reflect specific predispositions for music or result from musical training and 2) whether musical training improves nonmusical brain functions such as reading and linguistic pitch processing. Event-related brain potentials were recorded while 8-year-old children performed tasks designed to test the hypothesis that musical training improves pitch processing not only in music but also in speech. Following the first testing sessions nonmusician children were pseudorandomly assigned to music or to painting training for 6 months and were tested again after training using the same tests. After musical (but not painting) training, children showed enhanced reading and pitch discrimination abilities in speech. Remarkably, 6 months of musical training thus suffices to significantly improve behavior and to influence the development of neural processes as reflected in specific pattern of brain waves. These results reveal positive transfer from music to speech and highlight the influence of musical training. Finally, they demonstrate brain plasticity in showing that relatively short periods of training have strong consequences on the functional organization of the children's brain.

Tuesday, February 24, 2009

Training your working memory increases your cortical Dopamine D1 receptors

McNab et al. demonstrate training-induced brain changes that indicate an unexpectedly high level of plasticity of our cortical dopamine D1 system and illustrate the mutual interdependence of our behavior and the underlying brain biochemistry. The training included a visuo-spatial working memory task, a backwards digit span task and a letter span task. These are similar to the n-back tests that I have mentioned in previous posts. The authors had previously shown increased prefrontal and parietal activity after training of working memory. Their abstract:
Working memory is a key function for human cognition, dependent on adequate dopamine neurotransmission. Here we show that the training of working memory, which improves working memory capacity, is associated with changes in the density of cortical dopamine D1 receptors. Fourteen hours of training over 5 weeks was associated with changes in both prefrontal and parietal D1 binding potential. This plasticity of the dopamine D1 receptor system demonstrates a reciprocal interplay between mental activity and brain biochemistry in vivo.
A clip from their methods description:
Participants performed working memory (WM) tasks with a difficulty level close to their individual capacity limit for about 35 min per day over a period of 5 weeks (8–10). Thirteen volunteers (healthy males 20 to 28 years old) performed the 5-week WM training. Five computer-based WM tests (three visuospatial and two verbal) were used to measure each participant's WM capacity before and after training, and they showed a significant improvement of overall WM capacity (paired t test, t = 11.1, P less than 0.001). The binding potential (BP) of D1 and D2 receptors was measured with positron emission tomography (PET) while the participants were resting, before and after training, using the radioligands [11C]SCH23390 and [11C]Raclopride, respectively.
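For readers curious what such adaptive working-memory tasks look like in practice, here is a bare-bones Python sketch of an n-back-style sequence generator and scorer. It is a generic illustration of the task family mentioned above, not the authors' training software or parameters.

```python
# A minimal sketch of an n-back trial sequence and scorer. All parameters
# (sequence length, target rate, alphabet) are hypothetical.
import random

def make_sequence(length=20, n=2, alphabet="ABCD", target_rate=0.3, seed=1):
    """Generate a letter sequence in which roughly target_rate of items repeat
    the item shown n positions earlier (the 'targets')."""
    rng = random.Random(seed)
    seq = []
    for i in range(length):
        if i >= n and rng.random() < target_rate:
            seq.append(seq[i - n])                       # force an n-back match
        else:
            choices = [c for c in alphabet if i < n or c != seq[i - n]]
            seq.append(rng.choice(choices))              # avoid accidental matches
    return seq

def score(seq, responses, n=2):
    """responses[i] is True if the participant said item i matched item i-n."""
    hits = sum(1 for i in range(n, len(seq))
               if responses[i] and seq[i] == seq[i - n])
    targets = sum(1 for i in range(n, len(seq)) if seq[i] == seq[i - n])
    return hits, targets

seq = make_sequence()
perfect = [i >= 2 and seq[i] == seq[i - 2] for i in range(len(seq))]
print(seq, score(seq, perfect))
```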

Malthusian information famine

A view of our information future from Charles Seife:
...Vast amounts of digital memory will change the relationship that humans have with information....For the first time, we as a species have the ability to remember everything that ever happens to us. For millennia, we were starving for information to act as raw material for ideas. Now, we are about to have a surfeit.

Alas, there will be famine in the midst of all that plenty. There are some hundred million blogs, and the number is roughly doubling every year. The vast majority are unreadable. Several hundred billion e-mail messages are sent every day; most of it—current estimates run around 70%—is spam. There seems to be a Malthusian principle at work: information grows exponentially, but useful information grows only linearly. Noise will drown out signal. The moment that we, as a species, finally have the memory to store our every thought, etch our every experience into a digital medium, it will be hard to avoid slipping into a Borgesian nightmare where we are engulfed by our own mental refuse.