Showing posts with label music. Show all posts

Monday, February 27, 2017

Monday morning Schubert

On Sunday, Feb. 19, I gave a recital dedicated to the memory of David Goldberger, with whom I had performed several four-hands recitals in past years. He gave a recital on his 90th birthday in the summer of 2015, after his diagnosis with stomach cancer, and died in May of 2016. Franz Schubert was his passion, and his magnum opus on the life and music of Schubert was left unfinished at his death. Here is one of the pieces I played at his memorial recital.


Monday, February 20, 2017

Musical evolution in the lab exhibits rhythmic universals

Ravignani et al. have managed to grow the rhythmic universals of human music in the laboratory, suggesting that they arise from human cognitive and biological biases.  Their abstract:
Music exhibits some cross-cultural similarities, despite its variety across the world. Evidence from a broad range of human cultures suggests the existence of musical universals, here defined as strong regularities emerging across cultures above chance. In particular, humans demonstrate a general proclivity for rhythm, although little is known about why music is particularly rhythmic and why the same structural regularities are present in rhythms around the world. We empirically investigate the mechanisms underlying musical universals for rhythm, showing how music can evolve culturally from randomness. Human participants were asked to imitate sets of randomly generated drumming sequences and their imitation attempts became the training set for the next participants in independent transmission chains. By perceiving and imitating drumming sequences from each other, participants turned initially random sequences into rhythmically structured patterns. Drumming patterns developed into rhythms that are more structured, easier to learn, distinctive for each experimental cultural tradition and characterized by all six statistical universals found among world music; the patterns appear to be adapted to human learning, memory and cognition. We conclude that musical rhythm partially arises from the influence of human cognitive and biological biases on the process of cultural evolution.
And, some background from their article describing the six statistical universals found in world music:
Six rhythmic features can be considered human universals, showing a greater than chance frequency overall and appearing in all geographic regions of the world. These statistical universals are: 
-A regularly spaced (isochronous) underlying beat, akin to an implicit metronome. 
-Hierarchical organization of beats of unequal strength, so that some events in time are marked with respect to others. 
-Grouping of beats in two (for example, marches) or three (for example, waltzes). 
-A preference for binary (2-beat) groupings. 
-Clustering of beat durations around a few values distributed in less than five durational categories. 
-The use of durations from different categories to construct riffs, that is, rhythmic motifs or tunes.
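The transmission-chain procedure lends itself to a toy simulation. The sketch below is not the authors' model; it simply passes a random sequence of inter-onset intervals through simulated "participants" who reproduce it with motor noise plus a pull toward a coarse temporal grid (both parameters invented for illustration). The number of distinct duration categories collapses across generations, echoing the fifth universal above.

```python
import random

def imitate(pattern, noise=0.05, grid=0.25):
    """One simulated 'participant': reproduce each inter-onset interval
    with motor noise, plus a perceptual pull toward a coarse temporal
    grid (a stand-in for categorical memory; parameters are invented,
    not taken from the paper)."""
    out = []
    for ioi in pattern:
        heard = ioi + random.gauss(0, noise)   # noisy perception/production
        snapped = round(heard / grid) * grid   # bias toward few duration categories
        out.append(max(grid, snapped))
    return out

def n_categories(pattern):
    """Count distinct duration values (rounded to absorb float error)."""
    return len({round(x, 6) for x in pattern})

random.seed(1)
pattern = [random.uniform(0.1, 1.0) for _ in range(12)]  # random starting sequence
start_cats = n_categories(pattern)
for generation in range(20):   # one transmission chain of 20 'participants'
    pattern = imitate(pattern)
end_cats = n_categories(pattern)
print(start_cats, "->", end_cats)
```

With these settings every chain ends with at most five duration categories, matching the "fewer than five durational categories" universal in spirit; the real experiment also showed gains in learnability and chain-specific structure that this sketch omits.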

Tuesday, February 14, 2017

How our brains make meaning, with the help of a little LSD

Interesting work from Preller et al:

Highlights
•LSD-induced effects are blocked by the 5-HT2A receptor antagonist ketanserin 
•LSD increased the attribution of meaning to previously meaningless music 
•Stimulation of the 5-HT2A receptor is crucial for the generation of meaning 
•Changes in personal meaning attribution are mediated by cortical midline structures
Summary
A core aspect of the human self is the attribution of personal relevance to everyday stimuli enabling us to experience our environment as meaningful. However, abnormalities in the attribution of personal relevance to sensory experiences are also critical features of many psychiatric disorders. Despite their clinical relevance, the neurochemical and anatomical substrates enabling meaningful experiences are largely unknown. Therefore, we investigated the neuropharmacology of personal relevance processing in humans by combining fMRI and the administration of the mixed serotonin (5-HT) and dopamine receptor (R) agonist lysergic acid diethylamide (LSD), well known to alter the subjective meaning of percepts, with and without pretreatment with the 5-HT2AR antagonist ketanserin. General subjective LSD effects were fully blocked by ketanserin. In addition, ketanserin inhibited the LSD-induced attribution of personal relevance to previously meaningless stimuli and modulated the processing of meaningful stimuli in cortical midline structures. These findings point to the crucial role of the 5-HT2AR subtype and cortical midline regions in the generation and attribution of personal relevance. Our results thus increase our mechanistic understanding of personal relevance processing and reveal potential targets for the treatment of psychiatric illnesses characterized by alterations in personal relevance attribution.

Monday, September 19, 2016

Defining brain areas involved in music perception.

From Sihvonen et al:
Although acquired amusia is a relatively common disorder after stroke, its precise neuroanatomical basis is still unknown. To evaluate which brain regions form the neural substrate for acquired amusia and its recovery, we performed a voxel-based lesion-symptom mapping (VLSM) and morphometry (VBM) study with 77 human stroke subjects. Structural MRIs were acquired at acute and 6 month poststroke stages. Amusia and aphasia were behaviorally assessed at acute and 3 month poststroke stages using the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA) and language tests. VLSM analyses indicated that amusia was associated with a lesion area comprising the superior temporal gyrus, Heschl's gyrus, insula, and striatum in the right hemisphere, clearly different from the lesion pattern associated with aphasia. Parametric analyses of MBEA Pitch and Rhythm scores showed extensive lesion overlap in the right striatum, as well as in the right Heschl's gyrus and superior temporal gyrus. Lesions associated with Rhythm scores extended more superiorly and posterolaterally. VBM analysis of volume changes from the acute to the 6 month stage showed a clear decrease in gray matter volume in the right superior and middle temporal gyri in nonrecovered amusic patients compared with nonamusic patients. This increased atrophy was more evident in anterior temporal areas in rhythm amusia and in posterior temporal and temporoparietal areas in pitch amusia. Overall, the results implicate right temporal and subcortical regions as the crucial neural substrate for acquired amusia and highlight the importance of different temporal lobe regions for the recovery of amusia after stroke.
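The core of VLSM is conceptually simple: for each voxel, split patients by whether their lesion covers it and compare their behavioral scores. Below is a minimal sketch with invented patient data; real VLSM works on registered lesion maps with permutation-corrected statistics and lesion-size covariates, none of which appear here.

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic comparing two samples (no p-value lookup here)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Toy data: (set of lesioned regions, MBEA-style music score). All invented.
patients = [
    ({"STG", "HG"}, 14), ({"STG"}, 16), ({"HG", "striatum"}, 15),
    ({"occipital"}, 27), ({"parietal"}, 26), (set(), 28), ({"occipital"}, 25),
]

for voxel in ["STG", "HG", "striatum", "occipital"]:
    hit = [score for lesion, score in patients if voxel in lesion]
    miss = [score for lesion, score in patients if voxel not in lesion]
    if len(hit) > 1 and len(miss) > 1:   # need variance estimates on both sides
        # Negative t: patients lesioned here score lower (a candidate substrate)
        print(voxel, round(welch_t(hit, miss), 2))
```

In this toy data the temporal-lobe "voxels" come out strongly negative and the occipital one does not, mirroring the logic (though none of the statistics) of the study's right-hemisphere findings.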

Friday, July 22, 2016

Cultural differences in music perception

McDermott et al. find that the isolated Tsimane people, who live in the Amazonian rainforest in northwest Bolivia, have no preference for consonance over dissonance.
Music is present in every culture, but the degree to which it is shaped by biology remains debated. One widely discussed phenomenon is that some combinations of notes are perceived by Westerners as pleasant, or consonant, whereas others are perceived as unpleasant, or dissonant. The contrast between consonance and dissonance is central to Western music and its origins have fascinated scholars since the ancient Greeks. Aesthetic responses to consonance are commonly assumed by scientists to have biological roots, and thus to be universally present in humans. Ethnomusicologists and composers, in contrast, have argued that consonance is a creation of Western musical culture. The issue has remained unresolved, partly because little is known about the extent of cross-cultural variation in consonance preferences. Here we report experiments with the Tsimane’—a native Amazonian society with minimal exposure to Western culture—and comparison populations in Bolivia and the United States that varied in exposure to Western music. Participants rated the pleasantness of sounds. Despite exhibiting Western-like discrimination abilities and Western-like aesthetic responses to familiar sounds and acoustic roughness, the Tsimane’ rated consonant and dissonant chords and vocal harmonies as equally pleasant. By contrast, Bolivian city- and town-dwellers exhibited significant preferences for consonance, albeit to a lesser degree than US residents. The results indicate that consonance preferences can be absent in cultures sufficiently isolated from Western music, and are thus unlikely to reflect innate biases or exposure to harmonic natural sounds. The observed variation in preferences is presumably determined by exposure to musical harmony, suggesting that culture has a dominant role in shaping aesthetic responses to music.

Wednesday, April 06, 2016

Why sad music can make us feel good.

As an update to a previous MindBlog post on why we like sad music, I want to note Ojiaku's brief mention of several articles on this subject.
Sad music might make people feel vicarious unpleasant emotions, found a study published last year in Frontiers in Psychology. But this experience can ultimately be pleasurable because it allows a negative emotion to exist indirectly, and at a safe distance. Instead of feeling the depths of despair, people can feel nostalgia for a time when they were in a similar emotional state: a non-threatening way to remember a sadness.
People who are very empathetic are more likely to take pleasure in the emotional experience of sad music, according to another study in Frontiers in Psychology. Others enjoy sad songs because they help them return to an emotionally balanced state, according to a review in Frontiers in Human Neuroscience, published in 2015. And those more open to varied experiences might enjoy the songs because the unique emotions that come up when listening to the music fulfill their need for novelty in thoughts and feelings.
From the Frontiers in Human Neuroscience abstract:
We offer a framework to account for how listening to sad music can lead to positive feelings, contending that this effect hinges on correcting an ongoing homeostatic imbalance. Sadness evoked by music is found pleasurable: (1) when it is perceived as non-threatening; (2) when it is aesthetically pleasing; and (3) when it produces psychological benefits such as mood regulation, and empathic feelings, caused, for example, by recollection of and reflection on past events.

Monday, March 21, 2016

Rachmaninoff Morceaux de fantasie Op. 3, No. 4, Polichinelle

This is the last of the pieces from my recent recitals that I want to pass on to readers - a robust beginning for a Monday morning!


Wednesday, March 16, 2016

Friday, March 11, 2016

Brahms Waltzes Op. 39, Nos. 1, 13, 14

Recorded on my Steinway B, now in my Fort Lauderdale condo. I did this piece in recitals in Madison WI and Fort Lauderdale in the past 6 months, and now have made a good quality video recording for my youtube channel. I plan to do this with several of the pieces played at the recitals.


The first of the waltzes has a very robust opening that always brings back memories of listening to a Saturday morning radio program produced by KTBC in Austin, Texas, that invited students taking music lessons to perform a piece, which I dutifully did when I was 12. The program’s opening music was the first of these Brahms waltzes, and I couldn’t imagine ever being able to play something that sounded so difficult.

Saturday, March 05, 2016

Chopin Trois Écossaises



Recorded on my Steinway B, now in my Fort Lauderdale condo. I did this piece in recitals in Madison WI and Fort Lauderdale in the past 6 months, and now have made a good quality video recording for my youtube channel. I plan to do this with several of the pieces played at the recitals.

Wednesday, February 17, 2016

Finally...a brain area specialized for music has been found.

Norman-Haignere, Kanwisher, and McDermott have devised a new method to computationally dissect functional magnetic resonance imaging scans of the brain. It reveals an area within the major crevice, or sulcus, of the auditory cortex in the temporal lobe just above the ears that responds to music of any kind (drumming, whistling, pop songs, rap, anything melodic or rhythmic), independent of general properties of sound like pitch, spectrotemporal modulation, and frequency. They also found an area for speech not explainable by standard acoustic features.

It is possible that music sensitivity is more ancient and fundamental to the human brain than speech perception, with speech having evolved from music.
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles (“components”) whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech.
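The "weighted combinations" idea can be illustrated with a small least-squares example. The component profiles, weights, and "sounds" below are all made up; the paper's method infers both the components and the weights from data, which the plain regression sketched here cannot do on its own.

```python
def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ [x, y] = [e, f] (Cramer's rule)."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Two canonical response profiles across five "sounds" (hypothetical numbers):
music_comp  = [0.1, 0.2, 0.9, 1.0, 0.8]   # responds to melodic/rhythmic sounds
speech_comp = [1.0, 0.9, 0.1, 0.2, 0.1]   # responds to speech sounds

true_w = (0.3, 0.7)  # this simulated voxel: mostly music-selective
voxel = [true_w[0] * s + true_w[1] * m
         for s, m in zip(speech_comp, music_comp)]

# Recover the weights by least squares (normal equations):
# minimize ||voxel - w1*speech_comp - w2*music_comp||
sts = sum(s * s for s in speech_comp)
stm = sum(s * m for s, m in zip(speech_comp, music_comp))
mtm = sum(m * m for m in music_comp)
stv = sum(s * v for s, v in zip(speech_comp, voxel))
mtv = sum(m * v for m, v in zip(music_comp, voxel))
w1, w2 = solve2(sts, stm, stm, mtm, stv, mtv)
print(round(w1, 3), round(w2, 3))  # recovers the 0.3 (speech) and 0.7 (music) weights
```

This also hints at why music selectivity was "weak in raw voxel responses": a voxel dominated by a 0.3/0.7 mixture never looks purely music-selective until the components are separated out.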

Wednesday, December 30, 2015

Dorsal and Ventral Right Brain Pathways for Prosody

Sammler et al. show that the emotional content, or vocal tone, of our speech is processed in dorsal and ventral streams of the right hemisphere, much as phonology, syntax, and semantics are processed by dorsal and ventral streams of the left hemisphere:
Our vocal tone—the prosody—contributes a lot to the meaning of speech beyond the actual words. Indeed, the hesitant tone of a “yes” may be more telling than its affirmative lexical meaning. The human brain contains dorsal and ventral processing streams in the left hemisphere that underlie core linguistic abilities such as phonology, syntax, and semantics. Whether or not prosody — a reportedly right-hemispheric faculty — involves analogous processing streams is a matter of debate. Functional connectivity studies on prosody leave no doubt about the existence of such streams, but opinions diverge on whether information travels along dorsal or ventral pathways. Here we show, with a novel paradigm using audio morphing combined with multimodal neuroimaging and brain stimulation, that prosody perception takes dual routes along dorsal and ventral pathways in the right hemisphere. In experiment 1, categorization of speech stimuli that gradually varied in their prosodic pitch contour (between statement and question) involved (1) an auditory ventral pathway along the superior temporal lobe and (2) auditory-motor dorsal pathways connecting posterior temporal and inferior frontal/premotor areas. In experiment 2, inhibitory stimulation of right premotor cortex as a key node of the dorsal stream decreased participants’ performance in prosody categorization, arguing for a motor involvement in prosody perception. These data draw a dual-stream picture of prosodic processing that parallels the established left-hemispheric multi-stream architecture of language, but with relative rightward asymmetry.

Thursday, December 03, 2015

The evolution of music from emotional signals

I want to pass on the slightly edited abstract of a recent article on the evolutionary origins of music, "Music evolution and neuroscience," in Progress in Brain Research, written by my Univ. of Wisconsin colleague Charles Snowdon.
There have been many attempts to discuss the evolutionary origins of music. We review theories of music origins and take the perspective that music is originally derived from emotional signals in both humans and animals. An evolutionary approach has two components: First, is music adaptive? How does it improve reproductive success? Second, what, if any, are the phylogenetic origins of music? Can we find evidence of music in other species? We show that music has adaptive value through emotional contagion, social cohesion, and improved well-being. We trace the roots of music through the emotional signals of other species suggesting that the emotional aspects of music have a long evolutionary history. We show how music and speech are closely interlinked with the musical aspects of speech conveying emotional information. We describe acoustic structures that communicate emotion in music and present evidence that these emotional features are widespread among humans and also function to induce emotions in animals. Similar acoustic structures are present in the emotional signals of nonhuman animals. We conclude with a discussion of music designed specifically to induce emotional states in animals, both cotton top tamarin monkeys and domestic cats.

Wednesday, November 25, 2015

Musical expertise modulates the brain’s entrainment to music.

Yet another study, by Doelling and Poeppel, shows effects of musical training on the brain and supports a role for cortical oscillatory activity in music perception and cognition:

Significance
We demonstrate that cortical oscillatory activity in both low (less than 8 Hz) and high (15–30 Hz) frequencies is tightly coupled to behavioral performance in musical listening, in a bidirectional manner. In light of previous work on speech, we propose a framework in which the brain exploits the temporal regularities in music to accurately parse individual notes from the sound stream using lower frequencies (entrainment) and in higher frequencies to generate temporal and content-based predictions of subsequent note events associated with predictive models.
Abstract
Recent studies establish that cortical oscillations track naturalistic speech in a remarkably faithful way. Here, we test whether such neural activity, particularly low-frequency (less than 8 Hz; delta–theta) oscillations, similarly entrain to music and whether experience modifies such a cortical phenomenon. Music of varying tempi was used to test entrainment at different rates. In three magnetoencephalography experiments, we recorded from nonmusicians, as well as musicians with varying years of experience. Recordings from nonmusicians demonstrate cortical entrainment that tracks musical stimuli over a typical range of tempi, but not at tempi below 1 note per second. Importantly, the observed entrainment correlates with performance on a concurrent pitch-related behavioral task. In contrast, the data from musicians show that entrainment is enhanced by years of musical training, at all presented tempi. This suggests a bidirectional relationship between behavior and cortical entrainment, a phenomenon that has not previously been reported. Additional analyses focus on responses in the beta range (∼15–30 Hz)—often linked to delta activity in the context of temporal predictions. Our findings provide evidence that the role of beta in temporal predictions scales to the complex hierarchical rhythms in natural music and enhances processing of musical content. This study builds on important findings on brainstem plasticity and represents a compelling demonstration that cortical neural entrainment is tightly coupled to both musical training and task performance, further supporting a role for cortical oscillatory activity in music perception and cognition.
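"Entrainment" here means the neural signal carries power locked to the stimulus rate. A minimal illustration with invented signals (not MEG data): compare the Fourier magnitude at the note rate for a signal that tracks the tempo versus pure noise.

```python
import math, random

def power_at(signal, freq, sr):
    """Magnitude of the discrete Fourier coefficient of `signal` at `freq` Hz."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / sr) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / sr) for i, x in enumerate(signal))
    return math.hypot(re, im) / n

sr, secs, tempo = 100, 10, 2.0   # 2 notes per second, within the entrainment range
t = [i / sr for i in range(sr * secs)]

# A "neural" signal that tracks the note rate (with a phase lag and heavy noise)
# versus one that does not. Both are simulated for illustration only.
random.seed(0)
entrained = [math.cos(2 * math.pi * tempo * x + 0.6) + random.gauss(0, 1) for x in t]
random.seed(1)
noise_only = [random.gauss(0, 1) for _ in t]

p_ent = power_at(entrained, tempo, sr)
p_noise = power_at(noise_only, tempo, sr)
print(p_ent > p_noise)  # the entrained signal carries far more power at the tempo
```

The study's measure is more refined (phase coherence across trials, per tempo, per group), but the underlying quantity is this kind of frequency-specific tracking of the stimulus.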

Wednesday, November 18, 2015

A personal note, the Steinway B now in Fort Lauderdale - some Chopin

After a few tense moments, my Steinway B has now been moved from Wisconsin to Florida.


I've upgraded my video and audio recording equipment, finally gotten the bugs out of the process, and thought I would pass on my first test recording, of a Chopin Nocturne that I plan to play at a recital next February here in Fort Lauderdale. The "vers. 3" refers to the fact that this is the third recording of this piece that I have put on my YouTube channel.

 

Tuesday, October 13, 2015

Musical expertise changes the brain's functional connectivity during audiovisual integration

Music notation reading encapsulates auditory, visual, and motor information in a highly organized manner and therefore provides a useful model for studying multisensory phenomena. Paraskevopoulos et al. show that large-scale functional brain networks underpinning audiovisual integration are organized differently in musicians and nonmusicians. They examined brain responses to congruent (the sound played matches the musical notation) and incongruent (the sound played differs from the notation) stimuli.
Multisensory integration engages distributed cortical areas and is thought to emerge from their dynamic interplay. Nevertheless, large-scale cortical networks underpinning audiovisual perception have remained undiscovered. The present study uses magnetoencephalography and a methodological approach to perform whole-brain connectivity analysis and reveals, for the first time to our knowledge, the cortical network related to multisensory perception. The long-term training-related reorganization of this network was investigated by comparing musicians to nonmusicians. Results indicate that nonmusicians rely on processing visual clues for the integration of audiovisual information, whereas musicians use a denser cortical network that relies mostly on the corresponding auditory information. These data provide strong evidence that cortical connectivity is reorganized due to expertise in a relevant cognitive domain, indicating training-related neuroplasticity.

Figure - Paradigm of an audiovisual congruent and incongruent trial. (A) A congruent trial. (B) An incongruent trial. The line “time” represents the duration of the presentation of the auditory and visual part of the stimulus. The last picture of each trial represents the intertrial stimulus in which subjects had to answer if the trial was congruent or incongruent.

Figure - Cortical network underpinning audiovisual integration. (Upper) Statistical parametric maps of the significant networks for the congruent > incongruent comparison. Networks presented are significant at P less than 0.001, FDR corrected. The color scale indicates t values. (Lower) Node strength of the significant networks for each comparison. Strength is represented by node size.

Thursday, August 06, 2015

Benefits of High School Music training.

Maybe my avoiding gym classes in high school by being in the marching band and chorus paid off with some brain benefits (in addition to my already being a pianist). This work from Tierney et al. also suggests that the nationwide savaging of high school music curricula is a really bad idea:
Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

Thursday, July 23, 2015

Universal features of human music.

I have read through a fascinating paper by Savage et al. that makes a convincing case for statistical universals in the structure and function of human music. I pass on the abstract and a few clips from the text. Motivated readers can request a PDF of the article from me.
Music has been called “the universal language of mankind.” Although contemporary theories of music evolution often invoke various musical universals, the existence of such universals has been disputed for decades and has never been empirically demonstrated. Here we combine a music-classification scheme with statistical analyses, including phylogenetic comparative methods, to examine a well-sampled global set of 304 music recordings. Our analyses reveal no absolute universals but strong support for many statistical universals that are consistent across all nine geographic regions sampled. These universals include 18 musical features that are common individually as well as a network of 10 features that are commonly associated with one another. They span not only features related to pitch and rhythm that are often cited as putative universals but also rarely cited domains including performance style and social context. These cross-cultural structural regularities of human music may relate to roles in facilitating group coordination and cohesion, as exemplified by the universal tendency to sing, play percussion instruments, and dance to simple, repetitive music in groups. Our findings highlight the need for scientists studying music evolution to expand the range of musical cultures and musical features under consideration. The statistical universals we identified represent important candidates for future investigation.
The 18 universal features:
Pitch: Music tends to use discrete pitches (1) to form nonequidistant scales (2) containing seven or fewer scale degrees per octave (3). Music also tends to use descending or arched melodic contours (4) composed of small intervals (5) of less than 750 cents (i.e., a perfect fifth or smaller).
Rhythm: Music tends to use an isochronous beat (6) organized according to metrical hierarchies (7) based on multiples of two or three beats (8)—especially multiples of two beats (9). This beat tends to be used to construct motivic patterns (10) based on fewer than five durational values (11).
Form: Music tends to consist of short phrases (12) less than 9 s long.
Instrumentation: Music tends to use both the voice (13) and (nonvocal) instruments (14), often together in the form of accompanied vocal song.
Performance style: Music tends to use the chest voice (i.e., modal register) (15) to sing words (16), rather than vocables (nonlexical syllables).
Social context: Music tends to be performed predominantly in groups (17) and by males (18). The bias toward male performance is true of singing, but even more so of instrumental performance.
The geographic distribution of the recordings analyzed:

The 304 recordings from the Garland Encyclopedia of World Music show a widespread geographic distribution. They are grouped into nine regions specified a priori by the Encyclopedia’s editors, as color-coded in the legend at bottom: North America (n = 33 recordings), Central/South America (39), Europe (40), Africa (21), the Middle East (35), South Asia (34), East Asia (34), Southeast Asia (14), and Oceania (54).
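A "statistical universal" in this sense is a feature that appears more often than chance would predict. As a toy illustration, with invented counts and a naive coin-flip null (the paper's actual chance model is more careful and its phylogenetic methods also correct for relatedness among cultures):

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing a feature in
    at least k of n recordings if it occurred at baseline rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers -- the paper's per-feature frequencies differ.
n_recordings = 304
observed = 260        # recordings showing, say, an isochronous beat
chance_rate = 0.5     # naive "coin flip" null, for illustration only

p_value = binom_sf(observed, n_recordings, chance_rate)
print(p_value < 0.001)  # far more common than this null would predict
```

The additional requirement, that the feature also appear in all nine sampled regions, guards against a feature being common only because one well-sampled region uses it heavily.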

Thursday, May 14, 2015

Literary (like musical and athletic) expertise shifts brain activity to the caudate nucleus.

Erhard et al. find that creative writing by expert versus amateur writers is associated with more activation in the caudate nucleus, the same area that becomes more active in expert versus amateur athletes and musicians. The increased recruitment of the basal ganglia network with increasing levels of expertise correlates with the behavioral automatization that facilitates complex cognitive tasks.
The aim of the present study was to explore brain activities associated with creativity and expertise in literary writing. Using functional magnetic resonance imaging (fMRI), we applied a real-life neuroscientific setting that consisted of different writing phases (brainstorming and creative writing; reading and copying as control conditions) to well-selected expert writers and to an inexperienced control group.
During creative writing, experts showed cerebral activation in a predominantly left-hemispheric fronto-parieto-temporal network. When compared to inexperienced writers, experts showed increased left caudate nucleus and left dorsolateral and superior medial prefrontal cortex activation. In contrast, less experienced participants recruited increasingly bilateral visual areas. During creative writing activation in the right cuneus showed positive association with the creativity index in expert writers.
High experience in creative writing seems to be associated with a network of prefrontal (mPFC and DLPFC) and basal ganglia (caudate) activation. In addition, our findings suggest that high verbal creativity specific to literary writing increases activation in the right cuneus associated with increased resources obtained for reading processes.

Wednesday, March 25, 2015

Expert listening to music alters gene transcription...So?

I can't resist commenting on a piece generated by PsyBlog, "Classical Music's Surprising Effect on Genes Vital to Memory and Learning," which promises to explain "How 20 minutes of Mozart affects the expression of genes vital to learning, memory and more…" It points to work of Järvelä and collaborators, whose abstract states:
To verify whether listening to classical music has any effect on human transcriptome, we performed genome-wide transcriptional profiling from the peripheral blood of participants after listening to classical music (n = 48), and after a control study without music exposure (n = 15). As musical experience is known to influence the responses to music, we compared the transcriptional responses of musically experienced and inexperienced participants separately with those of the controls. Comparisons were made based on two subphenotypes of musical experience: musical aptitude and music education. In musically experienced participants, we observed the differential expression of 45 genes (27 up- and 18 down-regulated) and 97 genes (75 up- and 22 down-regulated) respectively based on subphenotype comparisons...
Apart from issues of controls and sample sizes, there is the problem that almost any distinctive behavior (athletic engagement, meditation, whatever) can be shown to alter gene transcription. Presenting a trained (versus a naive) person with stimuli in their area of expertise would be expected to alter the "transcriptome" to support the brain processing required for that expertise, regardless of what the area is (music, visual art, literature, athletics). We're a long way from being able to make much sense of, or interpret, the statements that conclude the abstract:
...the up-regulated genes are primarily known to be involved in the secretion and transport of dopamine, neuron projection, protein sumoylation, long-term potentiation and dephosphorylation. Down-regulated genes are known to be involved in ATP synthase-coupled proton transport, cytolysis, and positive regulation of caspase, peptidase and endopeptidase activities. One of the most up-regulated genes, alpha-synuclein (SNCA), is located in the best linkage region of musical aptitude on chromosome 4q22.1 and is regulated by GATA2, which is known to be associated with musical aptitude. Several genes reported to regulate song perception and production in songbirds displayed altered activities, suggesting a possible evolutionary conservation of sound perception between species. We observed no significant findings in musically inexperienced participants.
To be sure, primitive first steps such as these are useful, but it is unfortunate when their popularization by blogs vying for attention proceeds to overinterpretation and hyperbole.