Showing posts with label vision.

Friday, August 07, 2015

The benefits of pupil orientation.

This is kinda neat, from the current issue of Science Magazine:
 
Slit-eyed animals have either vertical or horizontal pupils. It is unclear whether one orientation conveys any sort of competitive advantage over the other, and if so, under what circumstances. Banks et al. suggest that the optics of vertical pupil slits generally benefit predators, whereas the optics of horizontal slits benefit prey. Vertical slits are better for estimating object distance and distances along the ground—perfect for a predator stalking its prey. In contrast, horizontal slits are better for seeing objects on the horizon—ideal for prey seeing an approaching predator and deciding which way to flee.

Tuesday, April 28, 2015

Improving vision in older adults.

I'm now a Fort Lauderdale, Florida resident (except for 5 months of spring and summer in Madison, WI), and have several friends 85 and older who are still driving on the death-defying I-95 interstate that links Palm Beach, Fort Lauderdale, and Miami, even though their visual capabilities have clearly declined. This is an age cohort projected to increase by 350% between 2000 and 2050. One of the most obvious declines in their visual processing is in contrast sensitivity: resolving small changes in illumination and shape detail, especially at high spatial frequencies. DeLoss et al., in a study in the same vein as others reported in this blog (enter aging in the search box in the left column), show that simple discrimination exercises, done for 1.5 hr per day of testing and training over 7 days, resulted in performance that was not statistically different from that of younger, college-age adults prior to their training. (These were exercises of the sort currently available online; see Brainhq.com or Lumosity.com.) Here is the abstract, followed by figures illustrating the task employed.
A major problem for the rapidly growing population of older adults (age 65 and over) is age-related declines in vision, which have been associated with increased risk of falls and vehicle crashes. Research suggests that this increased risk is associated with declines in contrast sensitivity and visual acuity. We examined whether a perceptual-learning task could be used to improve age-related declines in contrast sensitivity. Older and younger adults were trained over 7 days using a forced-choice orientation-discrimination task with stimuli that varied in contrast with multiple levels of additive noise. Older adults performed as well after training as did college-age younger adults prior to training. Improvements transferred to performance for an untrained stimulus orientation and were not associated with changes in retinal illuminance. Improvements in far acuity in younger adults and in near acuity in older adults were also found. These findings indicate that behavioral interventions can greatly improve visual performance for older adults.

Example of the task used in the study. In each trial, subjects saw a Gabor patch at one of two standard orientations—25° clockwise (shown here) or 25° counterclockwise for training and testing trials, 45° clockwise or 45° counterclockwise for familiarization trials. After this Gabor patch disappeared, subjects saw a second stimulus and had to judge whether it was rotated clockwise or counterclockwise in comparison with the standard orientation (the examples shown here are rotated 15° clockwise and counterclockwise off the standard orientation, respectively). 
Example of contrast and noise levels used in the experiment. Gabor patches are displayed at 75% contrast in the top row and at 25% contrast in the bottom row. Stimuli were presented in five blocks (examples shown from left to right). There was no noise in the first block, but starting with the second block, stimuli were presented in Gaussian noise, with the noise level increasing in each subsequent block.
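For the curious, here is a minimal sketch of how a stimulus like the one described above can be generated. This is my own illustration, not the authors' code; all parameter values (patch size, grating wavelength, noise level) are placeholders.

```python
# A Gabor patch: a sinusoidal grating windowed by a Gaussian envelope,
# at a given orientation and contrast, optionally embedded in additive
# Gaussian noise (as in the later blocks of the experiment).
import numpy as np

def gabor_patch(size=256, wavelength=32.0, orientation_deg=25.0,
                sigma=40.0, contrast=0.75):
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    grating = np.cos(2 * np.pi * xr / wavelength)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return contrast * grating * envelope         # in [-contrast, contrast]

def add_gaussian_noise(stimulus, noise_sd=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(stimulus + rng.normal(0.0, noise_sd, stimulus.shape),
                   -1.0, 1.0)

standard = gabor_patch(orientation_deg=25.0, contrast=0.75)    # no noise
test = add_gaussian_noise(                                     # noisy block
    gabor_patch(orientation_deg=25.0 + 15.0, contrast=0.25), noise_sd=0.2)
```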
A clip from the NY Times review:
...the subjects watched 750 striped images that were rapidly presented on a computer screen with subtle changes in the visual “noise” surrounding them — like snow on a television. The viewer indicated whether the images were rotating clockwise or counterclockwise. The subject would hear a beep for every correct response.
Each session took an hour and a half. The exercises were taxing, although the subjects took frequent breaks. But after five sessions, the subjects had learned to home in more precisely on the images and to filter out the distracting visual noise. After the training, the older adults performed as well as those 40 years younger, before their own training.

Tuesday, April 21, 2015

Observing leadership emergence through interpersonal brain synchronization.

Interesting work from Jiang et al., who show that interpersonal neural synchronization is significantly higher between leaders and followers than between followers and followers, suggesting that leaders emerge by synchronizing their brain activity with that of their followers:
The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
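A note on method: the INS measure comes from hyperscanning, recording two (or more) brains at once and computing the coherence between their signals; the paper uses wavelet transform coherence. The sketch below uses ordinary magnitude-squared coherence from scipy as a simpler stand-in, on synthetic signals with an assumed fNIRS sampling rate.

```python
import numpy as np
from scipy.signal import coherence

fs = 10.0                       # assumed fNIRS sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)   # five minutes of "discussion"

shared = np.sin(2 * np.pi * 0.05 * t)            # common slow component
leader = shared + 0.5 * np.random.randn(t.size)  # leader's TPJ channel
follower = shared + 0.5 * np.random.randn(t.size)

freqs, coh = coherence(leader, follower, fs=fs, nperseg=512)
band = (freqs > 0.02) & (freqs < 0.1)            # slow band typical of fNIRS
print(f"mean LF-pair coherence, 0.02-0.1 Hz: {coh[band].mean():.2f}")
```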

Friday, January 23, 2015

We can see in the infrared!

The major part of my professional life was spent doing research on how the rod cells in our retinas change light into a nerve signal. (I just got a request from ResearchGate, a site on which scientists list their work, suggesting that I upload another of my old vision articles, in this case one that appeared in Nature - in 1965 - 50 years ago! - titled "Reaction of the Rhodopsin Chromophore with Sodium Borohydride".) Even though for the past 20 years or so I have focused on the topics covered by MindBlog, I occasionally see a vision article that takes me back to 'the old days'. A colleague from those days (Krzysztof Palczewski) and collaborators have recently done a nice piece of work demonstrating that we can actually expand our vision beyond the normal "visible" range of 400 (blue) to 720 (red) nanometer (nm) wavelengths into the longer-wavelength (lower-energy) infrared region emitted by infrared lasers. It turns out that the rhodopsin chromophore of my article above, retinal, which normally has its shape changed (isomerized) by absorbing one photon of visible light, can also be activated by a two-photon isomerization, especially at wavelengths above 900 nm. From their significance and abstract statements:
This study resolves a long-standing question about the ability of humans to perceive near infrared radiation (IR) and identifies a mechanism driving human IR vision. A few previous reports and our expanded psychophysical studies here reveal that humans can detect IR at wavelengths longer than 1,000 nm and perceive it as visible light, a finding that has not received a satisfactory physical explanation. We show that IR light activates photoreceptors through a nonlinear optical process.
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light. Our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and display quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound 11-cis-retinyl-propylamine Schiff base demonstrate the direct isomerization of visual chromophore by a two-photon chromophore isomerization. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments.
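The energetics are easy to check: a two-photon process scales with the square of light intensity (hence the quadratic dependence on laser power), and two photons at 1,000 nm jointly deliver the energy of a single 500 nm photon, well within the pigments' visible absorption band. A back-of-the-envelope calculation:

```python
# Photon energy E = hc / wavelength, expressed in electron volts.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron volt

def photon_energy_eV(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

print(photon_energy_eV(1000))       # ~1.24 eV, one IR photon
print(2 * photon_energy_eV(1000))   # ~2.48 eV, two IR photons together...
print(photon_energy_eV(500))        # ...match one 500 nm (green) photon
```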

Tuesday, December 23, 2014

Impact of literacy on visual processing.

From Pegado et al., a clear demonstration of how learning the act of reading enhances our visual processing:
How does learning to read affect visual processing? We addressed this issue by scanning adults who could not attend school during childhood and either remained illiterate or acquired partial literacy during adulthood (ex-illiterates). By recording event-related brain responses, we obtained a high-temporal resolution description of how illiterate and literate adults differ in terms of early visual responses. The results show that learning to read dramatically enhances the magnitude, precision, and invariance of early visual coding, within 200 ms of stimulus onset, and also enhances later neural activity. Literacy effects were found not only for the expected category of expertise (letter strings), but also extended to other visual stimuli, confirming the benefits of literacy on early visual processing.

Monday, September 29, 2014

Hearing and imagination shape what we see.

Vetter et al. have done the interesting experiment of blindfolding people and then scanning their brains while they listened to birds singing, traffic noise, or people talking. They were able to identify the category of sounds just by examining the pattern of activity in the primary visual cortex, thus making a nice demonstration of the interconnectedness of the brain's sensory systems.

Highlights
•Early visual cortex receives nonretinal input carrying abstract information 
•Both auditory perception and imagery generate consistent top-down input 
•Information feedback may be mediated by multisensory areas 
•Feedback is robust to attentional, but not visuospatial, manipulation
Summary
Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast, and spatial frequency via feedforward input from the lateral geniculate nucleus. However, the role of nonretinal influence on early visual cortex is so far insufficiently investigated despite much evidence that feedback connections greatly outnumber feedforward connections. Here, we explored in five fMRI experiments how information originating from audition and imagery affects the brain activity patterns in early visual cortex in the absence of any feedforward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of nonretinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multisensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but affected by orthogonal, cognitively demanding visuospatial processing. Crucially, the information fed down to early visual cortex is category specific and generalizes to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives nonretinal input from other brain areas when it is generated by auditory perception and/or imagery, and this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level, in line with predictive coding models.
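The "reading out" here is multivoxel pattern analysis: a classifier is trained on activity patterns across early-visual-cortex voxels and tested on held-out trials, and above-chance accuracy means the sound category is encoded in those patterns. A toy sketch of that logic, with random placeholder data rather than the authors' pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

n_trials, n_voxels = 90, 300
X = np.random.randn(n_trials, n_voxels)  # V1 pattern per trial (placeholder)
y = np.repeat([0, 1, 2], 30)             # birds / traffic / people talking

scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

With real data, accuracy reliably above the one-in-three chance level is the evidence that category information reaches V1.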

Thursday, June 05, 2014

Social attention and our ventromedial prefrontal cortex.

Ralph Adolphs points to an interesting article by Wolf et al. showing that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. From Adolphs summary:

Failing to look at the eyes. Shown in each image are the regions of a face at which different groups of subjects look, as measured using eye-tracking. The hottest colours (red regions) denote those regions of the face where people look the most. Whereas this corresponds to the eye region of the face in healthy controls (far left), it is abnormal in certain clinical populations, including individuals with lesions of the vmPFC (top right) or amygdala (bottom right) and individuals with autism spectrum disorder (bottom centre). Top row: from Wolf et al. 2014. Bottom row: data from Michael Spezio, Daniel Kennedy, Ralph Adolphs. All images represent spatially smoothed data averaged across multiple fixations, multiple stimuli and multiple subjects within the indicated group.
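For those curious how such heat maps are built: fixation coordinates are accumulated into a two-dimensional histogram over the face image and then smoothed with a Gaussian, so red simply marks where fixations pile up. A generic sketch with hypothetical image size, smoothing width, and synthetic fixations, not the authors' pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fix_x, fix_y, width=512, height=512, sigma_px=25):
    counts, _, _ = np.histogram2d(fix_y, fix_x, bins=(height, width),
                                  range=[[0, height], [0, width]])
    smooth = gaussian_filter(counts, sigma=sigma_px)
    return smooth / smooth.max()          # 1.0 = most-fixated region

# Controls' fixations cluster near the eyes; vmPFC patients' do not.
xs = np.random.normal(256, 40, size=500)  # synthetic fixation x-positions
ys = np.random.normal(180, 30, size=500)  # synthetic y-positions (eye region)
heat = fixation_heatmap(xs, ys)
```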

Tuesday, May 13, 2014

GABA predicts time perception.

Individuals vary widely in their ability to discriminate sub-second visual intervals, and many cognitive training regimens include exercises designed to enhance detection of short (50-200 millisecond) intervals. Terhune et al. make the interesting observation that this variability correlates with resting state levels of the inhibitory transmitter GABA (gamma-aminobutyric acid) in our visual cortex, such that elevated GABA is associated with underestimating the duration of subsecond visual intervals:
Our perception of time constrains our experience of the world and exerts a pivotal influence over a myriad array of cognitive and motor functions. There is emerging evidence that the perceived duration of subsecond intervals is driven by sensory-specific neural activity in human and nonhuman animals, but the mechanisms underlying individual differences in time perception remain elusive. We tested the hypothesis that elevated visual cortex GABA impairs the coding of particular visual stimuli, resulting in a dampening of visual processing and concomitant positive time-order error (relative underestimation) in the perceived duration of subsecond visual intervals. Participants completed psychophysical tasks measuring visual interval discrimination and temporal reproduction and we measured in vivo resting state GABA in visual cortex using magnetic resonance spectroscopy. Time-order error selectively correlated with GABA concentrations in visual cortex, with elevated GABA associated with a rightward horizontal shift in psychometric functions, reflecting a positive time-order error (relative underestimation). These results demonstrate anatomical, neurochemical, and task specificity and suggest that visual cortex GABA contributes to individual differences in time perception.
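To make the "rightward horizontal shift" concrete: fitting a cumulative Gaussian to the proportion of "longer" responses yields a point of subjective equality (PSE), and a PSE pushed rightward of the true standard means the comparison had to last longer to feel equal, i.e. relative underestimation. A sketch with made-up data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

durations = np.array([500., 550, 600, 650, 700, 750, 800])  # comparison, ms
p_longer = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97])

def psychometric(x, pse, slope):
    # Cumulative Gaussian: PSE is the 50% point, slope its spread.
    return norm.cdf(x, loc=pse, scale=slope)

(pse, slope), _ = curve_fit(psychometric, durations, p_longer, p0=[650, 50])
print(f"PSE = {pse:.0f} ms; shift vs. 650 ms standard = {pse - 650:+.0f} ms")
```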

Thursday, April 17, 2014

Attributing awareness to oneself and others.

Kelley et al. make some fascinating observations. I pass on their statement of the significance of the work and their abstract:
Significance
What is the relationship between your own private awareness of events and the awareness that you intuitively attribute to the people around you? In this study, a region of the human cerebral cortex was active when people attributed sensory awareness to someone else. Furthermore, when that region of cortex was temporarily disrupted, the person’s own sensory awareness was disrupted. The findings suggest a fundamental connection between private awareness and social cognition.
Abstract
This study tested the possible relationship between reported visual awareness (“I see a visual stimulus in front of me”) and the social attribution of awareness to someone else (“That person is aware of an object next to him”). Subjects were tested in two steps. First, in an fMRI experiment, subjects were asked to attribute states of awareness to a cartoon face. Activity associated with this task was found bilaterally within the temporoparietal junction (TPJ) among other areas. Second, the TPJ was transiently disrupted using single-pulse transcranial magnetic stimulation (TMS). When the TMS was targeted to the same cortical sites that had become active during the social attribution task, the subjects showed symptoms of visual neglect in that their detection of visual stimuli was significantly affected. In control trials, when TMS was targeted to nearby cortical sites that had not become active during the social attribution task, no significant effect on visual detection was found. These results suggest that there may be at least some partial overlap in brain mechanisms that participate in the social attribution of sensory awareness to other people and in attributing sensory awareness to oneself.

Friday, April 04, 2014

Exercise protects retinas.

Gretchen Reynolds points to an article by Lawson et al. showing that exercise, which increases blood levels of brain-derived neurotrophic factor (BDNF), known to promote neuron health and growth, apparently also raises BDNF levels in the retina. In a mouse model of retinal degeneration (vaguely analogous to human macular degeneration), exercise that raises BDNF levels inhibits the retinal deterioration caused by brief (4 hour) exposure to very bright light.
Aerobic exercise is a common intervention for rehabilitation of motor, and more recently, cognitive function. While the underlying mechanisms are complex, BDNF may mediate much of the beneficial effects of exercise to these neurons. We studied the effects of aerobic exercise on retinal neurons undergoing degeneration. We exercised wild-type BALB/c mice on a treadmill (10 m/min for 1 h) for 5 d/week or placed control mice on static treadmills. After 2 weeks of exercise, mice were exposed to either toxic bright light (10,000 lux) for 4 h to induce photoreceptor degeneration or maintenance dim light (25 lux). Bright light caused 75% loss of both retinal function and photoreceptor numbers. However, exercised mice exposed to bright light had 2 times greater retinal function and photoreceptor nuclei than inactive mice exposed to bright light. In addition, exercise increased retinal BDNF protein levels by 20% compared with inactive mice. Systemic injections of a BDNF tropomyosin-receptor-kinase (TrkB) receptor antagonist reduced retinal function and photoreceptor nuclei counts in exercised mice to inactive levels, effectively blocking the protective effects seen with aerobic exercise. The data suggest that aerobic exercise is neuroprotective for retinal degeneration and that this effect is mediated by BDNF signaling.

Monday, March 10, 2014

Default mode network: the seat of literary creativity?

Wise et al. offer an article with the title of this post in Trends in Cognitive Sciences that comments on the brain areas that consistently become active in different subjects when spoken and written versions of a narrative are presented. They found
...a distribution of correlated activity in the midline posterior cortex and bilateral posterior inferior parietal cortex. This forms the posterior part of the so-called default mode network (DMN; Figure 1), a system classically associated with the introspective mind. It has been observed before, in another meta-analysis of language studies, one that set out to reveal the semantic system [ref]. The authors of that review, and others since (ref), have discussed how memories, semantic and personal, emotions, theory of mind, and no doubt many other mental functions are linked through the DMN. This would suggest that overlapping components of the DMN are functionally interconnected with many separate brain systems, including those for language and semantics, and indeed this is turning out to be the case (refs).

Spot the literary network: the default mode network (DMN) viewed from different angles (colors are intended for illustrative purposes only; data from [ref]). The medial posterior cingulate (PCC) and inferior posterior parietal components (IPP) were implicated in linguistic processing by Regev et al. [ref], but we suggest that due to the widespread connectivity of the DMN, these regions are related to higher order ‘literary’ processing.

Tuesday, February 25, 2014

Restoring vision to blind mice (and humans with RP or AMD?) with a photoswitch.

Age-related macular degeneration (AMD) and retinitis pigmentosa (RP) affect millions of people around the world and in their advanced stages lead to blindness. Studies in mouse models of these diseases have shown some promise in restoring vision but are either invasive (i.e., implantation of electronic chips) or irreversible (i.e., transplantation of photoreceptor progenitors or viral expression of optogenetic tools). Tochitsky et al. (click on the link to see the authors' fancy PR video of the work) have now performed intraocular injection of a synthetic small molecule called DENAQ, a red-shifted K+ channel photoswitch that exhibits trans to cis photoisomerization with visible light (450–550 nm) and relaxes rapidly to the trans configuration in the dark. A single injection photosensitizes blind retinas with no photoreceptors to daylight-intensity white light for a period of days with no toxicity. It restores light-elicited behavior and enables visual learning in blind mice, making it a prime drug candidate for vision restoration in patients with end-stage RP and AMD.

Figure of DENAQ from Mourot et al., ACS Chem. Neurosci., 2011, 2 (9), pp 536–543.

Thursday, February 13, 2014

Our pupil dilation reflects decision related choice.

Pupil size is known to be increased by effortful decisions. The current supposition is that decision-related pupil dilation tracks the activity of neuromodulatory systems of the brainstem—in particular, the noradrenergic locus coeruleus and, possibly, the cholinergic basal forebrain system. These neuromodulatory systems activate briefly during perceptual decisions such as visual target detection.

de Gee et al. now provide evidence that pupil dilation reflects not the termination of the decision process but rather events during the course of decision formation. The amplitude of pupil dilation is bigger during decision formation for yes than for no choices, and it is strongest in conservative subjects choosing yes against their bias. Imagine what advertisers or merchandizers training cameras on their customers might be able to do with this!
A number of studies have shown that pupil size increases transiently during effortful decisions. These decision-related changes in pupil size are mediated by central neuromodulatory systems, which also influence the internal state of brain regions engaged in decision making. It has been proposed that pupil-linked neuromodulatory systems are activated by the termination of decision processes, and, consequently, that these systems primarily affect the postdecisional brain state. Here, we present pupil results that run contrary to this proposal, suggesting an important intradecisional role. We measured pupil size while subjects formed protracted decisions about the presence or absence (“yes” vs. “no”) of a visual contrast signal embedded in dynamic noise. Linear systems analysis revealed that the pupil was significantly driven by a sustained input throughout the course of the decision formation. This sustained component was larger than the transient component during the final choice (indicated by button press). The overall amplitude of pupil dilation during decision formation was bigger before yes than no choices, irrespective of the physical presence of the target signal. Remarkably, the magnitude of this pupil choice effect (yes > no) reflected the individual criterion: it was strongest in conservative subjects choosing yes against their bias. We conclude that the central neuromodulatory systems controlling pupil size are continuously engaged during decision formation in a way that reveals how the upcoming choice relates to the decision maker’s attitude. Changes in brain state seem to interact with biased decision making in the face of uncertainty.
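The "linear systems analysis" treats the measured pupil trace as known input time courses, a sustained boxcar spanning decision formation plus a transient at the button press, each convolved with a pupil impulse response function (IRF), with amplitudes estimated by regression. A schematic sketch; the sampling rate is assumed, and the IRF is the commonly used Hoeks and Levelt form:

```python
import numpy as np

fs = 50                                  # assumed samples per second
t = np.arange(0, 6, 1 / fs)              # one 6-s trial

def pupil_irf(tt, tmax=0.93, n=10.1):    # Hoeks & Levelt (1993) form
    h = tt**n * np.exp(-n * tt / tmax)
    return h / h.max()

sustained = ((t > 0.5) & (t < 4.0)).astype(float)  # decision formation
transient = np.zeros_like(t)
transient[int(4.0 * fs)] = 1.0                     # choice / button press

irf = pupil_irf(np.arange(0, 4, 1 / fs))
X = np.column_stack([np.convolve(reg, irf)[:t.size]
                     for reg in (sustained, transient)])

pupil = X @ np.array([0.6, 0.3]) + 0.05 * np.random.randn(t.size)  # toy data
betas, *_ = np.linalg.lstsq(X, pupil, rcond=None)
print(f"sustained amplitude: {betas[0]:.2f}, transient: {betas[1]:.2f}")
```

On real data, a reliably nonzero sustained amplitude is what argues for an intradecisional, rather than purely postdecisional, drive on the pupil.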

Wednesday, January 29, 2014

Enriched environments enhance adult brain plasticity.

I learned much of my neuroscience at tea time in Hubel and Wiesel's laboratory at Harvard Medical School during my post-doc days in the 1960s, as we discussed their discovery of critical periods during the development of ocular dominance columns in the visual cortex, and the apparent immutability of the adult pathways, once formed. Everything now has changed: we know our brains maintain the ability to make new nerve cells and connections throughout life. Greifzu et al. add a new chapter to the plasticity story in their recent work showing how important enriched environments are in maintaining a younger brain that has not been locked into place by the increased inhibitory interactions characteristic of adult brains. Specifically, they show that ocular dominance plasticity persists in adult mice housed in enriched, but not ordinary cage, environments, allowing recovery from stroke-induced damage or monocular deprivation.
Experimental animals are usually raised in small, so-called standard cages, depriving them of numerous natural stimuli. We show that raising mice in an enriched environment, allowing enhanced physical, social, and cognitive stimulation, preserved a juvenile brain into adulthood. Enrichment also rejuvenated the visual cortex after extended periods of standard cage rearing and protected adult mice from stroke-induced impairments of cortical plasticity. Because the local inhibitory tone in the visual cortex of adult enriched mice was not only significantly reduced compared with nonenriched animals but at juvenile levels, the plasticity-promoting effect of enrichment is most likely mediated by preserving low juvenile levels of inhibition into adulthood and thereby, extending sensitive phases of enhanced neuronal plasticity into an older age.

Wednesday, September 11, 2013

Language can boost unseen objects into visual awareness.

From Lupyan et al:
Linguistic labels (e.g., “chair”) seem to activate visual properties of the objects to which they refer. Here we investigated whether language-based activation of visual representations can affect the ability to simply detect the presence of an object. We used continuous flash suppression to suppress visual awareness of familiar objects while they were continuously presented to one eye. Participants made simple detection decisions, indicating whether they saw any image. Hearing a verbal label before the simple detection task changed performance relative to an uninformative cue baseline. Valid labels improved performance relative to no-label baseline trials. Invalid labels decreased performance. Labels affected both sensitivity (d′) and response times. In addition, we found that the effectiveness of labels varied predictably as a function of the match between the shape of the stimulus and the shape denoted by the label. Together, the findings suggest that facilitated detection of invisible objects due to language occurs at a perceptual rather than semantic locus. We hypothesize that when information associated with verbal labels matches stimulus-driven activity, language can provide a boost to perception, propelling an otherwise invisible image into awareness.
A Methods note:
Continuous flash suppression was implemented using anaglyph images: participants wore red/cyan glasses and viewed stereograms containing a high-contrast red mask (∼9° × 9°) and—on object-present trials—a superimposed lower-contrast cyan object (Fig. 1A). Only the object was visible to the right eye and only the mask to the left. The dynamic mask comprised curved line segments, with frames randomly alternating at 10 Hz. Because similarity in spatial properties between stimuli and masks is important for effective suppression of stimuli (72), line segments were used to better mask the curvilinear character of the objects.

(A) Stimulus creation using continuous flash suppression. (B) Basic procedure of experiments 1 and 2.
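Since the abstract reports that labels affected sensitivity (d′), here is the standard signal-detection computation behind that measure, with made-up hit and false-alarm rates:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, n_trials=100):
    # Clip rates away from 0 and 1 so the z-transform stays finite.
    lo, hi = 0.5 / n_trials, 1 - 0.5 / n_trials
    h = min(max(hit_rate, lo), hi)
    f = min(max(fa_rate, lo), hi)
    return norm.ppf(h) - norm.ppf(f)   # separation in z-units

print(d_prime(0.82, 0.20))  # hypothetical valid-label trials
print(d_prime(0.65, 0.22))  # hypothetical no-label baseline
```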

Monday, July 08, 2013

How our brain cortex receives information about the world.

This post is for that subset of MindBlog readers interested in details of brain wiring. Constantinople and Bruno have upset a basic dogma taught to budding neuroscientists (like myself, in the 1960s) - that, from the Science editor's summary:
...there is a “canonical microcircuit” in the neocortex, in which information is transformed as excitation spreads serially along connections from thalamus, to cortical layer 4, then to layers 2/3, to layers 5/6, and finally to other brain regions. Each cortical layer is thought to transform sensory signals to extract behaviorally relevant information. Now, from Constantinople and Bruno...In vivo whole-cell recordings revealed that sensory stimuli activate neurons in deep cortical layers simultaneously to those in layer 4 and that a large number of thalamic neurons converge onto deep pyramidal neurons, possibly allowing sensory information to completely bypass upper layers. Temporary blockade of layer 4 revealed that synaptic input to deep cortical layers derived entirely from the thalamus and not at all from upper cortical layers. This thalamically derived synaptic input reliably drove pyramidal neurons in layer 5 to discharge action potentials in the living animal. These deep layer neurons project to numerous higher-order brain regions and could directly mediate behavior.
Here is a summary graphic from the paper:


(A) In the conventional serial model, sensory information is transformed as excitation spreads from thalamus to L4 to L2/3 to L5/6 along the densest axonal pathways (green). (B) In the bistratified model, thalamus copies sensory information to both an upper stratum (L4 and L2/3) and a lower stratum (L5/6), which differ in coding properties and downstream targets.

Friday, July 05, 2013

Eye widening in fear - sensory and social benefits.

An interesting bit from Lee et al. Their abstract:
Facial expressions may have originated from a primitive sensory regulatory function that was then co-opted and further shaped for the purposes of social utility. In the research reported here, we tested such a hypothesis by investigating the functional origins of fear expressions for both the expresser and the observer. We first found that fear-based eye widening enhanced target discrimination in the available visual periphery of the expresser by 9.4%. We then found that fear-based eye widening enhanced observers’ discrimination of expressers’ gaze direction and facilitated observers’ responses when locating eccentric targets. We present evidence that this benefit was driven by neither the perceived emotion nor attention but, rather, by an enhanced physical signal originating from greater exposure of the iris and sclera. These results highlight the coevolution of sensory and social regulatory functions of emotional expressions by showing that eye widening serves to enhance processing of important environmental events in the visual fields of both expresser and observer.

Tuesday, February 26, 2013

How ambient light might influence our mood.

The visual pigment melanopsin in the intrinsically photosensitive retinal ganglion cells (ipRGCs) of our inner retinas (two cell layers away from our rods and cones) detects ambient light and sends this information to brain areas that regulate circadian rhythms and mood. LeGates et al. have now found that inappropriately timed light exposure that does not alter normal sleep architecture or the circadian rhythmicity of body temperature and general activity can still cause impaired learning and depression-like behaviors in mice. In mice genetically altered to lack ipRGCs, the depressive-like behaviors and learning deficits are not observed. If similar mechanisms operate in us humans, this suggests a potential mechanism by which abnormal ambient light schedules — caused by shift work or simply switching on an artificial light — might influence mood and learning. Here is their abstract:
The daily solar cycle allows organisms to synchronize their circadian rhythms and sleep–wake cycles to the correct temporal niche. Changes in day-length, shift-work, and transmeridian travel lead to mood alterations and cognitive function deficits. Sleep deprivation and circadian disruption underlie mood and cognitive disorders associated with irregular light schedules. Whether irregular light schedules directly affect mood and cognitive functions in the context of normal sleep and circadian rhythms remains unclear. Here we show, using an aberrant light cycle that neither changes the amount and architecture of sleep nor causes changes in the circadian timing system, that light directly regulates mood-related behaviours and cognitive functions in mice. Animals exposed to the aberrant light cycle maintain daily corticosterone rhythms, but the overall levels of corticosterone are increased. Despite normal circadian and sleep structures, these animals show increased depression-like behaviours and impaired hippocampal long-term potentiation and learning. Administration of the antidepressant drugs fluoxetine or desipramine restores learning in mice exposed to the aberrant light cycle, suggesting that the mood deficit precedes the learning impairments. To determine the retinal circuits underlying this impairment of mood and learning, we examined the behavioural consequences of this light cycle in animals that lack intrinsically photosensitive retinal ganglion cells. In these animals, the aberrant light cycle does not impair mood and learning, despite the presence of the conventional retinal ganglion cells and the ability of these animals to detect light for image formation. These findings demonstrate the ability of light to influence cognitive and mood functions directly through intrinsically photosensitive retinal ganglion cells.

Thursday, February 07, 2013

The cocktail party effect is enhanced by vision.

Golumbic et al. show that watching someone we are trying to hear and understand in a crowded noisy setting sharpens up the auditory processing in our brains that suppresses unwanted sounds from our surround:
Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the “Cocktail Party” problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed.
These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.
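"Tracking the temporal speech envelope" is typically quantified by extracting the slow amplitude envelope of the speech waveform via the Hilbert transform and correlating it with the recorded neural signal. A generic sketch with synthetic data, not the authors' MEG pipeline; the sampling rate and filter cutoff are assumptions:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
speech = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 4 * t))  # 4 Hz AM

envelope = np.abs(hilbert(speech))         # instantaneous amplitude
b, a = butter(2, 10 / (fs / 2))            # keep only the slow (<10 Hz) part
envelope = filtfilt(b, a, envelope)

neural = envelope + 2.0 * np.random.randn(t.size)   # toy "MEG" response
r = np.corrcoef(envelope, neural)[0, 1]
print(f"envelope-tracking correlation: r = {r:.2f}")
```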