Showing posts with label vision.

Thursday, February 22, 2018

When our eyes move, our eardrums move.

Interesting stuff from Gruters et al.:

Significance
The peripheral hearing system contains several motor mechanisms that allow the brain to modify the auditory transduction process. Movements or tensioning of either the middle ear muscles or the outer hair cells modifies eardrum motion, producing sounds that can be detected by a microphone placed in the ear canal (e.g., as otoacoustic emissions). Here, we report a form of eardrum motion produced by the brain via these systems: oscillations synchronized with and covarying with the direction and amplitude of saccades. These observations suggest that a vision-related process modulates the first stage of hearing. In particular, these eye movement-related eardrum oscillations may help the brain connect sights and sounds despite changes in the spatial relationship between the eyes and the ears.
Abstract
Interactions between sensory pathways such as the visual and auditory systems are known to occur in the brain, but where they first occur is uncertain. Here, we show a multimodal interaction evident at the eardrum. Ear canal microphone measurements in humans (n = 19 ears in 16 subjects) and monkeys (n = 5 ears in three subjects) performing a saccadic eye movement task to visual targets indicated that the eardrum moves in conjunction with the eye movement. The eardrum motion was oscillatory and began as early as 10 ms before saccade onset in humans or with saccade onset in monkeys. These eardrum movements, which we dub eye movement-related eardrum oscillations (EMREOs), occurred in the absence of a sound stimulus. The amplitude and phase of the EMREOs depended on the direction and horizontal amplitude of the saccade. They lasted throughout the saccade and well into subsequent periods of steady fixation. We discuss the possibility that the mechanisms underlying EMREOs create eye movement-related binaural cues that may aid the brain in evaluating the relationship between visual and auditory stimulus locations as the eyes move.
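The basic analysis here is conceptually simple: record the ear-canal microphone continuously, align the traces to saccade onset, and average separately by saccade direction so that the direction-dependent oscillation does not cancel out. Here is a minimal Python sketch of that saccade-locked averaging on synthetic data; the sampling rate, oscillation frequency, and amplitudes are my own illustrative assumptions, not values from the paper:

```python
import numpy as np

# Synthetic ear-canal microphone traces aligned to saccade onset.
# All numbers are illustrative assumptions, not values from Gruters et al.
fs = 2000                                # sampling rate (Hz)
t = np.arange(-0.05, 0.20, 1 / fs)       # -50 ms to +200 ms around saccade onset
n_trials = 300

rng = np.random.default_rng(0)
# A small oscillation starting ~10 ms before saccade onset, buried in noise;
# its sign flips with saccade direction (leftward vs rightward).
direction = rng.choice([-1, 1], size=n_trials)
osc = 0.05 * np.sin(2 * np.pi * 30 * (t + 0.01)) * (t > -0.01)
trials = direction[:, None] * osc[None, :] + rng.normal(0, 1, (n_trials, len(t)))

# Saccade-locked averaging, done separately per direction so the
# direction-dependent oscillation does not average away.
left_avg = trials[direction == -1].mean(axis=0)
right_avg = trials[direction == 1].mean(axis=0)

print("peak-to-peak, leftward saccades :", np.ptp(left_avg))
print("peak-to-peak, rightward saccades:", np.ptp(right_avg))
```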

Thursday, January 04, 2018

The temporal organization of perception.

Ronconi et al. do interesting work suggesting that whether we perceive two visual stimuli as separate events or as a single event depends on a precise relationship between specific temporal window durations and specific brain oscillations measured by EEG (alpha oscillations (8–10 Hz) and theta oscillations (6–7 Hz)):
Incoming sensory input is condensed by our perceptual system to optimally represent and store information. In the temporal domain, this process has been described in terms of temporal windows (TWs) of integration/segregation, in which the phase of ongoing neural oscillations determines whether two stimuli are integrated into a single percept or segregated into separate events. However, TWs can vary substantially, raising the question of whether different TWs map onto unique oscillations or, rather, reflect a single, general fluctuation in cortical excitability (e.g., in the alpha band). We used multivariate decoding of electroencephalography (EEG) data to investigate perception of stimuli that either repeated in the same location (two-flash fusion) or moved in space (apparent motion). By manipulating the interstimulus interval (ISI), we created bistable stimuli that caused subjects to perceive either integration (fusion/apparent motion) or segregation (two unrelated flashes). Training a classifier searchlight on the whole channels/frequencies/times space, we found that the perceptual outcome (integration vs. segregation) could be reliably decoded from the phase of prestimulus oscillations in right parieto-occipital channels. The highest decoding accuracy for the two-flash fusion task (ISI = 40 ms) was evident in the phase of alpha oscillations (8–10 Hz), while the highest decoding accuracy for the apparent motion task (ISI = 120 ms) was evident in the phase of theta oscillations (6–7 Hz). These results reveal a precise relationship between specific TW durations and specific oscillations. Such oscillations at different frequencies may provide a hierarchical framework for the temporal organization of perception.
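For readers curious how one decodes a percept from prestimulus phase, the usual recipe is: band-pass the EEG in the frequency band of interest, take the instantaneous phase via a Hilbert transform, turn the phase into cosine/sine features, and cross-validate a classifier against the perceptual reports. Here is a hedged sketch on synthetic trials; the band limits, classifier, and trial counts are my assumptions, and the actual study used a whole-scalp searchlight rather than a single channel:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_trials, n_samp = 250, 400, 250          # 1 s of prestimulus "EEG" per trial
alpha_phase = rng.uniform(-np.pi, np.pi, n_trials)
# Synthetic rule: integration vs segregation depends (noisily) on alpha phase.
label = (np.cos(alpha_phase) + rng.normal(0, 0.8, n_trials)) > 0

t = np.arange(n_samp) / fs
eeg = np.sin(2 * np.pi * 9 * t[None, :] + alpha_phase[:, None]) \
      + rng.normal(0, 1.0, (n_trials, n_samp))

# Band-pass 8-10 Hz, then take the instantaneous phase at the last prestimulus sample.
b, a = butter(3, [8 / (fs / 2), 10 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, eeg, axis=1)
phase = np.angle(hilbert(filtered, axis=1))[:, -1]

X = np.column_stack([np.cos(phase), np.sin(phase)])   # circular -> linear features
clf = LogisticRegression()
print("decoding accuracy:", cross_val_score(clf, X, label, cv=5).mean())
```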

Monday, December 18, 2017

Positive stimuli blur time.

From Roberts et al.:
Anecdotal reports that time “flies by” or “slows down” during emotional events are supported by evidence that the motivational relevance of stimuli influences subsequent duration judgments. Yet it is unknown whether the subjective quality of events as they unfold is altered by motivational relevance. In a novel paradigm, we measured the subjective experience of moment-to-moment visual perception. Participants judged the temporal smoothness of high-approach positive images (desserts), negative images (e.g., of bodily mutilation), and neutral images (commonplace scenes) as they faded to black. Results revealed approach-motivated blurring, such that positive stimuli were judged as smoother and negative stimuli as choppier relative to neutral stimuli. Participants’ ratings of approach motivation predicted perceived fade smoothness after we controlled for low-level stimulus features. Electrophysiological data indicated that approach-motivated blurring modulated relatively rapid perceptual activation. These results indicate that stimulus value influences subjective temporal perceptual acuity; approach-motivating stimuli elicit perception of a “blurred” frame rate characteristic of speeded motion.

Monday, December 04, 2017

A mind reading machine?

Not quite, but Matthew Hutson points to work by Wen et al. using an artificial neural network to categorize fMRI signals from subjects watching different categories of images. The algorithm could predict with about 50% accuracy which of 15 classes of visual object a subject was watching. His description:
Artificial intelligence has taken us one baby step closer to the mind-reading machines of science fiction. Researchers have developed “deep learning” algorithms—roughly modeled on the human brain—to decipher, you guessed it, the human brain. First, they built a model of how the brain encodes information. As three women spent hours viewing hundreds of short videos, a functional MRI machine measured signals of activity in the visual cortex and elsewhere. A popular type of artificial neural network used for image processing learned to associate video images with brain activity. As the women watched additional clips, the algorithm’s predicted activity correlated with actual activity in a dozen brain regions. It also helped the scientists visualize which features each area of the cortex was processing. Another network decoded neural signals: Based on a participant’s brain activity, it could predict with about 50% accuracy what she was watching (by selecting one of 15 categories including bird, airplane, and exercise). If the network had trained on data from a different woman’s brain, it could still categorize the image with about 25% accuracy, the researchers report this month in Cerebral Cortex. The network could also partially reconstruct what a participant saw, turning brain activity into pixels, but the resulting images were little more than white blobs. The researchers hope their work will lead to the reconstruction of mental imagery, which uses some of the same brain circuits as visual processing. Translating from the mind’s eye into bits could allow people to express vivid thoughts or dreams to computers or to other people without words or mouse clicks, and could help those with strokes who have no other way to communicate.
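The "about 50% correct out of 15 categories" figure comes from a decoder that maps brain-activity patterns to category labels. As a toy stand-in for that step, here is a simple multi-class decoder on simulated voxel patterns; the feature counts, noise level, and classifier are my assumptions (the actual study decoded from deep-network representations of the videos):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_categories, trials_per_cat, n_voxels = 15, 40, 200

# Each category gets a weak, fixed voxel pattern; single trials add noise.
templates = rng.normal(0, 1, (n_categories, n_voxels))
y = np.repeat(np.arange(n_categories), trials_per_cat)
X = 0.4 * templates[y] + rng.normal(0, 1, (len(y), n_voxels))

clf = LogisticRegression(max_iter=2000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}  (chance = {1 / n_categories:.2f})")
```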

Wednesday, August 30, 2017

An essay on the real problem of consciousness.

For those of you who are consciousness mavens, I would recommend having a glance at Anil Seth’s essay, which gives a clear-headed description of some current ideas about what consciousness is. He summarizes the model of consciousness as an ensemble of predictive perceptions. Clips from his essay:
The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
...instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.
More recently, in my lab, we’ve been probing the predictive mechanisms of conscious perception in more detail. In several experiments...we’ve found that people consciously see what they expect, rather than what violates their expectations. We’ve also discovered that the brain imposes its perceptual predictions at preferred points (or phases) within the so-called ‘alpha rhythm’, which is an oscillation in the EEG signal at about 10 Hz that is especially prominent over the visual areas of the brain. This is exciting because it gives us a glimpse of how the brain might actually implement something like predictive perception, and because it sheds new light on a well-known phenomenon of brain activity, the alpha rhythm, whose function so far has remained elusive.
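To make the "minimisation of prediction error" idea in the first clip a bit more concrete, here is a toy one-level predictive-coding loop in Python: a top-down estimate is compared with the sensory input, and the resulting error nudges the estimate. It is only a generic illustration of the scheme (the precisions, learning rate, and single level are my assumptions), not Seth's model or any published implementation:

```python
import numpy as np

# Minimal one-level predictive-coding loop: the brain's estimate mu is
# repeatedly updated to reduce the prediction error (input - prediction),
# balanced against a prior expectation. Purely illustrative.
sensory_input = 3.0      # what actually arrives at the senses
prior_mean = 0.0         # what the brain expected beforehand
sensory_precision = 1.0  # confidence in the input
prior_precision = 0.5    # confidence in the prior
learning_rate = 0.1

mu = prior_mean
for step in range(100):
    prediction_error = sensory_input - mu          # bottom-up error signal
    prior_error = prior_mean - mu                  # pull back toward the prior
    mu += learning_rate * (sensory_precision * prediction_error
                           + prior_precision * prior_error)

# mu converges to the precision-weighted compromise between prior and input.
print(mu, (sensory_precision * sensory_input + prior_precision * prior_mean)
          / (sensory_precision + prior_precision))
```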

Tuesday, August 29, 2017

A magic bullet to restore our brain's plasticity?

No...not yet. But work by Jenks et al. showing that juvenile-like plasticity is restored in the visual cortex of adult mice by acute viral expression of the neuronal protein Arc makes one wonder if a similar trick might eventually be tried in adult human brains...

Significance
Neuronal plasticity peaks early in life during critical periods and normally declines with age, but the molecular changes that underlie this decline are not fully understood. Using the mouse visual cortex as a model, we found that activity-dependent expression of the neuronal protein Arc peaks early in life, and that loss of activity-dependent Arc expression parallels loss of synaptic plasticity in the visual cortex. Genetic overexpression of Arc prolongs the critical period of visual cortex plasticity, and acute viral expression of Arc in adult mice can restore juvenile-like plasticity. These findings provide a mechanism for the loss of excitatory plasticity with age, and suggest that Arc may be an exciting therapeutic target for modulation of the malleability of neuronal circuits.
Abstract
The molecular basis for the decline in experience-dependent neural plasticity over age remains poorly understood. In visual cortex, the robust plasticity induced in juvenile mice by brief monocular deprivation during the critical period is abrogated by genetic deletion of Arc, an activity-dependent regulator of excitatory synaptic modification. Here, we report that augmenting Arc expression in adult mice prolongs juvenile-like plasticity in visual cortex, as assessed by recordings of ocular dominance (OD) plasticity in vivo. A distinguishing characteristic of juvenile OD plasticity is the weakening of deprived-eye responses, believed to be accounted for by the mechanisms of homosynaptic long-term depression (LTD). Accordingly, we also found increased LTD in visual cortex of adult mice with augmented Arc expression and impaired LTD in visual cortex of juvenile mice that lack Arc or have been treated in vivo with a protein synthesis inhibitor. Further, we found that although activity-dependent expression of Arc mRNA does not change with age, expression of Arc protein is maximal during the critical period and declines in adulthood. Finally, we show that acute augmentation of Arc expression in wild-type adult mouse visual cortex is sufficient to restore juvenile-like plasticity. Together, our findings suggest a unifying molecular explanation for the age- and activity-dependent modulation of synaptic sensitivity to deprivation.

Friday, June 09, 2017

Cracking the brain's code for facial identity.

Chang and Tsao appear to have figured out how facial identity is represented in the brain:

Highlights
•Facial images can be linearly reconstructed using responses of ∼200 face cells 
•Face cells display flat tuning along dimensions orthogonal to the axis being coded 
•The axis model is more efficient, robust, and flexible than the exemplar model 
•Face patches ML/MF and AM carry complementary information about faces
Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.
From their introduction, their rationale for where they recorded in the inferior temporal cortex (IT):
To explore the geometry of tuning of high-level sensory neurons in a high-dimensional space, we recorded responses of cells in face patches middle lateral (ML)/middle fundus (MF) and anterior medial (AM) to a large set of realistic faces parameterized by 50 dimensions. We chose to record in ML/MF and AM because previous functional and anatomical experiments have demonstrated a hierarchical relationship between ML/MF and AM and suggest that AM is the final output stage of IT face processing. In particular, a population of sparse cells has been found in AM, which appear to encode exemplars for specific individuals, as they respond to faces of only a few specific individuals, regardless of head orientation. These cells encode the most explicit concept of facial identity across the entire face patch system, and understanding them seems crucial for gaining a full understanding of the neural code for faces in IT cortex.
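The axis model in the summary is easy to simulate: give each cell a preferred axis in a 50-dimensional face space, make its firing rate the projection of the face onto that axis, and then linearly decode faces from the population response. Here is a sketch under those assumptions; the 50 dimensions and ~200 cells echo the numbers quoted above, but the noise level and decoder are my own choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n_dims, n_cells, n_faces = 50, 200, 500

# Axis model: each cell fires in proportion to the projection of the face
# (a point in 50-d face space) onto that cell's preferred axis, plus noise.
axes = rng.normal(0, 1, (n_cells, n_dims))
faces = rng.normal(0, 1, (n_faces, n_dims))
rates = faces @ axes.T + rng.normal(0, 0.5, (n_faces, n_cells))

# Linear reconstruction of face-space coordinates from the population response
# (least-squares decoder fit on half the faces, tested on the other half).
train, test = slice(0, 250), slice(250, None)
decoder, *_ = np.linalg.lstsq(rates[train], faces[train], rcond=None)
reconstructed = rates[test] @ decoder

corr = np.corrcoef(reconstructed.ravel(), faces[test].ravel())[0, 1]
print("reconstruction correlation:", round(corr, 3))
```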

Monday, June 05, 2017

Visual category selectivity is innate.

Interesting work from Hurk et al., who find that the brains of people blind since birth show category specific activity patterns for faces, scenes, body parts, and objects, meaning that this functional brain organization does not depend on visual input during development.

Significance
The brain’s ability to recognize visual categories is guided by category-selective ventral-temporal cortex (VTC). Whether visual experience is required for the functional organization of VTC into distinct functional subregions remains unknown, hampering our understanding of the mechanisms that drive category recognition. Here, we demonstrate that VTC in individuals who were blind since birth shows robust discriminatory responses to natural sounds representing different categories (faces, scenes, body parts, and objects). These activity patterns in the blind also could predict successfully which category was visually perceived by controls. The functional cortical layout in blind individuals showed remarkable similarity to the well-documented layout observed in sighted controls, suggesting that visual functional brain organization does not rely on visual input.
Abstract
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience.
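The crucial analytic step is cross-decoding: train a pattern classifier on one group and modality and test it on the other. Here is a toy version of that generalization test on simulated data; the shared category structure, voxel counts, and noise levels are invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_cat, n_trials, n_voxels = 4, 60, 150   # faces, scenes, body parts, objects

# Assume blind (auditory) and sighted (visual) data share the same underlying
# category patterns in VTC, each corrupted by independent noise.
shared = rng.normal(0, 1, (n_cat, n_voxels))
y = np.repeat(np.arange(n_cat), n_trials)
blind_auditory = 0.5 * shared[y] + rng.normal(0, 1, (len(y), n_voxels))
sighted_visual = 0.5 * shared[y] + rng.normal(0, 1, (len(y), n_voxels))

# Train on blind subjects' auditory patterns, test on sighted visual patterns.
clf = LogisticRegression(max_iter=2000).fit(blind_auditory, y)
acc = clf.score(sighted_visual, y)
print(f"cross-modal decoding accuracy: {acc:.2f} (chance = {1 / n_cat:.2f})")
```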

Thursday, June 01, 2017

The wisdom of crowds for visual search.

Juni and Eckstein show that perceptual decisions about large image data sets (as from medical and geospatial imaging) that are made by a group are more likely to be correct if group members' confidences are averaged than if a simple majority vote is taken:

Significance
Simple majority voting is a widespread, effective mechanism to exploit the wisdom of crowds. We explored scenarios where, from decision to decision, a varying minority of group members often has increased information relative to the majority of the group. We show how this happens for visual search with large image data and how the resulting pooling benefits are greater than previously thought based on simpler perceptual tasks. Furthermore, we show how simple majority voting obtains inferior benefits for such scenarios relative to averaging people’s confidences. These findings could apply to life-critical medical and geospatial imaging decisions that require searching large data volumes and, more generally, to any decision-making task for which the minority of group members with high expertise varies across decisions.
Abstract
Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision.
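The difference between majority voting and confidence averaging is easy to see in a toy signal-detection simulation in which, on each trial, a random minority of observers happens to have high detectability (as when only some observers fixate near the target). This sketch is my own illustration of that intuition, not the SDT-MIX model from the paper; all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_observers = 20000, 7
d_high, d_low = 2.5, 0.5       # assumed detectabilities for fixators vs non-fixators

truth = rng.integers(0, 2, n_trials)            # target absent (0) or present (1)
# On each trial a random minority of observers happens to have high information.
informed = rng.random((n_trials, n_observers)) < 0.3
dprime = np.where(informed, d_high, d_low)
# Each observer's internal evidence: signal shifted by d' plus unit noise.
evidence = dprime * truth[:, None] + rng.normal(0, 1, (n_trials, n_observers))

criterion = dprime / 2                           # unbiased individual criterion
votes = evidence > criterion
majority = votes.sum(axis=1) > n_observers / 2
averaged = (evidence - criterion).mean(axis=1) > 0   # pooled confidence

print("majority vote accuracy       :", (majority == truth).mean())
print("confidence averaging accuracy:", (averaged == truth).mean())
```

Averaging wins because the binarized vote throws away how strongly the well-informed minority saw the target, whereas the pooled confidence keeps that graded information.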

Friday, May 19, 2017

Our brains have an innate knowledge of tools.

From Striem-Amit et al.:

Significance
To what extent is brain organization driven by innate genetic constraints, and how dependent is it on individual experience during early development? We show that an area of the visual system that processes both hands and tools can develop without sensorimotor experience in manipulating tools with one’s hands. People born without hands show typical hand–tool conjoined activity, in a region connected to the action network. Taken with findings from studies with people born blind, who also show intact hand and tool specialization in the visual system, these findings suggest that no specific sensory or motor experience is crucial for domain-specific organization of visual cortex. Instead, the results suggest that functional brain organization is largely innately determined.
Abstract
The visual occipito-temporal cortex is composed of several distinct regions specialized in the identification of different object kinds such as tools and bodies. Its organization appears to reflect not only the visual characteristics of the inputs but also the behavior that can be achieved with them. For example, there are spatially overlapping responses for viewing hands and tools, which is likely due to their common role in object-directed actions. How dependent is occipito-temporal cortex organization on object manipulation and motor experience? To investigate this question, we studied five individuals born without hands (individuals with upper limb dysplasia), who use tools with their feet. Using fMRI, we found the typical selective hand–tool overlap (HTO) not only in typically developed control participants but also in four of the five dysplasics. Functional connectivity of the HTO in the dysplasics also showed a largely similar pattern as in the controls. The preservation of functional organization in the dysplasics suggests that occipito-temporal cortex specialization is driven largely by inherited connectivity constraints that do not require sensorimotor experience. These findings complement discoveries of intact functional organization of the occipito-temporal cortex in people born blind, supporting an organization largely independent of any one specific sensory or motor experience.

Tuesday, April 25, 2017

Reading what the mind thinks from how the eye sees.

Expressive eye widening (as in fear) and eye narrowing (as in disgust) are associated with opposing optical consequences and serve opposing perceptual functions. Lee and Anderson suggest that the opposing effects of eye widening and narrowing on the expresser’s visual perception have been socially co-opted to denote opposing mental states of sensitivity and discrimination, respectively, such that opposing complex mental states may originate from this simple perceptual opposition. Their abstract:
Human eyes convey a remarkable variety of complex social and emotional information. However, it is unknown which physical eye features convey mental states and how that came about. In the current experiments, we tested the hypothesis that the receiver’s perception of mental states is grounded in expressive eye appearance that serves an optical function for the sender. Specifically, opposing features of eye widening versus eye narrowing that regulate sensitivity versus discrimination not only conveyed their associated basic emotions (e.g., fear vs. disgust, respectively) but also conveyed opposing clusters of complex mental states that communicate sensitivity versus discrimination (e.g., awe vs. suspicion). This sensitivity-discrimination dimension accounted for the majority of variance in perceived mental states (61.7%). Further, these eye features remained diagnostic of these complex mental states even in the context of competing information from the lower face. These results demonstrate that how humans read complex mental states may be derived from a basic optical principle of how people see.
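The report that a single sensitivity-discrimination dimension accounts for 61.7% of the variance in perceived mental states is the kind of number that falls out of a principal-component analysis of a stimulus-by-rating matrix. Here is a generic sketch of such an analysis on made-up ratings; the matrix size and the single dominant latent dimension are assumptions, not the authors' data or method:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n_stimuli, n_states = 60, 50     # eye-region images x complex mental-state ratings

# Fake rating matrix dominated by one latent dimension (e.g., widening-narrowing).
latent = rng.normal(0, 1, n_stimuli)
loadings = rng.normal(0, 1, n_states)
ratings = np.outer(latent, loadings) + rng.normal(0, 0.8, (n_stimuli, n_states))

pca = PCA().fit(ratings)
print("variance explained by 1st component:",
      round(pca.explained_variance_ratio_[0], 3))
```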

Monday, January 30, 2017

The uniformity illusion.

Otten et al. investigate a visual illusion in which the accurate and detailed vision in the center of our visual field, accomplished by the fovea, influences our perception of peripheral stimuli, making them seem more similar to the center. The open source article contains several nice examples of this illusion.
Vision in the fovea, the center of the visual field, is much more accurate and detailed than vision in the periphery. This is not in line with the rich phenomenology of peripheral vision. Here, we investigated a visual illusion that shows that detailed peripheral visual experience is partially based on a reconstruction of reality. Participants fixated on the center of a visual display in which central stimuli differed from peripheral stimuli. Over time, participants perceived that the peripheral stimuli changed to match the central stimuli, so that the display seemed uniform. We showed that a wide range of visual features, including shape, orientation, motion, luminance, pattern, and identity, are susceptible to this uniformity illusion. We argue that the uniformity illusion is the result of a reconstruction of sparse visual information (from the periphery) based on more readily available detailed visual information (from the fovea), which gives rise to a rich, but illusory, experience of peripheral vision.

Wednesday, November 16, 2016

Our working memory modulates our conscious access to suppressed threatening information.

Our processing of emotional information is susceptible to working memory (WM) modulations - emotional faces trigger much stronger responses in the fronto-thalamic occipital network when they match an emotional word held in WM than when they do not. Liu et al. show that WM tasks can also influence the nonconscious processing of emotional signals. Their explanation of the procedure used:
We used a modified version of the delayed-match-to-sample paradigm. Specifically, participants were instructed to keep a face (either fearful or neutral) in WM while performing a target-detection task. The target, another face with a new identity (fearful or neutral), was suppressed from awareness utilizing continuous flash suppression. In this technique, the target is monocularly presented and hidden from visual awareness by simultaneously presenting dynamic noise to the other eye. We measured the time it took for the suppressed face to emerge from suppression. We specifically tested whether faces would emerge from suppression more quickly if they matched the emotional valence of WM contents than if they did not.
Here is their abstract:
Previous research has demonstrated that emotional information processing can be modulated by what is being held in working memory (WM). Here, we showed that such content-based WM effects can occur even when the emotional information is suppressed from conscious awareness. Using the delayed-match-to-sample paradigm in conjunction with continuous flash suppression, we found that suppressed threatening (fearful and angry) faces emerged from suppression faster when they matched the emotional valence of WM contents than when they did not. This effect cannot be explained by perceptual priming, as it disappeared when the faces were only passively viewed and not held in WM. Crucially, such an effect is highly specific to threatening faces but not to happy or neutral faces. Our findings together suggest that WM can modulate nonconscious emotion processing, which highlights the functional association between nonconsciously triggered emotional processes and conscious emotion representation.

Tuesday, November 15, 2016

View a flickering stimulus before you try to read fine print…

A nice piece of work from Arnold et al.:

Significance
Distinct anatomical visual pathways can be traced through the human central nervous system. These have been linked to specialized functions, such as encoding information about spatial forms (like the human face and text) and stimulus dynamics (flicker or movement). Our experiments are inconsistent with this strict division. They show that mechanisms responsive to flicker can alter form perception, with vision transiently sharpened by weakening the influence of flicker-sensitive mechanisms by prolonged exposure to flicker. So, next time you are trying to read fine print, you might be well advised to first view a flickering stimulus!
Abstract
Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision—allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because “blur” signals are mitigated.

Friday, November 11, 2016

Social class and attentiveness to others.

More on “The rich are different from you and me” (F. Scott Fitzgerald); “Yes, they have more money” (Hemingway). Dietze and Knowles use eye movement measurements to show that people of higher social class are less attentive to other people and their faces. Their abstract, slightly edited:
We theorize that people’s social class affects their appraisals of others’ motivational relevance—the degree to which others are seen as potentially rewarding, threatening, or otherwise worth attending to. Supporting this account, three studies indicate that social classes differ in the amount of attention their members direct toward other human beings. In the first study, wearable technology was used to film the visual fields of pedestrians on city streets; higher-class participants looked less at other people than did lower-class participants. A second study tracked participants’ eye movements while they viewed street scenes; higher class was associated with reduced attention to people in the images. Finally a third study used a change-detection procedure to assess the degree to which human faces spontaneously attract visual attention; faces proved less effective at drawing the attention of high-class than low-class participants, which implies that class affects spontaneous relevance appraisals. The measurement and conceptualization of social class are discussed.

Thursday, August 25, 2016

Alerting or somnogenic light - pick your color.

Bourgin and Hubbard summarize work by Pilorz et al.:
Light exerts profound effects on our physiology and behaviour, setting our biological clocks to the correct time and regulating when we are asleep and when we are awake. The photoreceptors mediating these responses include the rods and cones involved in vision, as well as a subset of photosensitive retinal ganglion cells (pRGCs) expressing the blue light-sensitive photopigment melanopsin. Previous studies have shown that mice lacking melanopsin show impaired sleep in response to light. However, other studies have shown that light increases glucocorticoid release—a response typically associated with stress. To address these contradictory findings, we studied the responses of mice to light of different colours. We found that blue light was aversive, delaying sleep onset and increasing glucocorticoid levels. By contrast, green light led to rapid sleep onset. These different behavioural effects appear to be driven by different neural pathways. Surprisingly, both responses were impaired in mice lacking melanopsin. These data show that light can promote either sleep or arousal. Moreover, they provide the first evidence that melanopsin directly mediates the effects of light on glucocorticoids. This work shows the extent to which light affects our physiology and has important implications for the design and use of artificial light sources.

Monday, March 14, 2016

Brain games and driving safely.

I've done a number of posts on brain games (cf. here, here, and here) and their critics. When I overcome my lassitude and occasionally return to dink with one of the BrainHQ games, I am struck by at least temporary improvements in the cognitive activity being refined, particularly with the vision exercises dealing with useful field of view, contrast sensitivity, etc. The exercise called Double Decision seems to me especially effective. I notice an effect on my driving after playing it. I thought I would pass on this BrainHQ web page on research on BrainHQ exercises. Their claim is that the studies have shown that after training, drivers on average:

-Make 38% fewer dangerous driving maneuvers
-Can stop 22 feet sooner when driving 55 miles per hour (see the quick check after this list)
-Feel more confident driving in difficult conditions (such as at night, in bad weather, or in new places)
-Cut their at-fault crash risk by 48%
-Keep their license later in life
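As a quick sanity check on the 22-feet figure flagged above: at 55 mph a car covers roughly 80.7 feet per second, so stopping 22 feet sooner corresponds to shaving about a quarter of a second off the total reaction-plus-braking time. The arithmetic:

```python
# Rough conversion of the "22 feet sooner at 55 mph" claim into reaction time.
mph = 55
feet_per_second = mph * 5280 / 3600      # ~80.7 ft/s
seconds_saved = 22 / feet_per_second     # ~0.27 s
print(round(feet_per_second, 1), "ft/s;", round(seconds_saved, 2), "s saved")
```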

Monday, February 15, 2016

Our eye movements are coupled to our heartbeats.

A fascinating finding by Ohl et al.: the small darting movements of our eyes (microsaccades) during visual fixation are coupled to the phase of our heartbeat (the R-R interval), providing further evidence for a powerful influence of the body on visuomotor functioning.

Abstract
During visual fixation, the eye generates microsaccades and slower components of fixational eye movements that are part of the visual processing strategy in humans. Here, we show that ongoing heartbeat is coupled to temporal rate variations in the generation of microsaccades. Using coregistration of eye recording and ECG in humans, we tested the hypothesis that microsaccade onsets are coupled to the relative phase of the R-R intervals in heartbeats. We observed significantly more microsaccades during the early phase after the R peak in the ECG. This form of coupling between heartbeat and eye movements was substantiated by the additional finding of a coupling between heart phase and motion activity in slow fixational eye movements; i.e., retinal image slip caused by physiological drift. Our findings therefore demonstrate a coupling of the oculomotor system and ongoing heartbeat, which provides further evidence for bodily influences on visuomotor functioning. 

Significance statement
In the present study, we show that microsaccades are coupled to heartbeat. Moreover, we revealed a strong modulation of slow eye movements around the R peak in the ECG. These results suggest that heartbeat as a basic physiological signal is related to statistical modulations of fixational eye movements, in particular, the generation of microsaccades. Therefore, our findings add a new perspective on the principles underlying the generation of fixational eye movements. Importantly, our study highlights the need to record eye movements when studying the influence of heartbeat in neuroscience to avoid misinterpretation of eye-movement-related artifacts as heart-evoked modulations of neural processing.
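For the analytically inclined, the core computation in the abstract above is to assign each microsaccade a relative phase within its R-R interval and ask whether those phases cluster rather than spread uniformly. Here is a minimal sketch of that phase calculation on synthetic event times; the timing distributions and the simple mean-resultant statistic are my assumptions, and the paper's statistical treatment is more involved:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic R-peak times (~1 s apart) and microsaccade onsets clustered
# shortly after each R peak, to mimic the reported coupling.
r_peaks = np.cumsum(rng.normal(1.0, 0.05, 500))
microsaccades = r_peaks[:-1] + rng.gamma(2.0, 0.1, size=len(r_peaks) - 1)

# Relative phase of each microsaccade within its R-R interval (0 to 2*pi).
idx = np.searchsorted(r_peaks, microsaccades) - 1
rr = r_peaks[idx + 1] - r_peaks[idx]
phase = 2 * np.pi * (microsaccades - r_peaks[idx]) / rr

# Mean resultant length: ~0 for uniform phases, 1 for perfect phase locking.
resultant = np.abs(np.mean(np.exp(1j * phase)))
print("mean resultant length:", round(resultant, 3))
```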

Friday, November 06, 2015

Critical period for visual pathway formation? - another dogma bites the dust.

India, which may have the largest number of blind children in the world, with estimates ranging from 360,000 to nearly 1.2 million, is providing a vast laboratory that has overturned one of the central dogmas of brain development - that development of visual (and other) pathways must take place within a critical time window, after which formation of proper connections becomes much more difficult or impossible. Until recently, children over 8 years old with congenital cataracts were not considered appropriate subjects for lens replacement surgery. In Science Magazine, Rhitu Chatterjee describes a project begun in 2004, led by neuroscientist Pawan Sinha, that has restored sight to much older children. The story follows one 18-year-old who, over the 18 months following lens replacement, began to see with enough clarity to bike through a crowded marketplace.

Of the nearly 500 children and young adults who have undergone cataract operations, about half became research subjects. One fascinating result that emerged is that visual experience isn't critical for certain visual functions; the brain seems to be prewired, for example, to be fooled by some visual illusions that were thought to be a product of learning. One is the Ponzo illusion, which typically involves lines converging on the horizon (like train tracks) and two short parallel lines cutting across them. Although the horizontal lines are identical, the one nearer the horizon looks longer. If the Ponzo illusion were the result of visual learning, newly sighted kids wouldn't fall for it. But in fact, children who had just had their vision restored were just as susceptible to the Ponzo illusion as were control subjects with normal vision. The kids also fell for the Müller-Lyer illusion, a pair of lines with arrowheads on both ends; one set of arrowheads points outward, the other inward toward the line. The line with the inward arrowheads seems longer. These results led Sinha to suggest that the illusions are driven by very simple factors in the image that the brain is probably innately programmed to respond to.

Thursday, October 15, 2015

Rhodopsin curing blindness?

In a previous life (1962-1998) my laboratory studied how the rhodopsin visual pigment in our eyes changes light into a nerve signal, so it excites me to see major advances in understanding our vision and in curing visual diseases. I want to pass on a nice graphic offered by Van Gelder and Kaur to illustrate recent work by Cehajic-Kapetanovic et al. (open access) showing that introduction of the visual pigment rhodopsin, via viral gene therapy, into the inner retinal nerve cells of retinas whose rods and cones have degenerated can restore light sensitivity and vision-like physiology and behavior to mice blind from outer retinal degeneration:

Gene therapy rescue of vision in retinal degeneration. (A) In the healthy retina, light penetrates from inner to outer retina to reach the cones and rods, which transduce signals through horizontal, bipolar, amacrine, and ultimately retinal ganglion cells to the brain. (B) In outer retinal degenerative diseases, loss of photoreceptors renders the retina insensitive to light. (C) Gene therapy with AAV2/2 virus expressing human rhodopsin (hRod) under the control of the CAG promoter results in expression of the photopigment in many surviving cells of the inner retina, and results in restoration of light responses recognized by the brain. (D) More selective expression of rhodopsin in a subset of bipolar cells is achieved by use of a virus in which expression is driven by the grm6 promoter. This version appeared to restore the most natural visual function to blind mice.