Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can exhibit greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and a subsequent increase in the allocation of attentional resources to peripheral locations, whereas enhancements in face processing abilities are thought to be driven by experience with American Sign Language (ASL) rather than by hearing loss per se. This, combined with the fact that face processing abilities typically decline with eccentricity, suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, we examined whether deaf individuals' enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants at discriminating faces presented both centrally and in the periphery. Our results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and extend them by showing that these enhancements also occur in the periphery for more complex stimuli.
To date, only a few studies have investigated the clinical translational value of multisensory integration. Our previous research has linked the magnitude of visual-somatosensory integration (measured behaviorally using simple reaction time tasks) to important cognitive (attention) and motor (balance, gait, and falls) outcomes in healthy older adults. While multisensory integration effects have been measured across a wide array of populations using various sensory combinations and different neuroscience research approaches, multisensory integration tests have not been systematically implemented in clinical settings. We recently developed a step-by-step protocol for administering and calculating multisensory integration effects to facilitate novel translational research across diverse clinical populations and age ranges. Recognizing that patients with severe medical conditions and/or mobility limitations often have difficulty traveling to research facilities or enrolling in time-demanding research protocols, we deemed it necessary to make the benefits of multisensory testing available to them. Using this established protocol and methodology, we developed a multisensory falls-screening tool called CatchU™ (an iPhone app) that quantifies multisensory integration performance in clinical practice and is currently undergoing validation studies. Our goal is to facilitate the identification of patients who are at increased risk of falls and to promote physician-initiated falls counseling during clinical visits (e.g., annual wellness, sick, or follow-up visits). This, in turn, will raise falls awareness and foster physician efforts to alleviate disability, promote independence, and increase quality of life for older adults. This conceptual overview highlights the potential of multisensory integration for predicting clinical outcomes from a research perspective, while also showcasing the practical application of a multisensory screening tool in routine clinical practice.
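For readers unfamiliar with how such effects are quantified, the sketch below (Python) illustrates one common way a visual-somatosensory integration magnitude can be derived from simple reaction time data, using the race-model-inequality approach that is standard in this literature. The function names, the time grid, and the area-based summary are illustrative assumptions only; they are not the authors' published protocol or the computation implemented in CatchU™.

    import numpy as np

    def empirical_cdf(rts, t_grid):
        # Proportion of reaction times at or below each time point in t_grid.
        rts = np.sort(np.asarray(rts, dtype=float))
        return np.searchsorted(rts, t_grid, side="right") / rts.size

    def rmi_violation_area(rt_multi, rt_v, rt_s, n_points=50):
        # Race model inequality: P(RT_multi <= t) <= P(RT_v <= t) + P(RT_s <= t).
        # Integration magnitude is summarized here as the area over which the
        # multisensory CDF exceeds the (capped) sum of the unisensory CDFs.
        all_rts = np.concatenate([np.asarray(rt_multi), np.asarray(rt_v), np.asarray(rt_s)])
        t_grid = np.linspace(all_rts.min(), all_rts.max(), n_points)
        bound = np.minimum(1.0, empirical_cdf(rt_v, t_grid) + empirical_cdf(rt_s, t_grid))
        violation = empirical_cdf(rt_multi, t_grid) - bound
        dt = t_grid[1] - t_grid[0]
        return float(np.sum(np.clip(violation, 0.0, None)) * dt)  # larger = stronger integration

    # Hypothetical reaction times (ms) for one participant.
    rng = np.random.default_rng(0)
    rt_visual = rng.normal(350, 40, 60)
    rt_somatosensory = rng.normal(360, 45, 60)
    rt_bisensory = rng.normal(305, 35, 60)  # faster, as multisensory facilitation would predict
    print(rmi_violation_area(rt_bisensory, rt_visual, rt_somatosensory))

Only time points where the multisensory distribution actually exceeds the race-model bound contribute to the summary; alternative summaries (e.g., evaluating the cumulative distributions at fixed percentile bins) are also common in this literature.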