Pub Date: 2020-01-01. Epub Date: 2020-01-26. DOI: 10.2352/issn.2470-1173.2020.11.hvei-366
Title: Differences in the major fiber-tracts of people with congenital and acquired blindness
Authors: Katherine E M Tregillus, Lora T Likova
Published in: IS&T International Symposium on Electronic Imaging, 2020, pp. 3661-3667

In order to better understand how our visual system processes information, we must understand the underlying brain connectivity architecture and how it can become reorganized under visual deprivation. The full extent to which visual development and visual loss affect connectivity is not well known. To investigate the effect of the onset of blindness on structural connectivity, both at the whole-brain voxel-wise level and at the level of all major white-matter tracts, we applied two complementary Diffusion-Tensor Imaging (DTI) methods, TBSS and AFQ. Diffusion-weighted brain images were collected from three groups of participants: congenitally blind (CB), acquired-blind (AB), and fully sighted controls. The differences between these groups were evaluated on a voxel-wise scale with the Tract-Based Spatial Statistics (TBSS) method, and on a larger scale with Automated Fiber Quantification (AFQ), a method that allows between-group comparisons at the level of the major fiber tracts. TBSS revealed that both blind groups tended to have higher fractional anisotropy (FA) than sighted controls in the central structures of the brain. AFQ revealed that, where the three groups differed, congenitally blind participants tended to be more similar to sighted controls than to those participants who had acquired blindness later in life. These differences were manifested specifically in the left uncinate fasciculus, the right corticospinal tract, and the left superior longitudinal fasciculus, tracts broadly associated with a range of higher-level cognitive systems.
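The fractional anisotropy (FA) measure that TBSS and AFQ compare across groups is derived from the three eigenvalues of the diffusion tensor at each voxel. As a minimal illustration (not code from the study's pipeline), the standard FA formula can be computed as:

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """FA from the three diffusion-tensor eigenvalues.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    ranging from 0 (isotropic diffusion) to 1 (fully anisotropic).
    """
    ev = np.asarray(eigvals, dtype=float)
    md = ev.mean()  # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    if den == 0.0:
        return 0.0  # degenerate tensor: define FA as 0
    return float(np.sqrt(1.5) * num / den)
```

For example, equal eigenvalues (isotropic diffusion, as in cerebrospinal fluid) give FA = 0, while a single dominant eigenvalue (diffusion confined to one axis, as in a tightly packed fiber bundle) gives FA approaching 1.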
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.12.hvei-237
Title: Learning face perception without vision: Rebound learning effect and hemispheric differences in congenital vs late-onset blindness
Authors: Lora T. Likova, Ming Mei, Kristyo Mineff, S. Nicholas
Published in: IS&T International Symposium on Electronic Imaging, 2019

To address the longstanding questions of whether the blind-from-birth have an innate face schema, what plasticity mechanisms underlie non-visual face learning, and whether there are interhemispheric differences in face processing in the blind, we used a unique non-visual, drawing-based training in congenitally blind (CB), late-blind (LB), and blindfolded-sighted (BF) groups of adults. This Cognitive-Kinesthetic Drawing approach, previously developed by Likova (e.g., 2010, 2012, 2013), enabled us to rapidly train and study training-driven neuroplasticity in both the blind and sighted groups. The five-day, two-hour training taught participants to haptically explore, recognize, and memorize raised-line images, and to draw them freehand from memory in detail, including the fine facial characteristics of the face stimuli. Such drawings represent an externalization of the formed memory. Functional MRI was run before and after the training. Tactile face perception activated the occipito-temporal cortex in all groups. However, the training led to a strong, predominantly left-hemispheric reorganization in the two blind groups, in contrast to a right-hemispheric one in the blindfolded-sighted; that is, the post-training response change was stronger in the left hemisphere in the blind, but in the right hemisphere in the blindfolded. This is the first study to discover interhemispheric differences in non-visual face processing. Remarkably, for face perception this learning-based change was positive in the CB and BF groups, but negative in the LB group. Both the lateralization and the inverted-sign learning effects were specific to face perception, and absent for the control non-face categories of small objects and houses. The unexpected inverted-sign training effect in CB vs LB suggests different stages of brain plasticity in the ventral pathway specific to the face category. Importantly, the fact that after only a few days of our training the totally-blind-from-birth CB manifested very good (haptic) face perception, and even developed strong empathy for the explored faces, implies a preexisting face schema that can be "unmasked" and "tuned up" by a proper learning procedure. The Likova Cognitive-Kinesthetic Training is a powerful tool for driving brain plasticity and for providing deeper insights into non-visual learning, including the emergence of perceptual categories. A rebound learning model and a neuro-Bayesian economy principle are proposed to explain the multidimensional learning effects. The results provide new insights into the nature-vs-nurture interplay in rapid brain plasticity and neurorehabilitation.
Pub Date: 2018-01-01. DOI: 10.2352/ISSN.2470-1173.2018.14.HVEI-532
Title: Haptic aesthetics in the blind: A behavioral and fMRI investigation
Authors: A. R. Karim, Lora T. Likova
Published in: IS&T International Symposium on Electronic Imaging, 2018

Understanding the perception and aesthetic appeal of art and environmental objects (what is appreciated, liked, or preferred, and why) is of prime importance for improving the functional capacity of the blind and visually impaired and for the ergonomic design of their environment; so far, however, such questions have been examined only in sighted individuals. This paper provides a general overview of the first experimental study of tactile aesthetics as a function of visual experience and level of visual deprivation, using both behavioral and brain-imaging techniques. We investigated how blind people perceive 3D tactile objects, how they characterize them, and whether tactile perception, tactile shape preference (liking or disliking), and tactile aesthetic appreciation (judging tactile qualities of an object, such as pleasantness, comfort, etc.) of 3D tactile objects are affected by the level of visual experience. The study employed innovative behavioral measures, such as new forms of aesthetic preference-appreciation and perceptual-discrimination questionnaires, in combination with advanced functional Magnetic Resonance Imaging (fMRI) techniques, and compared congenitally blind, late-onset blind, and blindfolded (sighted) participants. Behavioral results demonstrated that both blind and blindfolded-sighted participants assessed curved or rounded 3D tactile objects as significantly more pleasing than sharp ones, and symmetric 3D tactile objects as significantly more pleasing than asymmetric ones. However, compared to the sighted, blind people showed better tactile discrimination skills, as demonstrated by the accuracy and speed of discrimination. Functional MRI results demonstrated both a large overlap and characteristic differences in the aesthetic-appreciation brain networks of the blind and the sighted. Both populations recruited the somatosensory and motor areas of the brain, but with stronger activations in the blind than in the sighted. Furthermore, sighted people recruited more frontal regions, whereas blind people, in particular the congenitally blind, paradoxically recruited more 'visual' areas of the brain. These differences were more pronounced between the sighted and the congenitally blind than between the sighted and the late-onset blind, indicating the key influence of the onset time of visual deprivation. Understanding the underlying brain mechanisms should have a wide range of important implications for a generalized cross-sensory theory and practice in the rapidly evolving field of neuroaesthetics, as well as for cutting-edge rehabilitation technologies for the blind and visually impaired.
Pub Date: 2018-01-01. Epub Date: 2018-01-28. DOI: 10.2352/issn.2470-1173.2018.06.mobmu-114
Title: An Integration of Health Tracking Sensor Applications and eLearning Environments for Cloud-Based Health Promotion Campaigns
Authors: D Inupakutika, G Natarajan, S Kaghyan, D Akopian, M Evans, Y Zenong, D Parra-Medina
Published in: IS&T International Symposium on Electronic Imaging, 2018, pp. 1141-1148

Rapidly evolving technologies such as data analysis, smartphone and web-based applications, and the Internet of Things are increasingly used for healthy living, fitness, and well-being, and various research studies utilize them to reduce obesity. This paper demonstrates the design and development of a dataflow protocol that integrates several such applications. After a user registers, activity, nutrition, and other lifestyle data from participants are retrieved into a centralized cloud dedicated to health promotion. In addition, users are provided with accounts in an e-Learning environment from which learning outcomes can be retrieved. Using the proposed system, health-promotion campaigners can provide feedback to the participants through a dedicated messaging system. Participants authorize the system to use their activity data for program participation. The implemented system and servicing protocol minimize the personnel overhead of large-scale health-promotion campaigns and are scalable to assist automated interventions, from automated data retrieval to automated messaging feedback. This paper describes the end-to-end workflow of the proposed system. The case-study tests were carried out with Fitbit Flex2 activity trackers, a Withings scale, Verizon Android-based tablets, the Moodle learning management system, and Articulate RISE for learning-content development.
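The register, authorize, retrieve, and feedback steps of the dataflow protocol can be modeled schematically. The sketch below is illustrative only; the class and method names are hypothetical stand-ins, not the paper's actual API, and it shows one design point the abstract emphasizes: data is stored only after the participant has authorized its use.

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    user_id: str
    authorized: bool = False          # data-use consent for the campaign
    activity: list = field(default_factory=list)

class HealthCampaignHub:
    """Toy model of the centralized health-promotion cloud (hypothetical names)."""

    def __init__(self):
        self.participants = {}
        self.messages = []            # (user_id, feedback text) pairs

    def register(self, user_id):
        self.participants[user_id] = Participant(user_id)

    def authorize(self, user_id):
        self.participants[user_id].authorized = True

    def push_activity(self, user_id, steps):
        # Tracker data is retained only for consenting participants.
        p = self.participants[user_id]
        if p.authorized:
            p.activity.append(steps)

    def send_feedback(self, user_id, text):
        # Campaigners reach participants through the dedicated messaging channel.
        self.messages.append((user_id, text))
```

In the real system the `push_activity` step would be replaced by automated retrieval from the tracker vendors' cloud APIs, and `send_feedback` by the campaign's messaging service.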
Pub Date: 2017-01-01. Epub Date: 2017-01-29. DOI: 10.2352/ISSN.2470-1173.2017.17.COIMG-415
Title: Synchrotron X-Ray Diffraction Dynamic Sampling for Protein Crystal Centering
Authors: Nicole M Scarborough, G M Dilshan P Godaliyadda, Dong Hye Ye, David J Kissick, Shijie Zhang, Justin A Newman, Michael J Sheedlo, Azhad Chowdhury, Robert F Fischetti, Chittaranjan Das, Gregery T Buzzard, Charles A Bouman, Garth J Simpson
Published in: IS&T International Symposium on Electronic Imaging, 2017, pp. 6-9

A supervised learning approach for dynamic sampling (SLADS) was developed to reduce X-ray exposure prior to data collection in protein structure determination. Implementation of this algorithm allowed the X-ray dose to the central core of the crystal to be reduced by up to 20-fold compared to current raster-scanning approaches. This dose reduction corresponds directly to a reduction in X-ray damage to the protein crystals prior to data collection for structure determination. Implementation at a beamline at Argonne National Laboratory suggests that the SLADS approach holds promise for aiding the analysis of X-ray-labile crystals. The potential benefits match a growing need for improvements in automated approaches to microcrystal positioning.
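SLADS works by greedily choosing, at each step, the unmeasured location that a trained regressor predicts will most reduce reconstruction distortion, rather than raster-scanning every point. The toy sketch below illustrates that greedy measure-where-most-informative loop only; it substitutes a simple local-disagreement score for the learned expected-reduction-in-distortion model, so it is a schematic of the idea, not the published algorithm:

```python
import numpy as np

def toy_dynamic_sampling(image, budget, seed=0):
    """Greedy dynamic sampling in the spirit of SLADS (toy stand-in).

    Instead of a trained regressor, each unmeasured pixel is scored by the
    variance of its already-measured 8-neighbors, so sampling concentrates
    near edges/boundaries where measurements disagree most.
    Returns the boolean mask of measured locations.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    measured = np.zeros((h, w), dtype=bool)
    # A few random seed measurements to start from.
    measured.flat[rng.choice(h * w, size=4, replace=False)] = True
    while measured.sum() < budget:
        best, best_score = None, -1.0
        for y in range(h):
            for x in range(w):
                if measured[y, x]:
                    continue
                nbrs = [image[j, i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if measured[j, i]]
                score = float(np.var(nbrs)) if len(nbrs) > 1 else 0.0
                if score > best_score:
                    best, best_score = (y, x), score
        measured[best] = True   # "measure" the highest-scoring location
    return measured
```

On a crystal-centering problem the same loop would drive the beam: only the pixels selected by the scoring model are ever exposed, which is where the up-to-20-fold dose reduction comes from.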
Pub Date: 2017-01-01. DOI: 10.2352/ISSN.2470-1173.2017.14.HVEI-155
Title: Addressing long-standing controversies in conceptual knowledge representation in the temporal pole: A cross-modal paradigm
Authors: Lora T Likova
Published in: IS&T International Symposium on Electronic Imaging, 2017, pp. 268-272

Conceptual knowledge allows us to comprehend the multisensory stimulation impinging on our senses. Its representation in the anterior temporal lobe is a subject of considerable debate, with the "enigmatic" temporal pole (TP) at the center of that debate. The competing models of the organization of knowledge representation in the TP range from unilateral to fully unified bilateral representational systems. To address this multitude of mutually exclusive options, we developed a novel cross-modal approach in a multifactorial brain-imaging study of the blind, manipulating the modality (verbal vs pictorial) of both the reception source (reading text/verbal vs images/pictorial) and the expression (writing text/verbal vs drawing/pictorial) of conceptual knowledge. Furthermore, we also varied the level of familiarity. This study is the first to investigate the functional organization of (amodal) conceptual knowledge in the TP in the blind, as well as the first study of drawing based on conceptual knowledge from memory of sentences delivered through Braille reading. Through this paradigm, we were able to functionally identify two novel subdivisions of the temporal pole: the TPa, at the apex, and the TPdm, dorso-medially. Their response characteristics revealed a complex interplay of non-visual specializations within the temporal pole, with a diversity of excitatory/inhibitory inversions as a function of hemisphere, task domain, and familiarity, which motivates an expanded neurocognitive analysis of conceptual knowledge. The interplay of inter-hemispheric specializations found here accounts for the variety of seemingly conflicting models of conceptual knowledge representation in previous research, reconciling them through the set of factors we investigated: the two main knowledge domains (verbal and pictorial/sensory-motor) and the two main knowledge-processing modes (receptive and expressive), with the level of familiarity as a modifier. Furthermore, the interplay of these factors allowed us to reveal, for the first time, a system of complementary symmetries, asymmetries, and unexpected anti-symmetries in TP organization. Taken together, these results constitute a unifying explanation of the conflicting models in previous research on conceptual knowledge representation.
Pub Date: 2016-01-01. Epub Date: 2016-02-14. DOI: 10.2352/ISSN.2470-1173.2016.16.HVEI-095
Title: The Cortical Network for Braille Writing in the Blind
Authors: Lora T Likova, Christopher W Tyler, Laura Cacciamani, Kristyo Mineff, Spero Nicholas
Published in: IS&T International Symposium on Electronic Imaging, 2016

Fundamental forms of high-order cognition, such as reading and writing, are usually studied in the context of one modality: vision. People without sight, however, use kinesthetic-based Braille writing and haptic-based Braille reading. We asked whether the cognitive and motor-control mechanisms underlying writing and reading are modality-specific or supramodal. While a number of previous functional Magnetic Resonance Imaging (fMRI) studies have investigated the brain network for Braille reading in the blind, such studies on Braille writing are lacking; consequently, no comparative network analysis of Braille writing vs reading exists. Here, we report the first study of Braille writing and a comparison of the brain organization for Braille writing vs Braille reading. fMRI was conducted in a Siemens 3T Trio scanner. Our custom MRI-compatible drawing/writing lectern was further modified to provide for Braille reading and writing. Each of five paragraphs of novel Braille text describing objects, faces, and navigation sequences was read, then reproduced twice by Braille writing from memory, then read a second time. During Braille reading, haptic sensing of the Braille letters strongly activated not only the early visual areas V1 and V2, but also some highly specialized areas, such as the classical visual grapheme area and the Exner motor grapheme area. Braille writing from memory engaged a significantly more extensive network in dorsal motor, somatosensory/kinesthetic, dorsal parietal, and prefrontal cortex. However, in contrast to the largely extended V1 activation in drawing-from-memory in the blind after training (Likova, 2012), Braille writing from memory generated focal activation restricted to the most foveal part of V1, presumably reflecting topographically the focal demands of such a "pin-pricking" task.
Pub Date: 2016-01-01. Epub Date: 2016-02-14. DOI: 10.2352/ISSN.2470-1173.2016.16.HVEI-122
Alex D Hwang, Eli Peli
Contrast sensitivity (CS) quantifies an observer's ability to detect the smallest (threshold) luminance difference between a target and its surround. In clinical settings, printed letter contrast charts are commonly used, and the contrast of the letter stimuli is specified by the Weber contrast definition. These paper-printed charts use negative-polarity contrast (NP, dark letters on a bright background) and are not available with positive-polarity contrast (PP, bright letters on a dark background), as needed in a number of applications. We implemented a mobile CS-measuring app supporting both NP and PP contrast stimuli, with the NP stimuli mimicking the paper charts. A novel modified Weber definition was developed to specify the contrast of PP letters. The validity of the app was established by comparison with the paper chart. We found that our app generates more accurate contrast stimuli over a wider range than the paper chart (especially in the critical high-CS, low-contrast range), and found a clear difference between NP and PP CS measures (CSNP > CSPP) despite the symmetry afforded by the modified Weber contrast definition. Our app provides a convenient way to measure CS in both lighted and dark environments.
"Positive and negative polarity contrast sensitivity measuring app." Alex D Hwang, Eli Peli. IS&T International Symposium on Electronic Imaging, 2016. DOI: 10.2352/ISSN.2470-1173.2016.16.HVEI-122. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5481843/pdf/nihms868149.pdf
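The abstract states the standard Weber definition for NP charts and a symmetric "modified Weber" for PP stimuli, but does not spell out the PP formula. The sketch below shows the standard NP Weber contrast and one plausible symmetric variant for PP (normalizing by the brighter letter luminance so swapping letter and background flips only the sign); the PP function is an illustrative assumption, not necessarily the authors' exact definition.

```python
def weber_contrast_np(l_letter: float, l_background: float) -> float:
    """Standard Weber contrast for dark letters on a bright background (NP).

    C = (L_letter - L_background) / L_background; negative for NP stimuli.
    """
    return (l_letter - l_background) / l_background


def weber_contrast_pp(l_letter: float, l_background: float) -> float:
    """Illustrative symmetric variant (assumption, not the paper's formula)
    for bright letters on a dark background (PP): normalize by the letter
    (brighter) luminance, so that swapping letter and background luminances
    yields the same magnitude with opposite sign.
    """
    return (l_letter - l_background) / l_letter
```

With this pairing, a 10 cd/m² letter on a 100 cd/m² background (NP) and a 100 cd/m² letter on a 10 cd/m² background (PP) have equal contrast magnitudes, which is the kind of symmetry the abstract attributes to the modified definition.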
Pub Date: 2016-01-01. Epub Date: 2016-02-14. DOI: 10.2352/ISSN.2470-1173.2016.16.HVEI-111
Jae-Hyun Jung, Tian Pu, Eli Peli
Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary edge images (black edges on a white background, or white edges on a black background) have been used to represent features (edges and cusps) in scenes. However, the polarity of cusps and edges may contain important depth information (depth from shading), which is lost in the binary edge representation. This depth information may be restored, to some degree, using bipolar edges. We compared recognition rates for 16 scene images, rendered with either binary or bipolar edge features, across 26 subjects. Object recognition rates were higher with bipolar edges, and the improvement was significant in scenes with complex backgrounds.
"Comparing object recognition from binary and bipolar edge features." Jae-Hyun Jung, Tian Pu, Eli Peli. IS&T International Symposium on Electronic Imaging, 2016. DOI: 10.2352/ISSN.2470-1173.2016.16.HVEI-111
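The binary-vs-bipolar distinction in the abstract can be illustrated with a toy sketch (this is not the authors' stimulus-generation pipeline, and the threshold and 1-D difference operator are arbitrary choices for illustration): a binary edge map records only where a luminance step exceeds threshold, while a bipolar map additionally keeps the step's sign, the polarity information that supports depth-from-shading.

```python
import numpy as np


def binary_and_bipolar_edges(img: np.ndarray, thresh: float):
    """Toy illustration of binary vs. bipolar edge maps.

    diff holds signed horizontal luminance steps; the binary map keeps only
    edge presence (1 = edge, 0 = none), while the bipolar map keeps the
    sign of each edge (+1 dark-to-light, -1 light-to-dark, 0 no edge).
    """
    diff = np.diff(img.astype(float), axis=1)     # signed luminance step
    binary = (np.abs(diff) > thresh).astype(int)  # polarity discarded
    bipolar = np.sign(diff) * binary              # polarity retained
    return binary, bipolar
```

For a row of pixels [0, 100, 0], both steps survive thresholding, so the binary map is identical at the two edge locations, whereas the bipolar map distinguishes the rising edge (+1) from the falling edge (-1).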