
Latest publications in the Journal of Perceptual Imaging

Exploring the Links between Colours and Tastes/Flavours†
Pub Date : 2022-01-01 DOI: 10.2352/j.percept.imaging.2022.5.000408
C. Spence, C. Levitan
The colour and other visual appearance properties of food and drink constitute a key factor determining consumer acceptance and choice behaviour. Not only do consumers associate specific colours with particular tastes and flavours, but adding or changing the colour of food and drink can also dramatically affect taste/flavour perception. Surprisingly, even the colour of cups, cutlery, plates, packages, and the colour of the environment itself, have also been shown to influence multisensory flavour perception. The taste/flavour associations that we hold with colour are context-dependent, and are often based on statistical learning (though emotional mediation may also play a role). However, to date, neither the computational principles constraining these ubiquitous crossmodal effects nor the neural mechanisms underpinning the various crossmodal associations (or correspondences) that have been documented between colours and tastes/flavours have yet been established. It is currently unclear to what extent such colour-taste/flavour correspondences ought to be explained in terms of semantic congruency (i.e., statistical learning), and/or emotional mediation. Bayesian causal inference has become an increasingly important tool in helping researchers to understand (and predict) the multisensory interactions between the spatial senses of vision, audition, and touch. However, a network modelling approach may be of value moving forward. As made clear by this review, there are substantial challenges, both theoretical and practical, that will need to be overcome by those wanting to apply computational approaches both to understanding the integration of the chemical senses in the case of multisensory flavour perception, and to understanding the influence of colour thereon.
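The Bayesian causal inference framework mentioned in this abstract can be made concrete with a short sketch. The snippet below implements the standard Gaussian causal-inference model from the multisensory literature, i.e. the posterior probability that two noisy signals share a common cause; the function name and parameter values are our own illustrative choices, not anything specified by the authors.

```python
import numpy as np

def bci_posterior_common_cause(x1, x2, sigma1, sigma2, sigma_prior, p_common=0.5):
    """Posterior probability that two noisy sensory signals x1, x2 share a
    common cause, under the standard Gaussian causal-inference model
    (zero-mean prior with spread sigma_prior over source values)."""
    # Marginal likelihood of (x1, x2) under a single shared source.
    var_c = (sigma1**2 * sigma2**2 + sigma1**2 * sigma_prior**2
             + sigma2**2 * sigma_prior**2)
    like_common = np.exp(-((x1 - x2)**2 * sigma_prior**2
                           + x1**2 * sigma2**2 + x2**2 * sigma1**2)
                         / (2 * var_c)) / (2 * np.pi * np.sqrt(var_c))
    # Marginal likelihood under two independent sources.
    var1 = sigma1**2 + sigma_prior**2
    var2 = sigma2**2 + sigma_prior**2
    like_indep = (np.exp(-x1**2 / (2 * var1)) / np.sqrt(2 * np.pi * var1)
                  * np.exp(-x2**2 / (2 * var2)) / np.sqrt(2 * np.pi * var2))
    # Bayes' rule over the binary common-cause variable.
    return (p_common * like_common
            / (p_common * like_common + (1 - p_common) * like_indep))
```

Signals that nearly coincide yield a posterior well above the 0.5 prior, while widely discrepant signals push it toward zero; this is the qualitative behaviour that makes the model useful for predicting when cues are integrated versus segregated.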
Citations: 10
Introducing CatchU™: A Novel Multisensory Tool for Assessing Patients' Risk of Falling†
Pub Date : 2022-01-01 DOI: 10.2352/j.percept.imaging.2021.4.3.030407
Jeannette R. Mahoney, Claudene J. George, J. Verghese
To date, only a few studies have investigated the clinical translational value of multisensory integration. Our previous research has linked the magnitude of visual-somatosensory integration (measured behaviorally using simple reaction time tasks) to important cognitive (attention) and motor (balance, gait, and falls) outcomes in healthy older adults. While multisensory integration effects have been measured across a wide array of populations using various sensory combinations and different neuroscience research approaches, multisensory integration tests have not been systematically implemented in clinical settings. We recently developed a step-by-step protocol for administering and calculating multisensory integration effects to facilitate innovative and novel translational research across diverse clinical populations and age-ranges. In recognizing that patients with severe medical conditions and/or mobility limitations often experience difficulty traveling to research facilities or joining time-demanding research protocols, we deemed it necessary for patients to be able to benefit from multisensory testing. Using an established protocol and methodology, we developed a multisensory falls-screening tool called CatchU™ (an iPhone app), currently undergoing validation studies, to quantify multisensory integration performance in clinical practice. Our goal is to facilitate the identification of patients who are at increased risk of falls and promote physician-initiated falls counseling during clinical visits (e.g., annual wellness, sick, or follow-up visits). This will thereby raise falls-awareness and foster physician efforts to alleviate disability, promote independence, and increase quality of life for our older adults.
This conceptual overview highlights the potential of multisensory integration in predicting clinical outcomes from a research perspective, while also showcasing the practical application of a multisensory screening tool in routine clinical practice.
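The abstract does not spell out how integration magnitude is computed from simple reaction times; one common quantification in this literature is Miller's race-model inequality. The sketch below illustrates that general approach. Function and variable names are our own, and this is not the authors' published step-by-step protocol.

```python
import numpy as np

def race_model_violation(rt_a, rt_b, rt_multi,
                         quantiles=np.linspace(0.05, 0.95, 19)):
    """Race-model test for redundant-signals reaction times (all in ms).

    At each quantile of the multisensory RT distribution, the race model
    bounds the multisensory CDF by the sum of the two unisensory CDFs:
        P(RT_multi <= t) <= P(RT_a <= t) + P(RT_b <= t).
    Returned values are (quantile - bound); positive entries mark
    violations, i.e. facilitation beyond probabilistic summation.
    """
    rt_a, rt_b, rt_multi = (np.asarray(x, dtype=float)
                            for x in (rt_a, rt_b, rt_multi))
    diffs = []
    for q in quantiles:
        t = np.quantile(rt_multi, q)        # multisensory RT at this quantile
        bound = min(1.0, np.mean(rt_a <= t) + np.mean(rt_b <= t))
        diffs.append(q - bound)             # > 0 means the bound is violated
    return np.array(diffs)
```

Simulated multisensory RTs that are substantially faster than either unisensory condition produce positive values at the fast quantiles, whereas RTs consistent with a race between independent channels stay at or below zero.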
Citations: 1
Enhanced Peripheral Face Processing in Deaf Individuals.
Pub Date : 2022-01-01 Epub Date: 2021-05-04 DOI: 10.2352/j.percept.imaging.2022.5.000401
Kassandra R Lee, Elizabeth Groesbeck, O Scott Gwinn, Michael A Webster, Fang Jiang

Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can show greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and a subsequent increase in the allocation of attentional resources to peripheral locations, while enhancements in face processing abilities are thought to be driven by experience with ASL and not necessarily hearing loss. This combined with the fact that face processing abilities typically decline with eccentricity suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, we examined whether deaf individuals' enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. Our results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and further extend them by showing that these enhancements also occur in the periphery for more complex stimuli.

Citations: 0
Perception and Appreciation of Tactile Objects: The Role of Visual Experience and Texture Parameters†
Pub Date : 2022-01-01 DOI: 10.2352/j.percept.imaging.2021.4.2.020405
A. R. Karim, Sanchary Prativa, Lora T. Likova
This exploratory study was designed to examine the effects of visual experience and specific texture parameters on both discriminative and aesthetic aspects of tactile perception. To this end, the authors conducted two experiments using a novel behavioral (ranking) approach in blind and (blindfolded) sighted individuals. Groups of congenitally blind, late blind, and (blindfolded) sighted participants made relative stimulus preference, aesthetic appreciation, and smoothness or softness judgment of two-dimensional (2D) or three-dimensional (3D) tactile surfaces through active touch. In both experiments, the aesthetic judgment was assessed on three affective dimensions, Relaxation, Hedonics, and Arousal, hypothesized to underlie visual aesthetics in a prior study. Results demonstrated that none of these behavioral judgments significantly varied as a function of visual experience in either experiment. However, irrespective of visual experience, significant differences were identified in all these behavioral judgments across the physical levels of smoothness or softness. In general, 2D smoothness or 3D softness discrimination was proportional to the level of physical smoothness or softness. Second, the smoother or softer tactile stimuli were preferred over the rougher or harder tactile stimuli. Third, the 3D affective structure of visual aesthetics appeared to be amodal and applicable to tactile aesthetics. However, analysis of the aesthetic profile across the affective dimensions revealed some striking differences between the forms of appreciation of smoothness and softness, uncovering unanticipated substructures in the nascent field of tactile aesthetics. While the physically softer 3D stimuli received higher ranks on all three affective dimensions, the physically smoother 2D stimuli received higher ranks on the Relaxation and Hedonics but lower ranks on the Arousal dimension. 
Moreover, the Relaxation and Hedonics ranks accurately overlapped with one another across all the physical levels of softness/hardness, but not across the physical levels of smoothness/roughness. These findings suggest that physical texture parameters not only affect basic tactile discrimination but also differentially mediate tactile preferences and aesthetic appreciation. The theoretical and practical implications of these novel findings are discussed.
Citations: 1
Beyond Visual Aesthetics: The Role of Fractal-scaling Characteristics across the Senses†
Pub Date : 2022-01-01 DOI: 10.2352/j.percept.imaging.2021.4.3.030406
Catherine Viengkham, B. Spehar
The investigation of aesthetics has primarily been conducted within the visual domain. This is not a surprise, as aesthetics has largely been associated with the perception and appreciation of visual media, such as traditional artworks, photography, and architecture. However, one doesn’t need to look far to realize that aesthetics extends beyond the visual domain. Media such as film and music introduce a unique and equally rich temporally changing visual and auditory experience. Product design, ranging from furniture to clothing, strongly depends on pleasant tactile evaluations. Studies involving the perception of 1/f statistics in vision have been particularly consistent in demonstrating a preference for a 1/f structure resembling that of natural scenes, as well as systematic individual differences across a variety of visual objects. Interestingly, comparable findings have also been reached in the auditory and tactile domains. In this review, we discuss some of the current literature on the perception of 1/f statistics across the contexts of different sensory modalities.
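The 1/f spectral structure discussed in this review is straightforward to synthesize and measure numerically. The sketch below is our own illustration (function names are hypothetical, not code from the studies reviewed): it generates a one-dimensional signal whose amplitude spectrum falls as 1/f^alpha and recovers alpha with a log-log regression.

```python
import numpy as np

def spectral_noise(n, alpha, rng=None):
    """Synthesize a length-n signal with amplitude spectrum ~ 1/f**alpha
    (random phases), normalized to unit standard deviation."""
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** -alpha          # leave the DC component at zero
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    signal = np.fft.irfft(amp * np.exp(1j * phases), n)
    return signal / signal.std()

def spectral_slope(signal):
    """Estimate alpha from a log-log fit of amplitude versus frequency."""
    freqs = np.fft.rfftfreq(len(signal))[1:-1]      # drop DC and Nyquist bins
    amp = np.abs(np.fft.rfft(signal))[1:-1]
    slope, _ = np.polyfit(np.log(freqs), np.log(amp), 1)
    return -slope
```

Setting alpha = 1 reproduces the "pink" 1/f statistics of natural scenes discussed above, while alpha = 0 gives white noise; the analogous spectral-slope manipulation is applied to images, sounds, and textures in this literature.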
Citations: 0
Crossmodal Postdiction: Conscious Perception as Revisionist History†
Pub Date : 2022-01-01 DOI: 10.2352/j.percept.imaging.2021.4.2.020403
N. Stiles, A. Tanguay, S. Shimojo
Postdiction occurs when later stimuli influence the perception of earlier stimuli. As the multisensory science field has grown in recent decades, the investigation of crossmodal postdictive phenomena has also expanded. Crossmodal postdiction can be considered (in its simplest form) the phenomenon in which later stimuli in one modality influence earlier stimuli in another modality (e.g., Intermodal Apparent Motion). Crossmodal postdiction can also appear in more nuanced forms, such as unimodal postdictive illusions (e.g., Apparent Motion) that are influenced by concurrent crossmodal stimuli (e.g., Crossmodal Influence on Apparent Motion), or crossmodal illusions (e.g., the Double Flash Illusion) that are influenced postdictively by a stimulus in one or the other modality (e.g., a visual stimulus in the Illusory Audiovisual Rabbit Illusion). In this review, these and other varied forms of crossmodal postdiction will be discussed. Three neuropsychological models proposed for unimodal postdiction will be adapted to the unique aspects of processing and integrating multisensory stimuli. Crossmodal postdiction opens a new window into sensory integration, and could potentially be used to identify new mechanisms of crossmodal crosstalk in the brain.
Citations: 0
Introducing CatchU™: A Novel Multisensory Tool for Assessing Patients' Risk of Falling.
Pub Date : 2022-01-01 DOI: 10.2352/j.percept.imaging.2022.5.000407
Jeannette R. Mahoney, Claudene J. George, Joe Verghese

To date, only a few studies have investigated the clinical translational value of multisensory integration. Our previous research has linked the magnitude of visual-somatosensory integration (measured behaviorally using simple reaction time tasks) to important cognitive (attention) and motor (balance, gait, and falls) outcomes in healthy older adults. While multisensory integration effects have been measured across a wide array of populations using various sensory combinations and different neuroscience research approaches, multisensory integration tests have not been systematically implemented in clinical settings. We recently developed a step-by-step protocol for administering and calculating multisensory integration effects to facilitate innovative and novel translational research across diverse clinical populations and age-ranges. In recognizing that patients with severe medical conditions and/or mobility limitations often experience difficulty traveling to research facilities or joining time-demanding research protocols, we deemed it necessary for patients to be able to benefit from multisensory testing. Using an established protocol and methodology, we developed a multisensory falls-screening tool called CatchU™ (an iPhone app), currently undergoing validation studies, to quantify multisensory integration performance in clinical practice. Our goal is to facilitate the identification of patients who are at increased risk of falls and promote physician-initiated falls counseling during clinical visits (e.g., annual wellness, sick, or follow-up visits). This will thereby raise falls-awareness and foster physician efforts to alleviate disability, promote independence, and increase quality of life for our older adults.
This conceptual overview highlights the potential of multisensory integration in predicting clinical outcomes from a research perspective, while also showcasing the practical application of a multisensory screening tool in routine clinical practice.

Citations: 0
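The abstract above quantifies visual–somatosensory integration from simple reaction time (RT) tasks. The exact computation used by CatchU™ is not given here, so the following is only an illustrative sketch of one widely used behavioral measure in this literature, Miller's race-model inequality: multisensory RTs are integrated (not just statistically facilitated) when their cumulative distribution exceeds the sum of the two unisensory distributions. The simulated RT data below are hypothetical.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times `rts` evaluated at each time in `t`."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_v, rt_s, rt_vs, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Positive values indicate multisensory RTs faster than the race-model
    bound min(1, F_V(t) + F_S(t)) predicts (Miller's inequality)."""
    t = np.quantile(np.concatenate([rt_v, rt_s, rt_vs]), quantiles)
    bound = np.minimum(1.0, ecdf(rt_v, t) + ecdf(rt_s, t))
    return ecdf(rt_vs, t) - bound

# Hypothetical simulated data: multisensory trials markedly faster.
rng = np.random.default_rng(0)
rt_v = rng.normal(350, 40, 200)   # visual-only RTs (ms)
rt_s = rng.normal(360, 40, 200)   # somatosensory-only RTs (ms)
rt_vs = rng.normal(290, 35, 200)  # visual-somatosensory RTs (ms)

violation = race_model_violation(rt_v, rt_s, rt_vs)
print(round(float(violation.max()), 3))
```

A positive maximum violation across the tested quantiles is the usual behavioral signature of multisensory integration in this kind of simple-RT paradigm.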
From the Editors in Chief
Pub Date : 2021-05-01 DOI: 10.2352/j.percept.imaging.2021.4.1.010101
B. Rogowitz, Thrasos N. Pappas
{"title":"From the Editors in Chief","authors":"B. Rogowitz, Thrasos N. Pappas","doi":"10.2352/j.percept.imaging.2021.4.1.010101","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.1.010101","url":null,"abstract":"","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86225147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive Camouflage for Moving Objects
Pub Date : 2021-03-01 DOI: 10.2352/j.percept.imaging.2021.4.2.020502
E. Burg, M. Hogervorst, A. Toet
Abstract Targets that are well camouflaged under static conditions are often easily detected as soon as they start moving. We investigated and evaluated ways to design camouflage that dynamically adapts to the background and conceals the target while taking the variation in potential viewing directions into account. In a human observer experiment, recorded imagery was used to simulate moving (either walking or running) and static soldiers, equipped with different types of camouflage patterns and viewed from different directions. Participants were instructed to detect the soldier and to make a rapid response as soon as they have identified the soldier. Mean target detection rate was compared between soldiers in standard (Netherlands) Woodland uniform, in static camouflage (adapted to the local background) and in dynamically adapting camouflage. We investigated the effects of background type and variability on detection performance by varying the soldiers’ environment (such as bushland and urban). In general, detection was easier for dynamic soldiers compared to static soldiers, confirming that motion breaks camouflage. Interestingly, we show that motion onset and not motion itself is an important feature for capturing attention. Furthermore, camouflage performance of the static adaptive pattern was generally much better than for the standard Woodland pattern. Also, camouflage performance was found to be dependent on the background and the local structures around the soldier. Interestingly, our dynamic camouflage design outperformed a method which simply displays the ‘exact’ background on the camouflage suit (as if it was transparent), since it is better capable of taking the variability in viewing directions into account. By combining new adaptive camouflage technologies with dynamic adaptive camouflage designs such as the one presented here, it may become feasible to prevent detection of moving targets in the (near) future.
Targets that are well camouflaged under static conditions are often easily detected as soon as they start moving. We investigated and evaluated ways to design camouflage that dynamically adapts to the background and conceals the target while taking the variation in potential viewing directions into account. In a human observer experiment, recorded imagery was used to simulate moving (walking or running) and static soldiers equipped with different types of camouflage patterns and viewed from different directions. Participants were instructed to detect the soldier and to respond as quickly as possible once they had identified him. Mean target detection rate was compared between soldiers in the standard (Netherlands) Woodland uniform, in static camouflage (adapted to the local background), and in dynamically adapting camouflage. We investigated the effects of background type and variability on detection performance by varying the soldiers' environment (such as bushland and urban settings). In general, moving soldiers were easier to detect than static ones, confirming that motion breaks camouflage. Interestingly, we show that motion onset, rather than motion itself, is an important feature for capturing attention. Furthermore, the camouflage performance of the static adaptive pattern was generally much better than that of the standard Woodland pattern. Camouflage performance was also found to depend on the background and the local structures around the soldier. Interestingly, our dynamic camouflage design outperformed a method that simply displays the 'exact' background on the camouflage suit (as if it were transparent), since it better accounts for the variability in viewing directions. By combining new adaptive camouflage technologies with dynamic adaptive camouflage designs such as the one presented here, preventing the detection of moving targets may become feasible in the (near) future.
{"title":"Adaptive Camouflage for Moving Objects","authors":"E. Burg, M. Hogervorst, A. Toet","doi":"10.2352/j.percept.imaging.2021.4.2.020502","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.2.020502","url":null,"abstract":"Abstract Targets that are well camouflaged under static conditions are often easily detected as soon as they start moving. We investigated and evaluated ways to design camouflage that dynamically adapts to the background and conceals the target while taking the variation in potential viewing directions into account. In a human observer experiment, recorded imagery was used to simulate moving (either walking or running) and static soldiers, equipped with different types of camouflage patterns and viewed from different directions. Participants were instructed to detect the soldier and to make a rapid response as soon as they have identified the soldier. Mean target detection rate was compared between soldiers in standard (Netherlands) Woodland uniform, in static camouflage (adapted to the local background) and in dynamically adapting camouflage. We investigated the effects of background type and variability on detection performance by varying the soldiers’ environment (such as bushland and urban). In general, detection was easier for dynamic soldiers compared to static soldiers, confirming that motion breaks camouflage. Interestingly, we show that motion onset and not motion itself is an important feature for capturing attention. Furthermore, camouflage performance of the static adaptive pattern was generally much better than for the standard Woodland pattern. Also, camouflage performance was found to be dependent on the background and the local structures around the soldier. Interestingly, our dynamic camouflage design outperformed a method which simply displays the ‘exact’ background on the camouflage suit (as if it was transparent), since it is better capable of taking the variability in viewing directions into account. By combining new adaptive camouflage technologies with dynamic adaptive camouflage designs such as the one presented here, it may become feasible to prevent detection of moving targets in the (near) future.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87482065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
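The experiment above compares mean target detection rates across camouflage types (Woodland, static adaptive, dynamic adaptive) and motion conditions. The aggregation step can be sketched as follows; the trial records and exact condition labels here are hypothetical, not the authors' data.

```python
from collections import defaultdict

# Hypothetical trial records: (camouflage type, motion condition, target detected?)
trials = [
    ("woodland", "static", True),  ("woodland", "static", False),
    ("woodland", "moving", True),  ("woodland", "moving", True),
    ("static_adaptive", "static", False), ("static_adaptive", "moving", True),
    ("dynamic_adaptive", "static", False), ("dynamic_adaptive", "moving", False),
]

def detection_rates(trials):
    """Mean detection rate for each (camouflage, motion) cell of the design."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for camo, motion, detected in trials:
        counts[(camo, motion)] += 1
        hits[(camo, motion)] += int(detected)
    return {cell: hits[cell] / counts[cell] for cell in counts}

rates = detection_rates(trials)
print(rates[("woodland", "static")], rates[("woodland", "moving")])
```

In the study's actual analysis these per-cell rates would additionally be broken down by background type (bushland vs. urban) and averaged over observers.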
Color Conversion in Deep Autoencoders
Pub Date : 2021-01-01 DOI: 10.2352/J.PERCEPT.IMAGING.2021.4.2.020401
A. Akbarinia, Raquel Gil Rodríguez
Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can show greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and subsequent increase in the allocation of attentional resources to peripheral locations, while enhancements in face processing abilities are thought to be driven by experience with American sign language and not necessarily hearing loss. This combined with the fact that face processing abilities typically decline with eccentricity suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, the authors examined whether deaf individuals’ enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. Their results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and further extend them by showing that these enhancements also occur in the periphery for more complex stimuli.
Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can exhibit greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and a subsequent increase in the allocation of attentional resources to peripheral locations, whereas enhancements in face processing abilities are thought to be driven by experience with American Sign Language and not necessarily by hearing loss. Combined with the fact that face processing abilities typically decline with eccentricity, this suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, the authors examined whether deaf individuals' enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. These results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and extend them by showing that these enhancements also occur in the periphery for more complex stimuli.
{"title":"Color Conversion in Deep Autoencoders","authors":"A. Akbarinia, Raquel Gil Rodríguez","doi":"10.2352/J.PERCEPT.IMAGING.2021.4.2.020401","DOIUrl":"https://doi.org/10.2352/J.PERCEPT.IMAGING.2021.4.2.020401","url":null,"abstract":"Studies of compensatory changes in visual functions in response to auditory loss have shown that enhancements tend to be restricted to the processing of specific visual features, such as motion in the periphery. Previous studies have also shown that deaf individuals can show greater face processing abilities in the central visual field. Enhancements in the processing of peripheral stimuli are thought to arise from a lack of auditory input and subsequent increase in the allocation of attentional resources to peripheral locations, while enhancements in face processing abilities are thought to be driven by experience with American sign language and not necessarily hearing loss. This combined with the fact that face processing abilities typically decline with eccentricity suggests that face processing enhancements may not extend to the periphery for deaf individuals. Using a face matching task, the authors examined whether deaf individuals’ enhanced ability to discriminate between faces extends to the peripheral visual field. Deaf participants were more accurate than hearing participants in discriminating faces presented both centrally and in the periphery. Their results support earlier findings that deaf individuals possess enhanced face discrimination abilities in the central visual field and further extend them by showing that these enhancements also occur in the periphery for more complex stimuli.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68835375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
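Note that the abstract shown with this entry appears to describe a different study; only the title indicates the topic of color conversion in deep autoencoders. As a purely illustrative sketch of the basic architecture that title refers to — an encoder–decoder with a low-dimensional bottleneck trained to reconstruct color triplets — here is a tiny linear autoencoder in NumPy (the paper itself presumably uses deep networks and specific color spaces, which are not specified here).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((500, 3))  # random RGB triplets in [0, 1]

# A 3 -> 2 -> 3 linear autoencoder: encode RGB into a 2-D bottleneck,
# then decode back to RGB, trained by plain gradient descent on MSE.
d, h = 3, 2
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)

def forward(X):
    Z = X @ W1 + b1        # encoder: bottleneck code
    return Z, Z @ W2 + b2  # decoder: reconstructed color

lr = 0.1
for step in range(5000):
    Z, Xhat = forward(X)
    err = Xhat - X                       # dL/dXhat (up to a constant)
    gW2 = Z.T @ err / len(X); gb2 = err.mean(0)
    gZ = err @ W2.T                      # backprop through decoder
    gW1 = X.T @ gZ / len(X); gb1 = gZ.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, Xhat = forward(X)
mse = float(((Xhat - X) ** 2).mean())
print(round(mse, 4))
```

With a 2-unit bottleneck the reconstruction cannot be perfect: the residual error approaches the variance along the least principal component of the color data, which is the usual behavior of linear autoencoders.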