
Multisensory Research: Latest Publications

Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2022-04-05 | DOI: 10.1163/22134808-bja10074
William Chung, Michael Barnett-Cowan

Integration of incoming sensory signals from multiple modalities is central to the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform, accompanied by visual feedback from a virtual environment generated through a head-mounted display. Sensory conflict was induced by altering the speed and direction with which the visual scene updated relative to the observer's physical rotation. There were no differences in perceived timing of the rotation without vision, with congruent visual feedback, or when the visual motion updated more slowly. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate a potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research on cybersickness and the sensory conflict theory, in which a mismatch between visual and vestibular signals may increase the likelihood of sickness symptoms.
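The abstract does not spell out the analysis, but perceived timing in a temporal-order-judgement task is conventionally summarised by the point of subjective simultaneity (PSS), obtained by fitting a cumulative Gaussian to the response proportions across stimulus-onset asynchronies. A minimal Python sketch of that standard analysis on invented data (the SOAs, proportions, and function names are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical stimulus-onset asynchronies (ms); negative = tone presented before the rotation.
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
# Hypothetical proportion of "rotation first" responses at each SOA.
p_rotation_first = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95, 0.98])

def cumulative_gaussian(x, pss, sigma):
    """Psychometric function: P("rotation first") as a function of SOA."""
    return norm.cdf(x, loc=pss, scale=sigma)

# The fitted PSS is the SOA at which both orders are reported equally often;
# a PSS further from zero means the rotation's perceived timing is shifted more.
(pss, sigma), _ = curve_fit(cumulative_gaussian, soa, p_rotation_first, p0=[0.0, 100.0])
print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")
```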

{"title":"Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality.","authors":"William Chung, Michael Barnett-Cowan","doi":"10.1163/22134808-bja10074","DOIUrl":"10.1163/22134808-bja10074","url":null,"abstract":"<p><p>Integration of incoming sensory signals from multiple modalities is central in the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between the vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform accompanied by visual feedback using a virtual environment generated through a head-mounted display. Sensory conflict was induced by altering the speed and direction by which the movement of the visual scene updated relative to the observer's physical rotation. There were no differences in perceived timing of the rotation without vision, with congruent visual feedback and when the speed of the updating of the visual motion was slower. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate the potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research regarding cybersickness and the sensory conflict theory, where a mismatch between the visual and vestibular signals may lead to a greater likelihood for the occurrence of sickness symptoms.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"1 1","pages":"1-23"},"PeriodicalIF":1.6,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64581115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Influence of Tactile Flow on Visual Heading Perception.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2022-03-09 | DOI: 10.1163/22134808-bja10071
Lisa Rosenblum, Elisa Grewe, Jan Churan, Frank Bremmer

The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among other signals, self-motion induces distinct optic flow patterns on the retina, vestibular signals and tactile flow, which contribute to determining traveled distance (path integration) or movement direction (heading). While the processing of combined visual-vestibular information is the subject of a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or tactile stimuli and subjects had to report their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition). Here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias, i.e., heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived headings as a function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.
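A centripetal bias of the kind reported here is often quantified by regressing perceived heading on presented heading, where a slope below 1 indicates compression toward straight ahead. A minimal sketch with made-up values (not the authors' analysis):

```python
import numpy as np

# Hypothetical presented headings (deg; 0 = straight ahead) and mean perceived headings.
presented = np.array([-40, -30, -20, -10, 0, 10, 20, 30, 40], dtype=float)
perceived = np.array([-29, -23, -15, -8, 0, 7, 14, 22, 28], dtype=float)

# Linear fit: perceived = slope * presented + intercept.
slope, intercept = np.polyfit(presented, perceived, 1)
print(f"slope = {slope:.2f} (values < 1 indicate centripetal compression), "
      f"intercept = {intercept:.1f} deg")
```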

{"title":"Influence of Tactile Flow on Visual Heading Perception.","authors":"Lisa Rosenblum,&nbsp;Elisa Grewe,&nbsp;Jan Churan,&nbsp;Frank Bremmer","doi":"10.1163/22134808-bja10071","DOIUrl":"https://doi.org/10.1163/22134808-bja10071","url":null,"abstract":"<p><p>The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among others, self-motion induces distinct optic flow patterns on the retina, vestibular signals and tactile flow, which contribute to determine traveled distance (path integration) or movement direction (heading). While the processing of combined visual-vestibular information is subject to a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or tactile stimuli and subjects had to report their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition). Here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias, i.e., heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived headings as function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 4","pages":"291-308"},"PeriodicalIF":1.6,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9378724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Book Review.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2022-03-09 | DOI: 10.1163/22134808-bja10070
Adam J Reeves
The institution of citizenship has undergone far-reaching factual and normative changes. In two recent studies, Christian Joppke and Ayelet Shachar address complex and pressing problems underlying modern citizenship theory. Joppke and Shachar begin from different premises regarding immigration and citizenship. Joppke takes for granted the existing regime of birthright citizenship; his main focus is the relationship between immigration and citizenship, and the interrelation between the dimensions of citizenship. Shachar finds the option of becoming a citizen deficient, and underscores the need to rethink the whole concept of birthright citizenship and the role it plays in perpetuating global injustice. Joppke is more optimistic: he celebrates the triumph of liberalism. Shachar is pessimistic about the citizenship discourse—which, even if more liberal than in the past, is still flawed—yet optimistic about the potential of her ideas to bring about a better future. This review briefly examines each book and discusses the contribution of each to the contemporary, evolving debates on citizenship.
{"title":"Book Review.","authors":"Adam J Reeves","doi":"10.1163/22134808-bja10070","DOIUrl":"https://doi.org/10.1163/22134808-bja10070","url":null,"abstract":"The institution of citizenship has undergone far-reaching factual and normative changes. In two recent studies, Christian Joppke and Ayelet Shachar address complex and pressing problems underlying modern citizenship theory. Joppke and Shachar begin from different premises regarding immigration and citizenship. Joppke takes for granted the existing regime of birthright citizenship; his main focus is the relationship between immigration and citizenship, and the interrelation between the dimensions of citizenship. Shachar finds the option of becoming a citizen deficient, and underscores the need to rethink the whole concept of birthright citizenship and the role it plays in perpetuating global injustice. Joppke is more optimistic: he celebrates the triumph of liberalism. Shachar is pessimistic about the citizenship discourse—which, even if more liberal than in the past, is still flawed—yet optimistic about the potential of her ideas to bring about a better future. This review briefly examines each book and discusses the contribution of each to the contemporary, evolving debates on citizenship.","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 3","pages":"289-290"},"PeriodicalIF":1.6,"publicationDate":"2022-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9209582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2022-02-21 | DOI: 10.31234/osf.io/knzbv
Rosanne Roosmarijn Maria Tuip, W.M. van der Ham, Jeannette A. M. Lorteije, Filip Van Opstal
Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to make an accurate decision. Only recently have studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings, and/or two sounds presented to the right and left ear, based on contrast and loudness, respectively. We varied the evidence, i.e., the contrast of the gratings and the amplitude of the sounds, over time. Results showed a significant increase in accuracy on multisensory trials compared to unisensory trials, indicating that discrimination between two sources improves when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. The weighting of unisensory information during audiovisual decision-making changed dynamically over time: a first epoch was characterized by both visual and auditory weighting, vision dominated during the second epoch, and the third epoch finalized the weighting profile with auditory dominance. Our results suggest that, during our task, multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.
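One common way to estimate how each modality's evidence in each time bin contributes to the final choice is a logistic regression of the trial-by-trial decision on the binned visual and auditory evidence; the fitted coefficients form a temporal weighting profile. A minimal sketch of that approach on simulated data (the three-epoch structure and the weights are assumptions for illustration, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_bins = 2000, 3           # three temporal epochs per modality

# Simulated signed evidence per epoch: contrast difference (visual), loudness difference (auditory).
vis = rng.normal(size=(n_trials, n_bins))
aud = rng.normal(size=(n_trials, n_bins))

# Simulated observer: early evidence weighted most, vision dominating the middle epoch.
true_w = np.array([1.0, 0.8, 0.3,    # visual weights per epoch
                   1.0, 0.4, 0.6])   # auditory weights per epoch
X = np.hstack([vis, aud])
choice = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_w)))

# Recover the temporal weighting profile from the choices.
weights = LogisticRegression().fit(X, choice).coef_.ravel()
print("estimated visual weights per epoch:  ", np.round(weights[:n_bins], 2))
print("estimated auditory weights per epoch:", np.round(weights[n_bins:], 2))
```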
{"title":"Dynamic Weighting of Time-Varying Visual and Auditory Evidence During Multisensory Decision Making.","authors":"Rosanne Roosmarijn Maria Tuip, W.M. van der Ham, Jeannette A. M. Lorteije, Filip Van Opstal","doi":"10.31234/osf.io/knzbv","DOIUrl":"https://doi.org/10.31234/osf.io/knzbv","url":null,"abstract":"Perceptual decision-making in a dynamic environment requires two integration processes: integration of sensory evidence from multiple modalities to form a coherent representation of the environment, and integration of evidence across time to accurately make a decision. Only recently studies started to unravel how evidence from two modalities is accumulated across time to form a perceptual decision. One important question is whether information from individual senses contributes equally to multisensory decisions. We designed a new psychophysical task that measures how visual and auditory evidence is weighted across time. Participants were asked to discriminate between two visual gratings, and/or two sounds presented to the right and left ear based on respectively contrast and loudness. We varied the evidence, i.e., the contrast of the gratings and amplitude of the sound, over time. Results showed a significant increase in performance accuracy on multisensory trials compared to unisensory trials, indicating that discriminating between two sources is improved when multisensory information is available. Furthermore, we found that early evidence contributed most to sensory decisions. Weighting of unisensory information during audiovisual decision-making dynamically changed over time. A first epoch was characterized by both visual and auditory weighting, during the second epoch vision dominated and the third epoch finalized the weighting profile with auditory dominance. Our results suggest that during our task multisensory improvement is generated by a mechanism that requires cross-modal interactions but also dynamically evokes dominance switching.","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"36 1 1","pages":"31-56"},"PeriodicalIF":1.6,"publicationDate":"2022-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46738287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Crossmodal Correspondence Between Auditory Timbre and Visual Shape.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2021-12-30 | DOI: 10.1163/22134808-bja10067
Daniel Gurman, Colin R McCormick, Raymond M Klein

Crossmodal correspondences are defined as associations between crossmodal stimuli based on seemingly irrelevant stimulus features (i.e., bright shapes being associated with high-pitched sounds). There is a large body of research describing auditory crossmodal correspondences involving pitch and volume, but not so much involving auditory timbre, the character or quality of a sound. Adeli and colleagues (2014, Front. Hum. Neurosci. 8, 352) found evidence of correspondences between timbre and visual shape. The present study aimed to replicate Adeli et al.'s findings, as well as to identify novel timbre-shape correspondences. Participants were tested using two computerized tasks: an association task, which involved matching shapes to presented sounds based on best perceived fit, and a semantic task, which involved rating shapes and sounds on a number of scales. The analysis of association matches revealed nonrandom selection, with certain stimulus pairs being selected at a much higher frequency. The harsh/jagged and smooth/soft correspondences observed by Adeli et al. were found to be associated with a high level of consistency. Additionally, the high matching frequency of sounds with unstudied timbre characteristics suggests the existence of novel correspondences. Finally, the ability of the semantic task to supplement existing crossmodal correspondence assessments was demonstrated. Convergent analysis of the semantic and association data demonstrated that the two datasets are significantly correlated (-0.36), meaning that stimulus pairs associated with a high level of consensus were more likely to hold similar perceived meaning. The results of this study are discussed in both theoretical and applied contexts.
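A "nonrandom selection" claim for an association task of this kind is typically tested with a chi-square goodness-of-fit test of the observed choice counts against a uniform (chance) distribution. A minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical counts: how often a harsh timbre was matched to each of four shapes
# by 80 participants; chance would be 20 per shape.
observed = np.array([44, 18, 10, 8])
expected = np.full(4, observed.sum() / 4)

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p:.4g}  (a small p indicates nonrandom matching)")
```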

{"title":"Crossmodal Correspondence Between Auditory Timbre and Visual Shape.","authors":"Daniel Gurman,&nbsp;Colin R McCormick,&nbsp;Raymond M Klein","doi":"10.1163/22134808-bja10067","DOIUrl":"https://doi.org/10.1163/22134808-bja10067","url":null,"abstract":"<p><p>Crossmodal correspondences are defined as associations between crossmodal stimuli based on seemingly irrelevant stimulus features (i.e., bright shapes being associated with high-pitched sounds). There is a large body of research describing auditory crossmodal correspondences involving pitch and volume, but not so much involving auditory timbre, the character or quality of a sound. Adeli and colleagues (2014, Front. Hum. Neurosci. 8, 352) found evidence of correspondences between timbre and visual shape. The present study aimed to replicate Adeli et al.'s findings, as well as identify novel timbre-shape correspondences. Participants were tested using two computerized tasks: an association task, which involved matching shapes to presented sounds based on best perceived fit, and a semantic task, which involved rating shapes and sounds on a number of scales. The analysis of association matches reveals nonrandom selection, with certain stimulus pairs being selected at a much higher frequency. The harsh/jagged and smooth/soft correspondences observed by Adeli et al. were found to be associated with a high level of consistency. Additionally, high matching frequency of sounds with unstudied timbre characteristics suggests the existence of novel correspondences. Finally, the ability of the semantic task to supplement existing crossmodal correspondence assessments was demonstrated. Convergent analysis of the semantic and association data demonstrates that the two datasets are significantly correlated (-0.36) meaning stimulus pairs associated with a high level of consensus were more likely to hold similar perceived meaning. The results of this study are discussed in both theoretical and applied contexts.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 3","pages":"221-241"},"PeriodicalIF":1.6,"publicationDate":"2021-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39710991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
The Effects of Mandarin Chinese Lexical Tones in Sound-Shape and Sound-Size Correspondences.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2021-12-30 | DOI: 10.1163/22134808-bja10068
Yen-Han Chang, Mingxue Zhao, Yi-Chuan Chen, Pi-Chun Huang

Crossmodal correspondences refer to mappings between specific feature domains in different sensory modalities. We investigated how vowels and lexical tones drive sound-shape (rounded or angular) and sound-size (large or small) mappings among native Mandarin Chinese speakers. We used three vowels (/i/, /u/, and /a/), and each vowel was articulated in four lexical tones. In the sound-shape matching, the tendency to match the rounded shape decreased in the following order: /u/, /i/, and /a/. Tone 2 was more likely to be matched to the rounded pattern, whereas Tone 4 was more likely to be matched to the angular pattern. In the sound-size matching, /a/ was matched to the larger object more often than /u/ and /i/, and Tone 2 and Tone 4 corresponded to the large-small contrast. The results demonstrated that both vowels and tones play prominent roles in crossmodal correspondences, and that sound-shape and sound-size mappings are heterogeneous phenomena.
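Each vowel-tone combination's matching tendency reduces to a proportion of, say, rounded-shape responses, which can be compared against chance with a binomial test. A minimal illustration with invented counts (not the study's data):

```python
from scipy.stats import binomtest

# Hypothetical: 42 of 50 listeners matched a Tone-2 /u/ syllable to the rounded shape.
k, n = 42, 50
result = binomtest(k, n, p=0.5, alternative="two-sided")
print(f"proportion rounded = {k / n:.2f}, p = {result.pvalue:.4f}  (chance = 0.50)")
```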

{"title":"The Effects of Mandarin Chinese Lexical Tones in Sound-Shape and Sound-Size Correspondences.","authors":"Yen-Han Chang,&nbsp;Mingxue Zhao,&nbsp;Yi-Chuan Chen,&nbsp;Pi-Chun Huang","doi":"10.1163/22134808-bja10068","DOIUrl":"https://doi.org/10.1163/22134808-bja10068","url":null,"abstract":"<p><p>Crossmodal correspondences refer to when specific domains of features in different sensory modalities are mapped. We investigated how vowels and lexical tones drive sound-shape (rounded or angular) and sound-size (large or small) mappings among native Mandarin Chinese speakers. We used three vowels (/i/, /u/, and /a/), and each vowel was articulated in four lexical tones. In the sound-shape matching, the tendency to match the rounded shape was decreased in the following order: /u/, /i/, and /a/. Tone 2 was more likely to be matched to the rounded pattern, whereas Tone 4 was more likely to be matched to the angular pattern. In the sound-size matching, /a/ was matched to the larger object more than /u/ and /i/, and Tone 2 and Tone 4 correspond to the large-small contrast. The results demonstrated that both vowels and tones play prominent roles in crossmodal correspondences, and sound-shape and sound-size mappings are heterogeneous phenomena.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"35 3","pages":"243-257"},"PeriodicalIF":1.6,"publicationDate":"2021-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39725688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Reducing Cybersickness in 360-Degree Virtual Reality.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2021-12-16 | DOI: 10.1163/22134808-bja10066
Iqra Arshad, Paulo De Mello, Martin Ender, Jason D McEwen, Elisa R Ferré

Despite the technological advancements in Virtual Reality (VR), users are constantly combating feelings of nausea and disorientation, the so-called cybersickness. Cybersickness symptoms cause severe discomfort and hinder the immersive VR experience. Here we investigated cybersickness in 360-degree head-mounted display VR. In traditional 360-degree VR experiences, translational movement in the real world is not reflected in the virtual world, and therefore self-motion information is not corroborated by matching visual and vestibular cues, which may trigger symptoms of cybersickness. We evaluated whether a new Artificial Intelligence (AI) software designed to supplement the 360-degree VR experience with artificial six-degrees-of-freedom motion may reduce cybersickness. Explicit (simulator sickness questionnaire and Fast Motion Sickness (FMS) rating) and implicit (heart rate) measurements were used to evaluate cybersickness symptoms during and after 360-degree VR exposure. Simulator sickness scores showed a significant reduction in feelings of nausea during the AI-supplemented six-degrees-of-freedom motion VR compared to traditional 360-degree VR. However, six-degrees-of-freedom motion VR did not reduce oculomotor or disorientation measures of sickness. No changes were observed in FMS and heart rate measures. Improving the congruency between visual and vestibular cues in 360-degree VR, as provided by the AI-supplemented six-degrees-of-freedom motion system considered, is essential for a more engaging, immersive and safe VR experience, which is critical for educational, cultural and entertainment applications.
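The central contrast here, nausea ratings in traditional 360-degree VR versus the AI-supplemented six-degrees-of-freedom condition, is a within-subject comparison that can be tested with a paired test such as the Wilcoxon signed-rank test. A minimal sketch on made-up SSQ nausea sub-scores (not the study's data or its exact statistics):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical SSQ nausea sub-scores per participant in the two viewing conditions.
traditional_360 = np.array([38.2, 57.2, 28.6, 47.7, 66.8, 19.1, 38.2, 47.7])
six_dof_motion  = np.array([19.1, 38.2, 19.1, 28.6, 47.7,  9.5, 28.6, 38.2])

# Paired nonparametric comparison of the two conditions.
stat, p = wilcoxon(traditional_360, six_dof_motion)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f} "
      f"(lower nausea in the 6-DoF condition in this toy example)")
```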

{"title":"Reducing Cybersickness in 360-Degree Virtual Reality.","authors":"Iqra Arshad, Paulo De Mello, Martin Ender, Jason D McEwen, Elisa R Ferré","doi":"10.1163/22134808-bja10066","DOIUrl":"10.1163/22134808-bja10066","url":null,"abstract":"<p><p>Despite the technological advancements in Virtual Reality (VR), users are constantly combating feelings of nausea and disorientation, the so-called cybersickness. Cybersickness symptoms cause severe discomfort and hinder the immersive VR experience. Here we investigated cybersickness in 360-degree head-mounted display VR. In traditional 360-degree VR experiences, translational movement in the real world is not reflected in the virtual world, and therefore self-motion information is not corroborated by matching visual and vestibular cues, which may trigger symptoms of cybersickness. We evaluated whether a new Artificial Intelligence (AI) software designed to supplement the 360-degree VR experience with artificial six-degrees-of-freedom motion may reduce cybersickness. Explicit (simulator sickness questionnaire and Fast Motion Sickness (FMS) rating) and implicit (heart rate) measurements were used to evaluate cybersickness symptoms during and after 360-degree VR exposure. Simulator sickness scores showed a significant reduction in feelings of nausea during the AI-supplemented six-degrees-of-freedom motion VR compared to traditional 360-degree VR. However, six-degrees-of-freedom motion VR did not reduce oculomotor or disorientation measures of sickness. No changes were observed in FMS and heart rate measures. Improving the congruency between visual and vestibular cues in 360-degree VR, as provided by the AI-supplemented six-degrees-of-freedom motion system considered, is essential for a more engaging, immersive and safe VR experience, which is critical for educational, cultural and entertainment applications.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-17"},"PeriodicalIF":1.6,"publicationDate":"2021-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39749047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Imagine Your Crossed Hands as Uncrossed: Visual Imagery Impacts the Crossed-Hands Deficit.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2021-10-22 | DOI: 10.1163/22134808-bja10065
Lisa Lorentz, Kaian Unwalla, David I Shore

Successful interaction with our environment requires accurate tactile localization. Although we seem to localize tactile stimuli effortlessly, the processes underlying this ability are complex. This is evidenced by the crossed-hands deficit, in which tactile localization performance suffers when the hands are crossed. The deficit results from the conflict between an internal reference frame, based in somatotopic coordinates, and an external reference frame, based in external spatial coordinates. Previous evidence in favour of the integration model employed manipulations of the external reference frame (e.g., blindfolding participants), which reduced the deficit by reducing conflict between the two reference frames. The present study extends this finding by asking blindfolded participants to visually imagine their crossed arms as uncrossed. This imagery manipulation further decreased the magnitude of the crossed-hands deficit by bringing information in the two reference frames into alignment. The imagery manipulation differentially affected males and females, which was consistent with the previously observed sex difference in this effect: females tend to show a larger crossed-hands deficit than males, and females were more impacted by the imagery manipulation. Results are discussed in terms of the integration model of the crossed-hands deficit.
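The crossed-hands deficit itself is commonly expressed per participant as the drop in tactile temporal-order-judgement performance from the uncrossed to the crossed posture, and the imagery effect as a change in that difference score. A minimal sketch of the difference score on invented accuracies (not the authors' data):

```python
import numpy as np

# Hypothetical proportion-correct TOJ performance per participant.
uncrossed = np.array([0.95, 0.92, 0.97, 0.90, 0.94, 0.96])
crossed   = np.array([0.78, 0.70, 0.88, 0.65, 0.81, 0.74])

deficit = uncrossed - crossed              # per-participant crossed-hands deficit
sem = deficit.std(ddof=1) / np.sqrt(len(deficit))
print(f"mean deficit = {deficit.mean():.3f} +/- {sem:.3f} SEM")
```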

{"title":"Imagine Your Crossed Hands as Uncrossed: Visual Imagery Impacts the Crossed-Hands Deficit.","authors":"Lisa Lorentz, Kaian Unwalla, David I Shore","doi":"10.1163/22134808-bja10065","DOIUrl":"10.1163/22134808-bja10065","url":null,"abstract":"<p><p>Successful interaction with our environment requires accurate tactile localization. Although we seem to localize tactile stimuli effortlessly, the processes underlying this ability are complex. This is evidenced by the crossed-hands deficit, in which tactile localization performance suffers when the hands are crossed. The deficit results from the conflict between an internal reference frame, based in somatotopic coordinates, and an external reference frame, based in external spatial coordinates. Previous evidence in favour of the integration model employed manipulations to the external reference frame (e.g., blindfolding participants), which reduced the deficit by reducing conflict between the two reference frames. The present study extends this finding by asking blindfolded participants to visually imagine their crossed arms as uncrossed. This imagery manipulation further decreased the magnitude of the crossed-hands deficit by bringing information in the two reference frames into alignment. This imagery manipulation differentially affected males and females, which was consistent with the previously observed sex difference in this effect: females tend to show a larger crossed-hands deficit than males and females were more impacted by the imagery manipulation. Results are discussed in terms of the integration model of the crossed-hands deficit.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-29"},"PeriodicalIF":1.6,"publicationDate":"2021-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39554652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Temporal Alignment but not Complexity of Audiovisual Stimuli Influences Crossmodal Duration Percepts.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2021-10-08 | DOI: 10.1163/22134808-bja10062
Alexandra N Scurry, Daniela M Lemus, Fang Jiang

Reliable duration perception is an integral aspect of daily life that impacts everyday perception, motor coordination, and subjective passage of time. The Scalar Expectancy Theory (SET) is a common model that explains how an internal pacemaker, gated by an external stimulus-driven switch, accumulates pulses during sensory events and compares these accumulated pulses to a reference memory duration for subsequent duration estimation. Second-order mechanisms, such as multisensory integration (MSI) and attention, can influence this model and affect duration perception. For instance, diverting attention away from temporal features could delay the switch closure or temporarily open the accumulator, altering pulse accumulation and distorting duration perception. In crossmodal duration perception, auditory signals of unequal duration can induce perceptual compression and expansion of durations of visual stimuli, presumably via auditory influence on the visual clock. The current project aimed to investigate the role of temporal (stimulus alignment) and nontemporal (stimulus complexity) features on crossmodal, specifically auditory over visual, duration perception. While temporal alignment revealed a larger impact on the strength of crossmodal duration percepts compared to stimulus complexity, both features showcase auditory dominance in processing visual duration.
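The pacemaker-switch-accumulator architecture of SET described here can be made concrete with a toy simulation: a Poisson pacemaker emits pulses, a switch gates them into the accumulator, and a delayed switch closure (for example, when attention is diverted) reduces the accumulated count and hence the estimated duration. A minimal sketch under those assumptions (pulse rate and delay values are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulated_pulses(duration_ms, rate_hz=50.0, switch_delay_ms=0.0):
    """Pulses reaching the accumulator: a Poisson pacemaker gated by a delayed switch."""
    open_time_s = max(duration_ms - switch_delay_ms, 0.0) / 1000.0
    return rng.poisson(rate_hz * open_time_s)

# Reference memory: mean count for a fully attended 500-ms standard.
reference = np.mean([accumulated_pulses(500.0) for _ in range(500)])

# A delayed switch closure (diverted attention) loses early pulses,
# so the same physical duration is judged shorter relative to the reference.
attended   = accumulated_pulses(500.0, switch_delay_ms=0.0)
distracted = accumulated_pulses(500.0, switch_delay_ms=80.0)
print(f"reference = {reference:.1f} pulses, attended trial = {attended}, distracted trial = {distracted}")
```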

{"title":"Temporal Alignment but not Complexity of Audiovisual Stimuli Influences Crossmodal Duration Percepts.","authors":"Alexandra N Scurry, Daniela M Lemus, Fang Jiang","doi":"10.1163/22134808-bja10062","DOIUrl":"10.1163/22134808-bja10062","url":null,"abstract":"<p><p>Reliable duration perception is an integral aspect of daily life that impacts everyday perception, motor coordination, and subjective passage of time. The Scalar Expectancy Theory (SET) is a common model that explains how an internal pacemaker, gated by an external stimulus-driven switch, accumulates pulses during sensory events and compares these accumulated pulses to a reference memory duration for subsequent duration estimation. Second-order mechanisms, such as multisensory integration (MSI) and attention, can influence this model and affect duration perception. For instance, diverting attention away from temporal features could delay the switch closure or temporarily open the accumulator, altering pulse accumulation and distorting duration perception. In crossmodal duration perception, auditory signals of unequal duration can induce perceptual compression and expansion of durations of visual stimuli, presumably via auditory influence on the visual clock. The current project aimed to investigate the role of temporal (stimulus alignment) and nontemporal (stimulus complexity) features on crossmodal, specifically auditory over visual, duration perception. While temporal alignment revealed a larger impact on the strength of crossmodal duration percepts compared to stimulus complexity, both features showcase auditory dominance in processing visual duration.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-19"},"PeriodicalIF":1.6,"publicationDate":"2021-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39510866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Serial Dependence of Emotion Within and Between Stimulus Sensory Modalities.
IF 1.6 | Zone 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2021-09-29 | DOI: 10.1163/22134808-bja10064
Erik Van der Burg, Alexander Toet, Anne-Marie Brouwer, Jan B F Van Erp

How we perceive the world is not solely determined by what we sense at a given moment in time, but also by what we processed recently. Here we investigated whether such serial dependencies for emotional stimuli transfer from one modality to another. Participants were presented with a random sequence of emotional sounds and images and were instructed to rate the valence and arousal of each stimulus (Experiment 1). For both ratings, we conducted an intertrial analysis, based on whether the rating on the previous trial was low or high. We found a positive serial dependence for valence and arousal regardless of the stimulus modality on two consecutive trials. In Experiment 2, we examined whether passively perceiving a stimulus is sufficient to induce a serial dependence. Participants were instructed to rate the stimuli only on active trials and not on passive trials. The participants were informed that the active and passive trials were presented in alternating order, so that they were able to prepare for the task. We conducted an intertrial analysis on active trials, based on whether the rating on the previous passive trial (determined in Experiment 1) was low or high. For both ratings, we again observed positive serial dependencies regardless of the stimulus modality. We conclude that the emotional experience triggered by one stimulus affects the emotional experience for a subsequent stimulus regardless of their sensory modalities, that this occurs in a bottom-up fashion, and that this can be explained by residual activation in the emotional network in the brain.
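The intertrial analysis described here amounts to splitting trials by whether the previous trial's rating was low or high and comparing the current trial's mean rating between those bins; a higher mean after high-rated predecessors is a positive serial dependence. A minimal sketch on simulated ratings (not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 9-point valence ratings with a small pull toward the previous trial's rating.
n_trials = 1000
ratings = np.empty(n_trials)
ratings[0] = rng.integers(1, 10)
for t in range(1, n_trials):
    ratings[t] = np.clip(0.8 * rng.integers(1, 10) + 0.2 * ratings[t - 1], 1, 9)

# Split current ratings by whether the previous rating was below or above the median.
prev, curr = ratings[:-1], ratings[1:]
split = np.median(prev)
after_low  = curr[prev <  split].mean()
after_high = curr[prev >= split].mean()
print(f"mean rating after low previous = {after_low:.2f}, after high previous = {after_high:.2f} "
      f"(higher after high -> positive serial dependence)")
```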

{"title":"Serial Dependence of Emotion Within and Between Stimulus Sensory Modalities.","authors":"Erik Van der Burg, Alexander Toet, Anne-Marie Brouwer, Jan B F Van Erp","doi":"10.1163/22134808-bja10064","DOIUrl":"10.1163/22134808-bja10064","url":null,"abstract":"<p><p>How we perceive the world is not solely determined by what we sense at a given moment in time, but also by what we processed recently. Here we investigated whether such serial dependencies for emotional stimuli transfer from one modality to another. Participants were presented a random sequence of emotional sounds and images and instructed to rate the valence and arousal of each stimulus (Experiment 1). For both ratings, we conducted an intertrial analysis, based on whether the rating on the previous trial was low or high. We found a positive serial dependence for valence and arousal regardless of the stimulus modality on two consecutive trials. In Experiment 2, we examined whether passively perceiving a stimulus is sufficient to induce a serial dependence. In Experiment 2, participants were instructed to rate the stimuli only on active trials and not on passive trials. The participants were informed that the active and passive trials were presented in alternating order, so that they were able to prepare for the task. We conducted an intertrial analysis on active trials, based on whether the rating on the previous passive trial (determined in Experiment 1) was low or high. For both ratings, we again observed positive serial dependencies regardless of the stimulus modality. We conclude that the emotional experience triggered by one stimulus affects the emotional experience for a subsequent stimulus regardless of their sensory modalities, that this occurs in a bottom-up fashion, and that this can be explained by residual activation in the emotional network in the brain.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-22"},"PeriodicalIF":1.6,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39473356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0