
Journal of the Audio Engineering Society: Latest Publications

The Dynamic Grid: Time-Varying Parameters for Musical Instrument Simulations Based on Finite-Difference Time-Domain Schemes
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0043
S. Willemsen, S. Bilbao, M. Ducceschi, S. Serafin
Several well-established approaches to physical modeling synthesis for musical instruments exist. Finite-difference time-domain methods are known for their generality and flexibility in terms of the systems one can model but are less flexible with regard to smooth parameter variations due to their reliance on a static grid. This paper presents the dynamic grid, a method to smoothly change grid configurations of finite-difference time-domain schemes based on sub-audio-rate time variation of parameters. This allows for extensions of the behavior of physical models beyond the physically possible, broadening the range of expressive possibilities for the musician. The method is applied to the 1D wave equation, the stiff string, and 2D systems, including the 2D wave equation and thin plate. Results show that the method does not introduce noticeable artefacts when changing between grid configurations for systems including loss.
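As background for why a static grid limits smooth parameter changes, the sketch below implements a conventional static-grid FDTD scheme for the 1D wave equation: the grid spacing, and therefore the number of grid points, is fixed by the wave speed through the stability (CFL) condition. This is only a baseline illustration under assumed values (sample rate, wave speed, domain length); it is not the authors' dynamic-grid algorithm, which is what relaxes this constraint.

```python
# Minimal static-grid FDTD sketch for the 1D wave equation (NOT the paper's
# dynamic-grid method). Parameter values are assumptions for illustration.
import numpy as np

fs = 44100.0                 # audio sample rate [Hz]
k = 1.0 / fs                 # time step [s]
c = 300.0                    # wave speed [m/s] (assumed)
L = 1.0                      # domain length [m]

h = c * k                    # smallest stable spacing: lambda = c*k/h <= 1 (CFL)
N = int(np.floor(L / h))     # number of grid intervals is fixed by c -- the "static grid"
h = L / N
lam2 = (c * k / h) ** 2

u_prev = np.zeros(N + 1)     # state at time step n-1
u = np.zeros(N + 1)          # state at time step n
u[N // 2] = 1.0              # crude initial excitation

def step(u, u_prev):
    """One explicit leapfrog update with fixed (Dirichlet) boundaries."""
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    return u_next, u

for n in range(1000):
    u, u_prev = step(u, u_prev)
```

Changing c at run time changes the stable grid spacing and hence N, which is exactly the situation the dynamic grid is designed to handle smoothly.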
Citations: 1
Interaural Time Difference Prediction Using Anthropometric Interaural Distance
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0038
Jaan Johansson, A. Mäkivirta, Matti Malinen, Ville Saari
This paper studies the feasibility of predicting the interaural time difference (ITD) in azimuth and elevation once the personal anthropometric interaural distance is known, proposing an enhancement for spherical head ITD models to increase their accuracy. The method and enhancement are developed using data in a Head-Related Impulse Response (HRIR) data set comprising photogrammetrically obtained personal 3D geometries for 170 persons and then evaluated using three acoustically measured HRIR data sets containing 119 persons in total. The directions include 360° in azimuth and –15° to 60° in elevation. The prediction error for each data set is described, the proportion of persons under a given error in all studied directions is shown, and the directions in which large errors occur are analyzed. The enhanced spherical head model can predict the ITD such that the first and 99th percentile levels of the ITD prediction error for all persons and in all directions remains below 122 μs. The anthropometric interaural distance could potentially be measured directly on a person, enabling personalized ITD without measuring the HRIR. The enhanced model can personalize ITD in binaural rendering for headphone reproduction in games and immersive audio applications.
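For context on the kind of spherical-head baseline being enhanced, the sketch below evaluates the classic Woodworth-style rigid-sphere formula, ITD(θ) = (r/c)(θ + sin θ), driven only by an interaural distance. The head radius, speed of sound, and the front/back folding are simplifying assumptions for illustration; this is not the enhanced model proposed in the paper.

```python
# Woodworth-style spherical-head ITD sketch (baseline illustration only;
# NOT the enhanced model from the paper). Values are assumptions.
import numpy as np

def itd_spherical(azimuth_deg, interaural_distance_m=0.18, c=343.0):
    """Approximate ITD in seconds for a far-field source at elevation 0."""
    r = interaural_distance_m / 2.0       # effective head radius
    theta = np.radians(azimuth_deg)
    theta = np.arcsin(np.sin(theta))      # fold rear azimuths onto the front
    return (r / c) * (theta + np.sin(theta))

print(itd_spherical(90.0))  # roughly 0.67 ms for an 18-cm interaural distance
```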
Citations: 1
Antialiasing for Simplified Nonlinear Volterra Models
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0033
C. Bennett, Stefan Hopman
Citations: 0
Assessor Selection Process for Perceptual Quality Evaluation of 360 Audiovisual Content
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0037
R. F. Fela, N. Zacharov, Søren Forchhammer
Citations: 1
The Fast Local Sparsity Method: A Low-Cost Combination of Time-Frequency Representations Based on the Hoyer Sparsity
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0036
M. D. V. M. da Costa, L. Biscainho
Citations: 0
Nyquist Band Transform: An Order-Preserving Transform for Bandlimited Discretization
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0044
Champ C. Darabundit, J. Abel, D. Berners
Citations: 0
A Comparative Study of Music Mastered by Human Engineers and Automated Services
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0050
Mitchell Elliott, S. Chon
Citations: 0
Conditioned Source Separation by Attentively Aggregating Frequency Transformations With Self-Conditioning
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0030
Woosung Choi, Yeong-Seok Jeong, Jinsung Kim, Jaehwa Chung, Soonyoung Jung, J. Reiss
Label-conditioned source separation extracts the target source, specified by an input symbol, from an input mixture track. A recently proposed label-conditioned source separation model called Latent Source Attentive Frequency Transformation (LaSAFT)–Gated Point-Wise Convolutional Modulation (GPoCM)–Net introduced a block for latent source analysis called LaSAFT. Employing LaSAFT blocks, it established state-of-the-art performance on several tasks of the MUSDB18 benchmark. This paper enhances the LaSAFT block by exploiting a self-conditioning method. Whereas the existing method only cares about the symbolic relationships between the target source symbol and latent sources, ignoring audio content, the new approach also considers audio content. The enhanced block computes the attention mask conditioning on the label and the input audio feature map. Here, it is shown that the conditioned U-Net employing the enhanced LaSAFT blocks outperforms the previous model. It is also shown that the present model performs the audio-query–based separation with a slight modification.
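To make the self-conditioning idea concrete, the toy sketch below attends over a set of latent sources using both the target-source label embedding and a summary of the audio feature map, rather than the label alone. All shapes, names, and the mixing scheme are hypothetical illustrations of the concept; this is not the LaSAFT/GPoCM-Net architecture itself.

```python
# Toy label- and content-conditioned attention over latent sources
# (conceptual illustration only; not the LaSAFT block).
import numpy as np

rng = np.random.default_rng(0)
n_latent, d = 6, 32                              # latent sources, embedding size
latent_keys = rng.normal(size=(n_latent, d))     # learned in a real model

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conditioned_attention(label_emb, audio_feat):
    """Attention mask over latent sources, conditioned on label AND audio.

    label_emb : (d,)   embedding of the target-source symbol (e.g., 'vocals')
    audio_feat: (T, d) per-frame feature map from the encoder
    """
    q = label_emb + audio_feat.mean(axis=0)      # add audio content to the query
    scores = latent_keys @ q / np.sqrt(d)        # (n_latent,)
    return softmax(scores)

mask = conditioned_attention(rng.normal(size=d), rng.normal(size=(100, d)))
print(mask.round(3), mask.sum())                 # weights over latent sources, sum to 1
```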
Citations: 0
Deep Audio Effects for Snare Drum Recording Transformations
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0041
M. Cheshire, Jake Drysdale, Sean Enderby, Maciej Tomczak, Jason Hockman
The ability to perceptually modify drum recording parameters in a post-recording process would be of great benefit to engineers limited by time or equipment. In this work, a data-driven approach to post-recording modification of the dampening and microphone positioning parameters commonly associated with snare drum capture is proposed. The system consists of a deep encoder that analyzes audio input and predicts optimal parameters of one or more third-party audio effects, which are then used to process the audio and produce the desired transformed output audio. Furthermore, two novel audio effects are specifically developed to take advantage of the multiple parameter learning abilities of the system. Perceptual quality of transformations is assessed through a subjective listening test, and an objective evaluation is used to measure system performance. Results demonstrate a capacity to emulate snare dampening; however, attempts were not successful for emulating microphone position changes.
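The abstract describes a two-stage pipeline: an analysis network maps the input recording to effect parameters, and separate effects then process the audio with those parameters. The sketch below shows that control flow with deliberately crude stand-ins (a spectral-centroid heuristic in place of the deep encoder, a one-pole low-pass in place of the paper's audio effects); it is not the system evaluated in the paper.

```python
# Stand-in "encoder predicts effect parameters, effect processes audio" pipeline.
# Both stages are placeholders, not the paper's deep encoder or effects.
import numpy as np

def placeholder_encoder(x, fs):
    """Map input audio to one effect parameter via the spectral centroid."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-12)
    return {"cutoff_hz": float(np.clip(centroid, 200.0, 8000.0))}

def one_pole_lowpass(x, fs, cutoff_hz):
    """Placeholder 'third-party effect' driven by the predicted parameter."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = (1.0 - a) * x[n] + a * y[n - 1]
    return y

fs = 44100
snare = np.random.randn(fs // 2)                 # placeholder for a snare recording
params = placeholder_encoder(snare, fs)
transformed = one_pole_lowpass(snare, fs, **params)
```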
Citations: 0
Influence of Changes in Audio Spatialization on Immersion in Audiovisual Experiences
IF 1.4 | Zone 4, Engineering & Technology | Q1, Arts and Humanities | Pub Date: 2022-11-02 | DOI: 10.17743/jaes.2022.0034
Sarvesh Agrawal, S. Bech, K. De Moor, Søren Forchhammer
Understanding the influence of technical system parameters on audiovisual experiences is important for technologists to optimize experiences. The focus in this study was on the influence of changes in audio spatialization (varying the loudspeaker configuration for audio rendering from 2.1 to 5.1 to 7.1.4) on the experience of immersion. First, a magnitude estimation experiment was performed to perceptually evaluate envelopment for verifying the initial condition that there is a perceptual difference between the audio spatialization levels. It was found that envelopment increased from 2.1 to 5.1 reproduction, but there was no significant benefit of extending from 5.1 to 7.1.4. An absolute-rating experimental paradigm was used to assess immersion in four audiovisual experiences by 24 participants. Evident differences between immersion scores could not be established, signaling that a change in audio spatialization and subsequent change in envelopment does not guarantee a psychologically immersive experience.
Citations: 2