
Final Program and Paper Summaries 1991 IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics

Spatial Sound Recording and Transmission Systems: Status, Problems, and Prospects
M. F. Davis
Contemporary analog and digital signal processing techniques have been refined to the point of being able to routinely convey individual audio channels with little or no perceptible loss of quality. The greatest disparity between original and reproduced soundfields usually involves their spatial characteristics. Improving the spatial fidelity of audio recording and transmission systems involves understanding the underlying localization mechanisms, then applying this understanding to evolve specific system requirements, subject to the constraints of real-world components and practices. Spatial audio systems are conveniently divided into three functional blocks: 1. soundfield pickup, via one or more microphones and/or electronically synthesized signals; 2. means for coding, transmission (or recording/playback), and decoding of the net source audio signals; and 3. soundfield reconstruction, via loudspeakers or headphones, and possible associated processing. The presentation environment exerts sufficient influence on system configuration to have spawned several classes of spatial audio systems, e.g. home, headphone, and theatre systems. Conventional home stereo is currently the most common spatial audio system in use. It purports to encode a horizontal continuum of space into a pair of audio channels, which are then conveyed via a discrete two-channel medium to a pair of loudspeakers. This system relies on the psychoacoustic phenomenon of phantom images to try to fill the space between the speakers. Related techniques, such as 'Sonic Holography' or Q-Sound, attempt to extend the range of horizontal space conveyed, in part by using interaural cross cancellation to extend the apparent reproduced image beyond the arc of the speakers. Microphone pickup arrangements for stereo recording vary widely, and are often a matter of strong individual preference on the part of recording producers. It is desirable for newly developed systems to retain this option.
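The interaural cross-cancellation idea mentioned above can be illustrated with a small frequency-domain sketch: invert the 2x2 matrix of speaker-to-ear transfer functions so that each ear receives only its intended signal. The head responses below are crude gain-plus-delay stand-ins chosen only to make the example self-contained; they are not the filters used by 'Sonic Holography', Q-Sound, or any system discussed in the summary.

```python
import numpy as np

fs = 48000                    # sample rate in Hz (assumed)
n_fft = 1024
freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)

def path(gain, delay_s):
    # frequency response of a toy acoustic path: flat gain plus pure delay
    return gain * np.exp(-2j * np.pi * freqs * delay_s)

H_same  = path(1.00, 0.0)      # loudspeaker to same-side ear
H_cross = path(0.35, 0.00025)  # loudspeaker to opposite ear: weaker and ~0.25 ms later

# Per-bin inverse of the symmetric 2x2 transfer matrix [[H_same, H_cross], [H_cross, H_same]].
det = H_same**2 - H_cross**2
det = np.where(np.abs(det) < 1e-3, 1e-3, det)   # crude regularization at ill-conditioned bins
C_same  =  H_same  / det
C_cross = -H_cross / det

# Given desired ear spectra E_L and E_R, the loudspeaker feeds are
#   S_L = C_same * E_L + C_cross * E_R
#   S_R = C_cross * E_L + C_same * E_R
# so the cross paths cancel at the ears and the image can appear outside the speaker arc.
```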
DOI: 10.1109/ASPAA.1991.634097
Citations: 0
Masking In Three-dimensional Auditory Displays II: Effects Of Spatial And Spectral Similarity
Theodore J Doll, Thomas E Hanna
It has been suggested that three-dimensional (3-D) auditory displays could enhance operator performance in a wide variety of applications, including sonar (Doll, Hanna, and Ruissotti, in press) and auditory warnings in aircraft cockpits (Doll et al.). Most of the anticipated applications of 3-D auditory displays involve simultaneous presentation of multiple signals from different directions. A potential problem is that signals that are sounded simultaneously or closely in time may mask one another. The extent to which simultaneous sounds mask one another should depend both upon their spectral similarity and upon how closely their sources are positioned in space. It is well established that masking is greatly reduced when the masker and signal do not occupy the same critical band (e.g., Durlach & Colburn, 1978). Studies of free-field masking show that the effectiveness of a masker decreases as it is separated in space from the signal. However, the extent to which spectral and spatial similarity trade off in determining the detectability of signals in 3-D auditory displays is unknown. This information is needed to design effective 3-D displays. The purpose of this research was to determine how the spectral and spatial similarity of signals and maskers interact to determine the detectability of signals in 3-D auditory displays. A tonal signal and a "notched" noise masker were presented from loudspeakers with various spatial separations (0, 20, and 40 degrees) in a free field (i.e., a "real" 3-D auditory display). The loudspeakers were arranged in a horizontal circular arc 10 ft from the listener at ear level. The spectral similarity of the masker and signal was manipulated by varying the low-pass cutoff of one noise band and the high-pass cutoff of another, independent noise band, equal in spectral level to the first. The noises were mixed to form notches of various widths centered on the signal frequency. Minimum signal levels required for 79.4 percent correct detection were measured using an adaptive, two-alternative forced-choice procedure. The subject was instructed not to move the head, and the chin was positioned in a chin rest.
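The 79.4-percent-correct point quoted above is the convergence level of a three-down/one-up adaptive track (Levitt, 1971). The sketch below shows that kind of two-alternative forced-choice staircase with a simulated observer in place of a listener; the psychometric function, step size, and stopping rule are illustrative assumptions, not the parameters of the actual experiment.

```python
import math
import random

def simulated_observer(level_db, threshold_db, slope_db=1.0):
    # 2AFC observer: probability correct rises from 0.5 (chance) toward 1.0 around threshold
    p = 0.5 + 0.5 / (1.0 + math.exp(-(level_db - threshold_db) / slope_db))
    return random.random() < p

def track_threshold(true_threshold_db, start_db=60.0, step_db=2.0, n_reversals=12):
    level, run, direction, reversals = start_db, 0, None, []
    while len(reversals) < n_reversals:
        if simulated_observer(level, true_threshold_db):
            run += 1
            if run == 3:                       # three correct in a row: make the signal softer
                run = 0
                if direction == 'up':
                    reversals.append(level)
                direction, level = 'down', level - step_db
        else:
            run = 0                            # any error: make the signal louder
            if direction == 'down':
                reversals.append(level)
            direction, level = 'up', level + step_db
    return sum(reversals[-8:]) / 8.0           # average the last reversals as the threshold

print(track_threshold(true_threshold_db=45.0))  # converges near the ~79.4%-correct level
```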
DOI: 10.1109/ASPAA.1991.634102
Citations: 0
Speech Enhancement For Hearing Aids Using A Microphone Array
A. Ganeshkumar, J. Hammond, C. G. Rice
Our approach is based on enhancing the Short Time Spectral Amplitude (STSA) of degraded speech using the spectral subtraction algorithm. The use of spectral subtraction to enhance speech has been studied quite extensively in the past [1,2]. These studies have generally shown an increase in speech quality, but the gain in intelligibility has been insignificant. The lack of improvement in intelligibility can be attributed to two main factors. The first is that, since all previous work on the application of the spectral subtraction algorithm has been confined to single-input systems, the noise short-time spectrum can only be estimated during non-speech activity periods. This approach not only requires accurate speech/non-speech activity detection, which is a difficult task particularly at low signal-to-noise ratios, but also requires the noise to be sufficiently stationary for the estimate to be used during the subsequent speech period. The second factor for the lack of improvement in intelligibility is the annoying 'musical' type of residual noise introduced by spectral subtraction processing. This residual noise may distract the listener from the speech.
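For reference, a minimal single-channel spectral subtraction pass looks like the sketch below: the noise magnitude spectrum is estimated from an assumed noise-only lead-in and subtracted from each short-time frame before overlap-add resynthesis. The frame sizes, floor, and lead-in duration are illustrative; this is the baseline the abstract criticizes, not the authors' microphone-array method.

```python
import numpy as np

def spectral_subtraction(x, fs, noise_seconds=0.25, frame=512, hop=256, floor=0.02):
    """Enhance x by subtracting a noise magnitude spectrum estimated from the first
    noise_seconds of the signal (assumed to contain no speech)."""
    win = np.hanning(frame)
    out = np.zeros(len(x) + frame)

    # Average magnitude spectrum of the assumed noise-only lead-in.
    n_lead = max(int(noise_seconds * fs), frame)
    noise_frames = [np.abs(np.fft.rfft(x[s:s + frame] * win))
                    for s in range(0, n_lead - frame + 1, hop)]
    noise_mag = np.mean(noise_frames, axis=0)

    # Frame-by-frame subtraction with a spectral floor, then overlap-add resynthesis.
    for s in range(0, len(x) - frame, hop):
        spec = np.fft.rfft(x[s:s + frame] * win)
        mag = np.maximum(np.abs(spec) - noise_mag, floor * noise_mag)  # floor limits 'musical' noise
        out[s:s + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame) * win
    return out[:len(x)]
```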
DOI: 10.1109/ASPAA.1991.634122
Citations: 0
The "ARMAdillo" Coefficient Encoding Scheme for Digital Audio Filters
D. Rossum
In the design of VLSI circuits to implement digital filters for electronic music purposes, we have found it useful to encode the filter coefficients. Such encoding offers three advantages. First, the encoding can be made to correspond more properly to the "natural" perceptual units of audio. While these are most accurately the "bark" for frequency and the "sone" for loudness, a good working approximation is decibels and musical octaves respectively. Secondly, our encoding scheme allows for partial decoupling of the pole radius and angle, providing superior interpolation characteristics when the coefficients are dynamically swept. Thirdly, and perhaps most importantly, appropriate encoding of the coefficients can save substantial amounts of on-chip memory. While audio filter coefficients typically require twenty or more bits, we have found adequate coverage at as few as eight bits, allowing for a much more cost effective custom hardware implementation when many coefficients are required. We have named the resulting patented encoding scheme "ARMAdillo." Our implementation of digital audio filters is based on the canonical second order section whose transfer function should be familiar to all: H(z) = a0 (1 + (a1/a0) z^-1 + (a2/a0) z^-2) / (1 + b1 z^-1 + b2 z^-2)  [1]. While dealing with poles and feedback (bn) coefficients, the comments herein apply as well to zeroes and feedforward coefficients (an/a0) when the gain (a0) is separated as shown above. Noting that the height of a resonant peak in the magnitude response produced by a pole is approximately inversely proportional to the distance from the pole to the unit circle, we can relate the height p of this resonant peak in dB to the pole radius R through the factor 1/(1 - R).
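A rough illustration of this style of encoding is sketched below: the pole radius is stored as the corresponding resonant-peak height in decibels (using the 1/(1 - R) relation above) and the pole angle on an octave scale, then mapped back to the feedback coefficients of the second-order section. The bit widths and ranges are invented for the example and are not the patented ARMAdillo format.

```python
import numpy as np

FS = 48000.0                      # sample rate (assumed)
F_LO, F_HI = 20.0, 20000.0        # assumed frequency range, roughly ten octaves
PEAK_LO, PEAK_HI = 0.0, 60.0      # assumed resonant-peak range in dB

def encode(R, theta, bits_r=4, bits_f=4):
    # radius -> peak height in dB (peak ~ 1/(1-R)); angle -> octaves above F_LO
    peak_db = 20.0 * np.log10(1.0 / (1.0 - R))
    octaves = np.log2((theta * FS / (2.0 * np.pi)) / F_LO)
    qr = round((peak_db - PEAK_LO) / (PEAK_HI - PEAK_LO) * (2**bits_r - 1))
    qf = round(octaves / np.log2(F_HI / F_LO) * (2**bits_f - 1))
    return int(np.clip(qr, 0, 2**bits_r - 1)), int(np.clip(qf, 0, 2**bits_f - 1))

def decode(qr, qf, bits_r=4, bits_f=4):
    peak_db = PEAK_LO + qr / (2**bits_r - 1) * (PEAK_HI - PEAK_LO)
    f_hz = F_LO * 2.0 ** (qf / (2**bits_f - 1) * np.log2(F_HI / F_LO))
    R = 1.0 - 10.0 ** (-peak_db / 20.0)
    theta = 2.0 * np.pi * f_hz / FS
    return R, theta

# The decoded pole gives the feedback coefficients of the second-order section:
#   b1 = -2 R cos(theta),  b2 = R**2
R, theta = decode(*encode(0.995, 2.0 * np.pi * 1000.0 / FS))
print(R, -2.0 * R * np.cos(theta), R * R)
```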
DOI: 10.1109/ASPAA.1991.634131
Citations: 4
Real Time Synthesis of Complex Acoustic Environments
S. Foster, E. Wenzel, R. M. Taylor
This paper describes some recent efforts to "render" the complex acoustic field experienced by a listener within an environment. It represents an extension of earlier attempts to synthesize externalized, three-dimensional sound cues over headphones using a very high-speed signal processor, the Convolvotron (Wenzel, et al., 1988). The synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the ear canals of individual subjects for a large number of equidistant locations in an anechoic chamber (Wightman & Kistler, 1989). The advantage of this technique is that it preserves the complex pattern of interaural differences over the entire spectrum of the stimulus, thus capturing the effects of filtering by the pinnae, head, shoulders, and torso.
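A stripped-down software analogue of this rendering step is sketched below: each source is convolved with the left- and right-ear head-related impulse responses (HRIRs) measured nearest to its direction and the results are summed for headphone playback. The random "HRIRs" and 30-degree grid are placeholders; a real system such as the Convolvotron uses measured responses and performs the convolutions in dedicated hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, hrir_len = 44100, 128

# Placeholder HRIR "database": one (left, right) impulse-response pair per measured azimuth.
azimuths = np.arange(0, 360, 30)
hrir_db = {az: (rng.standard_normal(hrir_len) * 0.05,
                rng.standard_normal(hrir_len) * 0.05) for az in azimuths}

def render_source(mono, azimuth_deg):
    # choose the nearest measured direction, then filter the source for each ear
    nearest = min(azimuths, key=lambda az: min(abs(az - azimuth_deg), 360 - abs(az - azimuth_deg)))
    h_l, h_r = hrir_db[nearest]
    return np.convolve(mono, h_l), np.convolve(mono, h_r)

def mix_scene(sources):
    # sources: list of (mono_signal, azimuth_deg); sum the binaural renderings of all sources
    length = max(len(s) for s, _ in sources) + hrir_len - 1
    left, right = np.zeros(length), np.zeros(length)
    for mono, az in sources:
        l, r = render_source(mono, az)
        left[:len(l)] += l
        right[:len(r)] += r
    return left, right

left, right = mix_scene([(rng.standard_normal(fs), 45), (rng.standard_normal(fs // 2), 300)])
```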
DOI: 10.1109/ASPAA.1991.634098
Citations: 48
State Variable Models For Sound Synthesis
P. Depalle, D. Matignon, X. Rodet
In this paper, we present an approach to sound synthesis which aims to unify the two current approaches, one that we call the signal approach and the other that we call the physical approach. These two approaches have their own advantages and drawbacks. 1. The signal approach inherits the whole set of signal processing techniques. It is based on the use of fairly general production models, the internal structure of which is not precisely defined. The input variables to the model are called parameters. The process of synthesizing a sound consists of finding the time-varying values of the parameters. In general, there exist analysis techniques to determine parameter values from natural sounds (e.g. FFT for additive synthesis, LPC for source-filter models). One of the drawbacks to this approach is the difficulty in determining the parameter values of a signal whose characteristics vary rapidly. It is also difficult to control the model for certain sound effects since there is no internal description. 2. The physical approach consists of an explicit simulation of the physical system which produces the sound. In this case the internal description is precisely defined. Synthesis is accomplished by finding the numeric solution to the model equation. The control parameters directly correspond to the physical parameters of the system. The sounds produced by such models are of great quality. The drawback to this synthesis method is that the model equations are determined from a detailed physical analysis of the instrument and that the parameters have to be obtained from physical measurements which are often long and complex to realise. To take advantage of the positive aspects of the preceding approaches, we explore a third approach. On the one hand it takes advantage of a precise description of the internal structure of a physical system. On the other hand, it determines certain parameter values by analyzing sounds produced by the system. Our new approach is based on the state variable description of physical systems. This formalism is largely used in process control theory. Kalman filtering is one of the techniques that we use in order to obtain the parameter values that control the model. We have applied this formalism to build a model of connected acoustic tubes. We have developed an algorithm for recursive construction of a state variable model given the structure of the system. Such a model can be excited by non-linear systems to …
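As a concrete reminder of what a state-variable description is, the sketch below simulates the generic discrete-time form x[n+1] = A x[n] + B u[n], y[n] = C x[n] + D u[n] for a single damped resonator driven by an impulse. It only illustrates the formalism; it is not the authors' connected-acoustic-tube model or their Kalman-filter parameter estimator, and the resonance values are assumptions.

```python
import numpy as np

def simulate(A, B, C, D, u):
    # state-variable recursion: x[n+1] = A x[n] + B u[n],  y[n] = C x[n] + D u[n]
    x = np.zeros(A.shape[0])
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = C @ x + D * un
        x = A @ x + B * un
    return y

fs = 16000.0
f0, r = 440.0, 0.999                          # resonance frequency and pole radius (assumed)
w = 2.0 * np.pi * f0 / fs
A = np.array([[2.0 * r * np.cos(w), -r * r],  # companion form of a two-pole resonator
              [1.0,                  0.0]])
B = np.array([1.0, 0.0])
C = np.array([1.0, 0.0])
D = 0.0

u = np.zeros(int(fs)); u[0] = 1.0             # one-second impulse excitation
y = simulate(A, B, C, D, u)                   # decaying sinusoid near 440 Hz
```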
DOI: 10.1109/ASPAA.1991.634151
Citations: 0
An Acoustic-Phonetic Diagnostic Tool for the Evaluation of Auditory Models
O. Ghitza
A long standing question that arises when studying a particular auditory model is how to evaluate its performance. More precisely, it is of interest to evaluate to what extent the model representation can describe the actual human internal representation. In this study we address this question in the context of speech perception. That is, given a speech representation based on the auditory system, to what extent can it preserve phonetic information that is perceptually relevant? To answer this question, a diagnostic system has been developed that simulates the psychophysical procedure used in the standard Diagnostic-Rhyme Test (DRT, Voiers, 1983). In the psychophysical procedure the subject has all the cognitive information needed for the discrimination task a priori. Hence, errors in discrimination are due mainly to inaccuracies in the auditory representation of the stimulus. In the simulation, the human observer is replaced by an array of recognizers, one for each pair of words in the DRT database. An effort has been made to keep the errors due to the "observer" to a minimum, so that the overall detected errors are due mainly to inaccuracies in the auditory model representation. This effort includes a careful design of the recognizer (i.e., using an HMM with time-varying states, Ghitza and Sondhi, 1990) and the use of a speaker-dependent DRT simulation. To demonstrate the power of the suggested evaluation method, we considered the behavior of two speech analysis methods, the Fourier power spectrum and a representation based on the auditory system (the EIH model, Ghitza, 1988), in quiet and in a noisy environment. The results were compared with psychophysical results for the same database. The results show that the overall number of errors made by the machine (the Fourier power spectrum or the EIH) is far greater than the overall number of errors made by a human, at all noise levels that were tested. Further, the proposed evaluation method offers a detailed picture of the error distribution among the selected phonetic features. It shows that the errors made by the human listener are distributed in a different way compared to the errors made by the machines, and that the distributions of errors made by the two analyzers are also quite different from each other.
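The evaluation loop the abstract describes can be caricatured as follows: for each word pair, a dedicated two-way recognizer decides which word a test token is, using only the representation under test, and the resulting error pattern is compared with that of human listeners. The feature extraction and "tokens" below are random placeholders standing in for the HMM recognizer and the EIH or Fourier analyses of the actual study.

```python
import numpy as np

rng = np.random.default_rng(1)

def represent(token):
    # stand-in for an auditory-model (or Fourier power spectrum) analysis of one utterance
    return token.mean(axis=0)

def pair_error_rate(train_a, train_b, test_tokens, test_labels):
    # one recognizer per word pair: nearest-template decision between word A and word B
    tmpl_a = np.mean([represent(t) for t in train_a], axis=0)
    tmpl_b = np.mean([represent(t) for t in train_b], axis=0)
    errors = 0
    for tok, lab in zip(test_tokens, test_labels):
        v = represent(tok)
        guess = 'a' if np.linalg.norm(v - tmpl_a) < np.linalg.norm(v - tmpl_b) else 'b'
        errors += (guess != lab)
    return errors / len(test_tokens)

# Fake "word tokens": 20 analysis frames x 16 coefficients, two slightly different words.
make = lambda bias, n: [rng.standard_normal((20, 16)) + bias for _ in range(n)]
train_a, train_b = make(+0.3, 10), make(-0.3, 10)
test_tokens = make(+0.3, 5) + make(-0.3, 5)
test_labels = ['a'] * 5 + ['b'] * 5
print(pair_error_rate(train_a, train_b, test_tokens, test_labels))
```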
DOI: 10.1109/ASPAA.1991.634096
Citations: 0
Control Of Interharmonic In Polyphonic Music
R. Maher
Amplitude beating between closely spaced frequency components is a well-known effect in musical acoustics, psychoacoustics, and other fields [1-3]. Depending on the musical context and the personal preference of the listener, the presence of amplitude beating can either be an undesirable artifact of the limited frequency resolution of the human hearing apparatus, or a pleasant quality that adds timbral variety to an ensemble performance. In either case it would be useful to be able to control the extent of interharmonic amplitude beating in some convenient manner. The increased use of digital computing systems in music synthesis and post-production opens up many new avenues for innovative digital signal processing. This paper extends the repertoire of digital audio signal processing methods to include direct control over amplitude beating in complex audio signals due to interaction among spectral components of simultaneous musical voices. Applications of this technique include 1) discriminability improvement for weak or easily masked musical voices in complex sonic textures, and 2) alteration of the consonance/dissonance relationship of musical intervals and chords to retain the advantages of equal tempered tuning (for example, modulation between keys) while reducing the effects of out-of-tune partials. Overview: As previously reported [4], one means to reduce amplitude beating during additive mixing operations is to perform a time-variant spectral analysis on the signals to be mixed, identify the presence of closely spaced frequency components, and selectively attenuate those components which will give rise to amplitude beats. A convenient formulation for this procedure was found to be the sinewave model of McAulay and Quatieri [5]. This approach can be described as exclusion filtering, where one of the signals to be mixed is used to design a time-varying comb-like filter to exclude competing spectral energy from the other signals. The amplitude beating among closely spaced partials can also be increased to improve the detectability of a relatively weak musical voice in the presence of a complex background ensemble. The increased beating is accomplished by using time-variant sinusoidal analysis to identify spectral collisions among the competing musical voices and then to increase the amplitude of the beating components. This technique is particularly useful when the weak voice has spectral energy in a confined range which overlaps the background material, e.g., a solo clarinet with string ensemble accompaniment. While simply boosting the level of the weak voice can improve its detectability, the combination of increased level and enhancement of interharmonic beating can increase the perceived separation between the weak signal and its competition. In other words, the presence of the weak voice is cued by its effect upon the other voices in the ensemble.
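The exclusion-filtering idea can be sketched with a plain short-time FFT, as below: strong bins in the voice to be protected define small frequency neighborhoods in which the competing material is attenuated, so that beating partials are suppressed. The thresholds and neighborhood width are illustrative assumptions, and the paper itself works from the McAulay-Quatieri sinusoidal model rather than raw FFT bins.

```python
import numpy as np

def exclusion_filter(lead, background, fs, frame=2048, hop=512,
                     neighborhood_hz=30.0, attenuation=0.2, peak_floor_db=-40.0):
    """Attenuate background energy near the strong partials of 'lead' so the two signals
    can be mixed with reduced interharmonic beating."""
    win = np.hanning(frame)
    out = np.zeros(len(background) + frame)
    guard = int(round(neighborhood_hz / (fs / frame)))           # neighborhood width in bins
    for s in range(0, min(len(lead), len(background)) - frame, hop):
        L = np.fft.rfft(lead[s:s + frame] * win)
        B = np.fft.rfft(background[s:s + frame] * win)
        mag = np.abs(L)
        strong = mag > mag.max() * 10 ** (peak_floor_db / 20.0)  # bins holding lead partials
        gain = np.ones_like(mag)
        for k in np.flatnonzero(strong):
            gain[max(0, k - guard):k + guard + 1] = attenuation  # duck competing energy there
        out[s:s + frame] += np.fft.irfft(B * gain, frame) * win
    return out[:len(background)]
```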
DOI: 10.1109/ASPAA.1991.634148
Citations: 0
Feature extracting hearing aids for the profoundly deaf using a neural network implemented on a TMS320C51 digital signal processor
J. Walliker, J. Daley, A. Faulkner, I. Howard
Many people with profound hearing impairment, while able to detect amplified sound, are often unable to make sense of what they hear. Conventional hearing aids which amplify, filter and compress the speech signal are of little use to them. It has been demonstrated that some profoundly deaf listeners are able to make better use of speech features such as voice fundamental frequency (Fx) and frication when they are presented in a simplified form matched to their residual hearing than when conventionally presented.
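For orientation, the sketch below extracts two of the feature types mentioned: voice fundamental frequency by frame-wise autocorrelation, plus a crude high-frequency-energy cue for frication. It only illustrates the kind of features such an aid presents in simplified form; the device described above derives them with a neural network running on the TMS320C51, and all thresholds here are assumptions.

```python
import numpy as np

def extract_features(x, fs, frame=400, hop=160, f_lo=60.0, f_hi=400.0):
    """Return per-frame (Fx_hz, frication_ratio); Fx is 0.0 for frames judged unvoiced."""
    lag_min, lag_max = int(fs / f_hi), int(fs / f_lo)
    results = []
    for s in range(0, len(x) - frame, hop):
        seg = x[s:s + frame] - np.mean(x[s:s + frame])
        ac = np.correlate(seg, seg, mode='full')[frame - 1:]     # autocorrelation, lags >= 0
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        voiced = ac[0] > 0 and ac[lag] > 0.3 * ac[0]             # simple periodicity test
        fx = fs / lag if voiced else 0.0
        spec = np.abs(np.fft.rfft(seg)) ** 2
        frication = spec[int(4000 * frame / fs):].sum() / (spec.sum() + 1e-12)
        results.append((fx, frication))
    return results
```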
DOI: 10.1109/ASPAA.1991.634126
Citations: 0
Fundamental issues in auditory modeling
B. Delgutte, P. Cariani
In principle, speech and audio coding systems can be evaluated by comparing the responses of a model of auditory processing to the original and the coded signals, provided that the model responses include all perceptually relevant features of the signal while decreasing signal redundancy. In practice, present knowledge of auditory physiology is incomplete, so that it is difficult to decide which aspects of auditory processing the model should attempt to simulate. This talk will address three fundamental issues in auditory modeling that are important for the design of improved coding systems.
DOI: 10.1109/ASPAA.1991.634088
Citations: 0