Dual backplate microphones have gained attention for their ability to improve sensitivity and dynamic range and to suppress even-order harmonic distortion compared to traditional single backplate designs. This study investigates the nonlinear behavior of these microphones, focusing on the nonlinear capacitance changes that occur as the diaphragm moves between the two backplates. A theoretical model is developed to describe how asymmetries in the air gaps and parasitic capacitances contribute to harmonic distortion. The results show that the second harmonic decreases significantly as the air gaps and the parasitic capacitances become more symmetrical, confirming that the dual backplate structure can effectively cancel even-order harmonics. The model is then validated through acoustic measurements on a dual backplate micro-electro-mechanical systems microphone, from which the key model parameters are estimated. In addition, a signal-domain correction algorithm, originally designed for single backplate microphones, is adapted and shown to reduce distortion further when applied to dual backplate designs. These findings provide both a clearer understanding of nonlinear distortion mechanisms in dual backplate microphones and a practical means to improve their performance in demanding acoustic applications.
{"title":"Nonlinear capacitance and distortion reduction in dual backplate condenser microphones.","authors":"Petr Honzík, Antonin Novak","doi":"10.1121/10.0042250","DOIUrl":"https://doi.org/10.1121/10.0042250","url":null,"abstract":"<p><p>Dual backplate microphones have gained attention for their ability to improve sensitivity and dynamic range and suppress even-order harmonic distortion compared to traditional single backplate designs. This study investigates the nonlinear behavior of these microphones, focusing on the nonlinear capacitance changes that occur as the diaphragm moves between the two backplates. A theoretical model is developed to describe how asymmetries in the air gaps and parasitic capacities contribute to harmonic distortion. The results show that the second harmonic decreases significantly as the air gaps and the parasitic capacities become more symmetrical, confirming that the dual backplate structure can effectively cancel even-order harmonics. The model is then validated through acoustic measurements on a dual backplate micro-electro-mechanical systems microphone, from which the key model parameters are estimated. In addition, a signal-domain correction algorithm-originally designed for single backplate microphones-is adapted and shown to reduce distortion further when applied to dual backplate designs. These findings provide both a clearer understanding of nonlinear distortion mechanisms in dual backplate microphones and a practical means to improve their performance in high-demand acoustic applications.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"159 1","pages":"496-504"},"PeriodicalIF":2.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146011034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grace Gervino, Janina Boecher, Ho Ming Chow, Emily Garnett, Soo-Eun Chang, Evan Usler
The purpose of the current study was to examine speech rhythm in typically developing children throughout the preschool and school-age years. A better understanding of speech rhythm during childhood and potential differences between the sexes provides insight into the development of speech-language abilities. Fifty-eight participants (29 males/29 females) aged three to nine years were included in the study. Audio recordings of participants' speech production were collected during a narrative task. Envelope-based measures, which conceptualize speech rhythm as periodicity in the acoustic envelope, were computed. Separate general linear models were fitted for each of the rhythm measures. Envelope-based measures (e.g., center of envelope power, supra-syllabic band power ratio) indicated that as children aged, their speech contained more high-frequency content and became dominated by syllabic-level rhythms. Findings suggest that both sexes exhibited a similar refinement of speech rhythm, as evidenced by increases in envelope-based measures, with speech production developing a more syllabic rhythmic structure during the preschool and school-age years.
{"title":"Age-related increases in speech rhythm in typically developing children.","authors":"Grace Gervino, Janina Boecher, Ho Ming Chow, Emily Garnett, Soo-Eun Chang, Evan Usler","doi":"10.1121/10.0042238","DOIUrl":"https://doi.org/10.1121/10.0042238","url":null,"abstract":"<p><p>The purpose of the current study was to examine speech rhythm in typically developing children throughout the preschool and school-aged years. A better understanding of speech rhythm during childhood and potential differences between the sexes provides insight into the development of speech-language abilities. Fifty-eight participants (29 males/29 females) aged three to nine years were included in the study. Audio recordings of participants' speech production were collected during a narrative task. Envelope-based measures, which conceptualize speech rhythm as periodicity in the acoustic envelope, were computed. Separate general linear models were performed for each of the rhythm measures. Envelope-based measures (e.g., center of envelope power, supra-syllabic band power ratio) indicated that as children aged, their speech contained more high-frequency content and became dominated by syllabic-level rhythms. Findings suggest that both sexes exhibited a similar refinement of speech rhythm as evidenced by increases in envelope-based measures, with speech production developing a more syllabic rhythmic structure during the preschool and school-age years.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"159 1","pages":"373-383"},"PeriodicalIF":2.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vibrato in saxophone playing is produced by modulating the jaw force on the reed, creating complex reed-player interactions. This work presents a physics-based sound synthesis of saxophone vibrato, modeling the instrument's acoustics and the acousto-mechanical reed-lip interaction under lip force modulation. The saxophone's acoustic impedance is measured for use in synthesis. The mouthpiece influence is represented by an acoustic model, coupled to the saxophone through numerical simulations performed with the finite element method using open-source tools. The measured impedance is applied as a boundary condition, and viscothermal losses are included. Reed oscillations under acoustic pressure are analyzed with computer vision and high-speed imaging to estimate stiffness, resonance frequency, damping, and rest opening at various lip forces. A time-domain acousto-mechanical simulation solves the nonlinear system, and the results are compared to recorded vibrato performances. The study identifies parameters driving vibrato production, highlighting the key quantity linking lip force variations to the phenomenon.
{"title":"Saxophone acoustical modeling and vibrato \"a la machoire\" sound synthesis.","authors":"Diego Tonetti, Edoardo A Piana","doi":"10.1121/10.0041870","DOIUrl":"https://doi.org/10.1121/10.0041870","url":null,"abstract":"<p><p>Vibrato in saxophone playing is produced by modulating the jaw force on the reed, creating complex reed-player interactions. This work presents a physics-based sound synthesis of saxophone vibrato, modeling the instrument's acoustics and the acousto-mechanical reed-lip interaction under lip force modulation. The saxophone's acoustic impedance is measured for use in synthesis. The mouthpiece influence is represented by an acoustic model, coupled to the saxophone through numerical simulations performed with the finite element method using open-source tools. The measured impedance is applied as a boundary condition, and viscothermal losses are included. Reed oscillations under acoustic pressure are analyzed with computer vision and high-speed imaging to estimate stiffness, resonance frequency, damping, and rest opening at various lip forces. A time-domain acoustical-mechanical simulation solves a non-linear system, with results compared to recorded vibrato performances. The study identifies parameters driving vibrato production, highlighting the key quantity linking lip force variations to the phenomenon.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"159 1","pages":"141-156"},"PeriodicalIF":2.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145905791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of speech sounds is influenced by spectral contrast effects (SCEs), the perceptual magnification of spectral differences between successive sounds. SCEs result in the categorization of a target sound being biased away from spectral properties in the preceding acoustic context. Given remarkable consistency in the magnitudes of these contrast effects within the same frequency region [Stilp (2019) J. Acoust. Soc. Am. 146(2), 1503-1517], it was hypothesized that they would also show stable relationships across different frequency regions. In this study, normal-hearing listeners' phoneme categorization and contrast effects were assessed where phonetic contrasts were driven by changes in low-frequency F1 ("big"-"beg" continuum), mid-frequency F3 ("dot"-"got"), or high-frequency frication spectrum regions ("sheet"-"seat"). On each trial, listeners heard a precursor sentence that was filtered to emphasize energy in the lower or higher range within one of these frequency regions, followed by a target word that hinged on the frequency region that was filtered. Results showed that SCEs influenced categorization in each frequency region, as expected. However, effect magnitudes were not correlated with each other across frequency regions within or across two participant samples. This clarifies perception-in-context on a broader scale as the influence of spectral contrast is independent across different frequency regions.
{"title":"Independence of spectral contrast effects across different frequency regions in speech perceptiona).","authors":"Christian E Stilp, Matthew B Winn","doi":"10.1121/10.0042237","DOIUrl":"https://doi.org/10.1121/10.0042237","url":null,"abstract":"<p><p>Identification of speech sounds is influenced by spectral contrast effects (SCEs), the perceptual magnification of spectral differences between successive sounds. SCEs result in the categorization of a target sound being biased away from spectral properties in the preceding acoustic context. Given remarkable consistency in the magnitudes of these contrast effects within the same frequency region [Stilp (2019) J. Acoust. Soc. Am. 146(2), 1503-1517], it was hypothesized that they would also show stable relationships across different frequency regions. In this study, normal-hearing listeners' phoneme categorization and contrast effects were assessed where phonetic contrasts were driven by changes in low-frequency F1 (\"big\"-\"beg\" continuum), mid-frequency F3 (\"dot\"-\"got\"), or high-frequency frication spectrum regions (\"sheet\"-\"seat\"). On each trial, listeners heard a precursor sentence that was filtered to emphasize energy in the lower or higher range within one of these frequency regions, followed by a target word that hinged on the frequency region that was filtered. Results showed that SCEs influenced categorization in each frequency region, as expected. However, effect magnitudes were not correlated with each other across frequency regions within or across two participant samples. This clarifies perception-in-context on a broader scale as the influence of spectral contrast is independent across different frequency regions.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"159 1","pages":"581-591"},"PeriodicalIF":2.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146018855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study presents an iterative numerical scheme for Helmholtz scattering with Neumann boundary conditions, modeling scattering from bounded convex bodies as a sum over propagation paths. The solution is expressed as the sum of the incident wavefield, specular reflection, and edge diffraction contributions. Recasting the Neumann series representation of the nth-order solution into tensor form provides a path interpretation and formally connects iterative path-tracing approaches to the known diffraction operator solution of the scattering problem. The study applies Nyström discretization to the nested diffraction integral, yielding reusable path diffraction coefficients. An iterative scheme is proposed that efficiently explores this path-tensor structure through successive expansions. The absolute value of the wavefield associated with each path serves as the ordering key in a max heap prioritization scheme. Numerical scattering experiments on the unit cube demonstrate rapid convergence. Relative L²-norm differences in the Dirichlet trace drop below 5%, 3.5%, and 3% after 10 000 iteration steps for wavenumbers k = 2, 4, and 6 m⁻¹, respectively, when compared to results from direct boundary element formulations. For the k = 2 m⁻¹ case, which shows the largest trace error after 10 000 iteration steps, relative L²-norm errors of <2.5% and relative L∞-norm errors of <4% in the domain are observed.
{"title":"Iterative path expansion for Helmholtz scattering with Neumann boundary conditions.","authors":"Matthias Wolfram Ospel","doi":"10.1121/10.0042258","DOIUrl":"https://doi.org/10.1121/10.0042258","url":null,"abstract":"<p><p>This study presents an iterative numerical scheme for Helmholtz scattering with Neumann boundary conditions, modeling scattering from bounded convex bodies as a sum over propagation paths. The solution is expressed as the sum of the incident wavefield, specular reflection, and edge diffraction contributions. Recasting the Neumann series representation of the nth-order solution into tensor form provides a path interpretation and formally connects iterative path-tracing approaches to the known diffraction operator solution of the scattering problem. The study applies Nyström discretization to the nested diffraction integral, yielding reusable path diffraction coefficients. An iterative scheme is proposed that efficiently explores this path-tensor structure through successive expansions. The absolute value of the wavefield associated with each path serves as the ordering key in a max heap prioritization scheme. Numerical scattering experiments on the unit cube demonstrate rapid convergence. Relative L2-norm differences in the Dirichlet trace drop below 5%, 3.5%, and 3% after 10 000 iteration steps for wavenumbers k=2,4,6 m-1, respectively, when compared to results from direct boundary element formulations. For the k=2 m-1 case, which shows the largest trace error after 10 000 iteration steps, relative L2-norm errors of <2.5% and relative L∞-norm errors of <4% in the domain are observed.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"159 1","pages":"600-609"},"PeriodicalIF":2.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146018879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-reciprocal systems have been shown to exhibit various interesting wave phenomena, such as the non-Hermitian skin effect, which causes accumulation of modes at boundaries. Recent research on discrete systems showed that this effect can pose a barrier for waves hitting an interface between reciprocal and non-reciprocal systems. Under certain conditions, however, waves can tunnel through this barrier, similar to the tunneling of particles in quantum mechanics. This work proposes and investigates an active acoustic metamaterial design to realize this tunneling phenomenon in the acoustical wave domain. The metamaterial consists of an acoustic waveguide with microphones and loudspeakers embedded in its wall. Starting from a purely discrete non-Hermitian lattice model of the system, a hybrid continuous-discrete acoustic model is derived, resulting in distributed feedback control laws to realize the desired behavior for acoustic waves. The proposed control laws are validated using frequency and time domain finite element method simulations, which include lumped electro-acoustic loudspeaker models. Additionally, an experimental demonstration is performed using a waveguide with embedded active unit cells and a digital implementation of the control laws. In both the simulations and experiments, the tunneling phenomenon is successfully observed.
{"title":"Realizing non-Hermitian tunneling phenomena using non-reciprocal active acoustic metamaterialsa),b).","authors":"Felix Langfeldt, Joe Tan, Sayan Jana, Lea Sirota","doi":"10.1121/10.0041858","DOIUrl":"https://doi.org/10.1121/10.0041858","url":null,"abstract":"<p><p>Non-reciprocal systems have been shown to exhibit various interesting wave phenomena, such as the non-Hermitian skin effect, which causes accumulation of modes at boundaries. Recent research on discrete systems showed that this effect can pose a barrier for waves hitting an interface between reciprocal and non-reciprocal systems. Under certain conditions, however, waves can tunnel through this barrier, similar to the tunneling of particles in quantum mechanics. This work proposes and investigates an active acoustic metamaterial design to realize this tunneling phenomenon in the acoustical wave domain. The metamaterial consists of an acoustic waveguide with microphones and loudspeakers embedded in its wall. Starting from a purely discrete non-Hermitian lattice model of the system, a hybrid continuous-discrete acoustic model is derived, resulting in distributed feedback control laws to realize the desired behavior for acoustic waves. The proposed control laws are validated using frequency and time domain finite element method simulations, which include lumped electro-acoustic loudspeaker models. Additionally, an experimental demonstration is performed using a waveguide with embedded active unit cells and a digital implementation of the control laws. In both the simulations and experiments, the tunneling phenomenon is successfully observed.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4900-4911"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145794246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed acoustic sensing (DAS) with horizontal fibers has recently begun to be utilized for offshore seismic imaging. During a field experiment in the North Sea, using a fiber crossing a gas pipeline, we observed anomalous wave arrivals on a specific range of channels and shot gathers. We analyzed the arrivals and interpret them as shear waves (S-waves) that are generated when the compressional direct waves impinge on the pipeline. The S-waves subsequently propagate through the pipeline and are recorded on the fiber section crossing the pipeline. With increased use of the fiber network for seismic acquisition, this P-S converted wave may be observed more often in future acquisitions. Our analysis shows the pipeline acting as a waveguide over several hundred meters for signals generated in the water column. These insights may be useful for DAS-based offshore pipeline monitoring. In addition to the arrivals generated during the active acquisition, we analyzed transient signals occurring at the crossing in the passive data. While their distribution over time correlates with the tides, their generation mechanism remains unclear. No periodic signals that could be attributed to the flow in the pipeline were observed in the vicinity of the crossing.
{"title":"Observations from a fiber-pipeline crossing during active and passive seismic acquisition using distributed acoustic sensing.","authors":"Kevin Growe, Martin Landrø, Espen Birger Raknes","doi":"10.1121/10.0039544","DOIUrl":"https://doi.org/10.1121/10.0039544","url":null,"abstract":"<p><p>Distributed acoustic sensing (DAS) with horizontal fibers has recently begun to be utilized for offshore seismic imaging. During a field experiment in the North Sea, using a fiber crossing a gas pipeline, we observed anomalous wave arrivals on a specific range of channels and shot gathers. We analyzed the arrivals and interpret them as shear waves (S-waves) that are generated when the compressional direct waves impinge on the pipeline. The S-waves subsequently propagate through the pipeline and are recorded on the fiber section crossing the pipeline. With an increased usage of the fiber network for seismic acquisition, this P-S converted wave may be observed more often in future acquisitions. Our analysis shows the pipeline acting as a wave guide over several hundred meters for signals generated in the water column. These insights may be useful for DAS-based offshore pipeline monitoring. In addition to the arrivals generated during the active acquisition, we analyzed transient signals occurring at the crossing in the passive data. While their distribution over time correlates with the tides, their generation mechanism remains unclear. No periodic signals that could be attributed to the flow in the pipeline were observed in the vicinity of the crossing.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4825-4837"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brent K Hoffmeister, Kate E Hazelwood, Hugh E Ferguson, Layla K Lammers, Keith T Hoffmeister, Emily E Bingham
Ultrasonic backscatter techniques are being developed to detect changes in cancellous bone caused by osteoporosis. Clinical implementation of these techniques may use a hand-held transducer pressed against the body. Variations in transducer angle with respect to the bone surface may cause errors in the backscatter measurements. The goal of this study was to evaluate the sensitivity of backscatter parameters to these errors. Six parameters previously identified as potentially useful for ultrasonic bone assessment were investigated: apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), frequency intercept of apparent backscatter, normalized mean of the backscatter difference, normalized backscatter amplitude ratio, and the backscatter amplitude decay constant. Measurements were performed on specimens prepared from an open-cell rigid polymer foam coated with a thin layer of epoxy to simulate cancellous bone with an outer cortex. Data were collected using a 3.5 MHz transducer for angles of incidence ranging from 0° to 30° relative to the perpendicular to the specimen surface. AIB and FSAB demonstrated the greatest sensitivity to angle-dependent errors. The source of error was identified as reflection and attenuation losses caused by the cortex. A theoretical model was developed and experimentally validated to predict these losses.
{"title":"Effect of angle of incidence on backscatter methods of ultrasonic bone assessment.","authors":"Brent K Hoffmeister, Kate E Hazelwood, Hugh E Ferguson, Layla K Lammers, Keith T Hoffmeister, Emily E Bingham","doi":"10.1121/10.0041862","DOIUrl":"https://doi.org/10.1121/10.0041862","url":null,"abstract":"<p><p>Ultrasonic backscatter techniques are being developed to detect changes in cancellous bone caused by osteoporosis. Clinical implementation of these techniques may use a hand-held transducer pressed against the body. Variations in transducer angle with respect to the bone surface may cause errors in the backscatter measurements. The goal of this study was to evaluate the sensitivity of backscatter parameters to these errors. Six parameters previously identified as potentially useful for ultrasonic bone assessment were investigated: apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), frequency intercept of apparent backscatter, normalized mean of the backscatter difference, normalized backscatter amplitude ratio, and the backscatter amplitude decay constant. Measurements were performed on specimens prepared from a polymer open cell rigid foam coated with a thin layer of epoxy to simulate cancellous bone with an outer cortex. Data were collected using a 3.5 MHz transducer for angles of incidence ranging from 0° to 30° relative to the specimen surface perpendicular. AIB and FSAB demonstrated the greatest sensitivity to angle-dependent errors. The source of error was identified as reflection and attenuation losses caused by the cortex. A theoretical model was developed and experimentally validated to predict these losses.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4857-4869"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study proposes a method for visualizing sound fields utilizing midair nonlinear acoustic phenomena in a spatially localized manner. Conventional microphone-array-based sound field visualization methods require multi-channel synchronous signal processing that handles phase information of the observed waveforms, which inevitably hinders production of cost-effective recording devices. Additionally, the inserted microphones themselves can disturb the measured sound field, and artifacts owing to the spacing between microphones may arise. To address these issues, the study introduces a measurement method that involves scanning a focal point of converging ultrasonic beams in the target sound field. The ultrasonic focus generates secondary parametric waves via frequency modulation of the target sound field only near the focal point due to the acoustic nonlinear effect. The visualization of the target field is completed by demodulating these waves measured with a single immobilized microphone located outside the field. This technique achieves spatial selectivity of recording via steering of the ultrasonic focus, which serves as a parametric probe, allowing the target sound field information to be reconstructed from a monaural recorded signal. This approach enables sound field visualization over regions spanning hundreds of millimeters from a single-channel recording, with no recording elements densely arranged in the target sound field.
{"title":"Visualization of sound source positions using pinpoint nonlinear secondary emission by ultrasound focus scanning.","authors":"Shihori Kozuka, Keisuke Hasegawa, Takaaki Nara","doi":"10.1121/10.0041888","DOIUrl":"https://doi.org/10.1121/10.0041888","url":null,"abstract":"<p><p>This study proposes a method for visualizing sound fields utilizing midair nonlinear acoustic phenomena in a spatially localized manner. Conventional microphone-array-based sound field visualization method requires multi-channel synchronous signal processing that handles phase information of the observed waveforms, which inevitably hinders production of cost-effective recording devices. Additionally, the inserted microphones themselves can disturb the measured sound field, and artifacts owing to the spacing between microphones may arise. To address these issues, the study introduces a measurement method that involves scanning a focal point of converging ultrasonic beams in the target sound field. The ultrasonic focus generates secondary parametric waves via frequency modulation of the target sound field only near the focal point due to the acoustic nonlinear effect. The visualization of the target field is completed by demodulating these waves measured with a single immobilized microphone located outside the field. This technique achieves spatial selectivity of recording via steering of the ultrasonic focus serving as a parametric probe, allowing the target sound field information to be reconstructed from a monaural recorded signal. This approach of sound field visualization ranging over hundreds of millimeters is based on a single-channel recording, where no recording elements densely arranged in the target sound field are required.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4816-4824"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145768599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Passive acoustic monitoring with autonomous recording devices is critical for long-term studies of odontocetes. However, technical constraints, such as storage capacity and data processing limitations, often require temporal subsampling. This study investigates how varying duty cycles (50%-10%) and listening periods (1 min to 6 h) affect the detection of delphinid whistles and clicks, and harbor porpoise clicks. Two types of instruments were used: broadband recorders for whistles and F-PODs for clicks. As each device offers different configuration options, subsampling schemes were tailored to each signal type. The impact of duty cycles on seasonal patterns was evaluated using daily detection positive minutes and hours, and diel patterns were assessed using ratios of hourly positive minutes to daily detection positive minutes. Results indicate that higher duty cycles (50%) better preserve temporal pattern representations, particularly in high-activity sites, across both instruments and signal types. Lower duty cycles reduce the quality of data representation, especially in low-activity areas. Short listening periods (5-30 min) most closely approximate metrics from continuous recordings. These findings highlight the importance of adapting subsampling strategies to instrument capabilities and the overall level of acoustic activity, which varies across taxa and sites, to obtain an accurate representation of odontocete acoustic presence.
{"title":"Effects of duty cycle on passive acoustic monitoring metrics: The case of odontocete vocalizations.","authors":"Mathilde Michel, Julie Béesau, Maëlle Torterotot, Nicole Todd, Flore Samaran","doi":"10.1121/10.0039925","DOIUrl":"https://doi.org/10.1121/10.0039925","url":null,"abstract":"<p><p>Passive acoustic monitoring is critical for long-term odontocete monitoring using autonomous recording devices. However, technical constraints, such as storage capacity and data processing limitations, often require temporal subsampling. This study investigates how varying duty cycles (50%-10%) and listening periods (1 min to 6 h) affect the detection of delphinid whistles and clicks, and harbor porpoise clicks. Two types of instruments were used: broadband recorders for whistles and F-PODs for clicks. As each device offers different configuration options, subsampling schemes were tailored to each signal type. The impact of duty cycles on seasonal patterns was evaluated using daily detection positive minutes and hours and diel patterns were assessed using hourly positive minutes and daily detection positive minutes ratios. Results indicate that higher duty cycles (50%) better preserve temporal pattern representations, particularly in high-activity sites, across both instruments and signal types. Lower duty cycles reduce the quality of data representation, especially in low-activity areas. Short listening periods (5-30 min) most closely approximate metrics from continuous recordings. These findings highlight the importance of adapting subsampling strategies to instrument capabilities and the overall level of acoustic activity, which varies across taxa and sites, to obtain an accurate representation of odontocete acoustic presence.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"5033-5046"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}