Age-related increases in speech rhythm in typically developing children
Grace Gervino, Janina Boecher, Ho Ming Chow, Emily Garnett, Soo-Eun Chang, Evan Usler
The purpose of the current study was to examine speech rhythm in typically developing children throughout the preschool and school-age years. A better understanding of speech rhythm during childhood, and of potential differences between the sexes, provides insight into the development of speech-language abilities. Fifty-eight participants (29 males/29 females) aged three to nine years were included in the study. Audio recordings of participants' speech production were collected during a narrative task. Envelope-based measures, which conceptualize speech rhythm as periodicity in the acoustic envelope, were computed. Separate general linear models were fitted for each of the rhythm measures. Envelope-based measures (e.g., center of envelope power, supra-syllabic band power ratio) indicated that as children aged, their speech contained more high-frequency content and became dominated by syllabic-level rhythms. Findings suggest that both sexes exhibited a similar refinement of speech rhythm, as evidenced by increases in the envelope-based measures, with speech production developing a more syllabic rhythmic structure during the preschool and school-age years.
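A minimal sketch of how the two envelope-based measures named above might be computed from a recording; the band edges (here 0.5-2 Hz for supra-syllabic and 2-8 Hz for syllabic rhythm) and the spectral-centroid reading of "center of envelope power" are illustrative assumptions, not the authors' exact specification:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, welch

def envelope_rhythm_measures(x, fs, env_fs=100,
                             supra_band=(0.5, 2.0),     # assumed stress/phrase-level band (Hz)
                             syllabic_band=(2.0, 8.0)):  # assumed syllable-level band (Hz)
    """Center of envelope power and supra-syllabic band power ratio (sketch)."""
    env = np.abs(hilbert(x))                       # amplitude envelope
    b, a = butter(4, (env_fs / 2.5) / (fs / 2))    # low-pass before downsampling
    env = filtfilt(b, a, env)[::int(fs // env_fs)]
    f, pxx = welch(env - env.mean(), fs=env_fs, nperseg=min(len(env), 1024))
    center = np.sum(f * pxx) / np.sum(pxx)         # spectral centroid of envelope power

    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(pxx[m], f[m])

    # Ratio of syllabic-band to supra-syllabic-band envelope power:
    # larger values mean syllable-level rhythms dominate.
    ratio = band_power(*syllabic_band) / band_power(*supra_band)
    return center, ratio
```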
{"title":"Age-related increases in speech rhythm in typically developing children.","authors":"Grace Gervino, Janina Boecher, Ho Ming Chow, Emily Garnett, Soo-Eun Chang, Evan Usler","doi":"10.1121/10.0042238","DOIUrl":"https://doi.org/10.1121/10.0042238","url":null,"abstract":"<p><p>The purpose of the current study was to examine speech rhythm in typically developing children throughout the preschool and school-aged years. A better understanding of speech rhythm during childhood and potential differences between the sexes provides insight into the development of speech-language abilities. Fifty-eight participants (29 males/29 females) aged three to nine years were included in the study. Audio recordings of participants' speech production were collected during a narrative task. Envelope-based measures, which conceptualize speech rhythm as periodicity in the acoustic envelope, were computed. Separate general linear models were performed for each of the rhythm measures. Envelope-based measures (e.g., center of envelope power, supra-syllabic band power ratio) indicated that as children aged, their speech contained more high-frequency content and became dominated by syllabic-level rhythms. Findings suggest that both sexes exhibited a similar refinement of speech rhythm as evidenced by increases in envelope-based measures, with speech production developing a more syllabic rhythmic structure during the preschool and school-age years.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"159 1","pages":"373-383"},"PeriodicalIF":2.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-reciprocal systems have been shown to exhibit various interesting wave phenomena, such as the non-Hermitian skin effect, which causes accumulation of modes at boundaries. Recent research on discrete systems showed that this effect can pose a barrier for waves hitting an interface between reciprocal and non-reciprocal systems. Under certain conditions, however, waves can tunnel through this barrier, similar to the tunneling of particles in quantum mechanics. This work proposes and investigates an active acoustic metamaterial design to realize this tunneling phenomenon in the acoustical wave domain. The metamaterial consists of an acoustic waveguide with microphones and loudspeakers embedded in its wall. Starting from a purely discrete non-Hermitian lattice model of the system, a hybrid continuous-discrete acoustic model is derived, resulting in distributed feedback control laws to realize the desired behavior for acoustic waves. The proposed control laws are validated using frequency and time domain finite element method simulations, which include lumped electro-acoustic loudspeaker models. Additionally, an experimental demonstration is performed using a waveguide with embedded active unit cells and a digital implementation of the control laws. In both the simulations and experiments, the tunneling phenomenon is successfully observed.
{"title":"Realizing non-Hermitian tunneling phenomena using non-reciprocal active acoustic metamaterialsa),b).","authors":"Felix Langfeldt, Joe Tan, Sayan Jana, Lea Sirota","doi":"10.1121/10.0041858","DOIUrl":"https://doi.org/10.1121/10.0041858","url":null,"abstract":"<p><p>Non-reciprocal systems have been shown to exhibit various interesting wave phenomena, such as the non-Hermitian skin effect, which causes accumulation of modes at boundaries. Recent research on discrete systems showed that this effect can pose a barrier for waves hitting an interface between reciprocal and non-reciprocal systems. Under certain conditions, however, waves can tunnel through this barrier, similar to the tunneling of particles in quantum mechanics. This work proposes and investigates an active acoustic metamaterial design to realize this tunneling phenomenon in the acoustical wave domain. The metamaterial consists of an acoustic waveguide with microphones and loudspeakers embedded in its wall. Starting from a purely discrete non-Hermitian lattice model of the system, a hybrid continuous-discrete acoustic model is derived, resulting in distributed feedback control laws to realize the desired behavior for acoustic waves. The proposed control laws are validated using frequency and time domain finite element method simulations, which include lumped electro-acoustic loudspeaker models. Additionally, an experimental demonstration is performed using a waveguide with embedded active unit cells and a digital implementation of the control laws. In both the simulations and experiments, the tunneling phenomenon is successfully observed.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4900-4911"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145794246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed acoustic sensing (DAS) with horizontal fibers has recently begun to be utilized for offshore seismic imaging. During a field experiment in the North Sea, using a fiber crossing a gas pipeline, we observed anomalous wave arrivals on a specific range of channels and shot gathers. We analyzed the arrivals and interpret them as shear waves (S-waves) that are generated when the compressional direct waves impinge on the pipeline. The S-waves subsequently propagate through the pipeline and are recorded on the fiber section crossing the pipeline. With increasing use of fiber networks for seismic acquisition, this P-S converted wave may be observed more often in future surveys. Our analysis shows the pipeline acting as a waveguide over several hundred meters for signals generated in the water column. These insights may be useful for DAS-based offshore pipeline monitoring. In addition to the arrivals generated during the active acquisition, we analyzed transient signals occurring at the crossing in the passive data. While their distribution over time correlates with the tides, their generation mechanism remains unclear. No periodic signals that could be attributed to flow in the pipeline were observed in the vicinity of the crossing.
{"title":"Observations from a fiber-pipeline crossing during active and passive seismic acquisition using distributed acoustic sensing.","authors":"Kevin Growe, Martin Landrø, Espen Birger Raknes","doi":"10.1121/10.0039544","DOIUrl":"https://doi.org/10.1121/10.0039544","url":null,"abstract":"<p><p>Distributed acoustic sensing (DAS) with horizontal fibers has recently begun to be utilized for offshore seismic imaging. During a field experiment in the North Sea, using a fiber crossing a gas pipeline, we observed anomalous wave arrivals on a specific range of channels and shot gathers. We analyzed the arrivals and interpret them as shear waves (S-waves) that are generated when the compressional direct waves impinge on the pipeline. The S-waves subsequently propagate through the pipeline and are recorded on the fiber section crossing the pipeline. With an increased usage of the fiber network for seismic acquisition, this P-S converted wave may be observed more often in future acquisitions. Our analysis shows the pipeline acting as a wave guide over several hundred meters for signals generated in the water column. These insights may be useful for DAS-based offshore pipeline monitoring. In addition to the arrivals generated during the active acquisition, we analyzed transient signals occurring at the crossing in the passive data. While their distribution over time correlates with the tides, their generation mechanism remains unclear. No periodic signals that could be attributed to the flow in the pipeline were observed in the vicinity of the crossing.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4825-4837"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brent K Hoffmeister, Kate E Hazelwood, Hugh E Ferguson, Layla K Lammers, Keith T Hoffmeister, Emily E Bingham
Ultrasonic backscatter techniques are being developed to detect changes in cancellous bone caused by osteoporosis. Clinical implementation of these techniques may use a hand-held transducer pressed against the body. Variations in transducer angle with respect to the bone surface may cause errors in the backscatter measurements. The goal of this study was to evaluate the sensitivity of backscatter parameters to these errors. Six parameters previously identified as potentially useful for ultrasonic bone assessment were investigated: apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), frequency intercept of apparent backscatter, normalized mean of the backscatter difference, normalized backscatter amplitude ratio, and the backscatter amplitude decay constant. Measurements were performed on specimens prepared from an open-cell rigid polymer foam coated with a thin layer of epoxy to simulate cancellous bone with an outer cortex. Data were collected using a 3.5 MHz transducer for angles of incidence ranging from 0° to 30° relative to the normal of the specimen surface. AIB and FSAB demonstrated the greatest sensitivity to angle-dependent errors. The source of error was identified as reflection and attenuation losses caused by the cortex. A theoretical model was developed and experimentally validated to predict these losses.
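For concreteness, a sketch of the first three parameters under their conventional definitions: the apparent backscatter transfer function is the dB difference between the gated backscatter spectrum and a reference spectrum, AIB is its mean over the analysis band, and FSAB and the frequency intercept are the slope and intercept of its linear fit. The windowing and the 2-5 MHz band below are illustrative assumptions:

```python
import numpy as np

def apparent_backscatter_params(rf, ref, fs, band=(2.0e6, 5.0e6)):
    """AIB (dB), FSAB (dB/Hz), and frequency intercept (dB) -- sketch."""
    n = len(rf)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = 20 * np.log10(np.abs(np.fft.rfft(rf * np.hanning(n))) + 1e-12)
    refs = 20 * np.log10(np.abs(np.fft.rfft(ref * np.hanning(len(ref)), n=n)) + 1e-12)
    abtf = spec - refs                      # apparent backscatter transfer function (dB)
    m = (f >= band[0]) & (f <= band[1])
    aib = abtf[m].mean()                    # apparent integrated backscatter
    fsab, intercept = np.polyfit(f[m], abtf[m], 1)  # slope and intercept of linear fit
    return aib, fsab, intercept
```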
{"title":"Effect of angle of incidence on backscatter methods of ultrasonic bone assessment.","authors":"Brent K Hoffmeister, Kate E Hazelwood, Hugh E Ferguson, Layla K Lammers, Keith T Hoffmeister, Emily E Bingham","doi":"10.1121/10.0041862","DOIUrl":"https://doi.org/10.1121/10.0041862","url":null,"abstract":"<p><p>Ultrasonic backscatter techniques are being developed to detect changes in cancellous bone caused by osteoporosis. Clinical implementation of these techniques may use a hand-held transducer pressed against the body. Variations in transducer angle with respect to the bone surface may cause errors in the backscatter measurements. The goal of this study was to evaluate the sensitivity of backscatter parameters to these errors. Six parameters previously identified as potentially useful for ultrasonic bone assessment were investigated: apparent integrated backscatter (AIB), frequency slope of apparent backscatter (FSAB), frequency intercept of apparent backscatter, normalized mean of the backscatter difference, normalized backscatter amplitude ratio, and the backscatter amplitude decay constant. Measurements were performed on specimens prepared from a polymer open cell rigid foam coated with a thin layer of epoxy to simulate cancellous bone with an outer cortex. Data were collected using a 3.5 MHz transducer for angles of incidence ranging from 0° to 30° relative to the specimen surface perpendicular. AIB and FSAB demonstrated the greatest sensitivity to angle-dependent errors. The source of error was identified as reflection and attenuation losses caused by the cortex. A theoretical model was developed and experimentally validated to predict these losses.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4857-4869"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145781363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study proposes a method for visualizing sound fields by utilizing midair nonlinear acoustic phenomena in a spatially localized manner. Conventional microphone-array-based sound field visualization methods require multi-channel synchronous signal processing that handles phase information of the observed waveforms, which hinders production of cost-effective recording devices. Additionally, the inserted microphones themselves can disturb the measured sound field, and artifacts owing to the spacing between microphones may arise. To address these issues, the study introduces a measurement method that involves scanning the focal point of converging ultrasonic beams across the target sound field. The ultrasonic focus generates secondary parametric waves via frequency modulation of the target sound field only near the focal point, owing to the acoustic nonlinear effect. The visualization of the target field is completed by demodulating these waves, measured with a single immobilized microphone located outside the field. This technique achieves spatially selective recording by steering the ultrasonic focus, which serves as a parametric probe, allowing the target sound field information to be reconstructed from a monaural recorded signal. The approach enables sound field visualization over regions spanning hundreds of millimeters from a single-channel recording, with no recording elements densely arranged in the target sound field.
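One plausible building block for the demodulation step is lock-in detection of a known probe tone in the single-microphone signal; the tone frequency f0 and the filter settings below are hypothetical, and the paper's actual demodulation chain may differ:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lockin_amplitude(x, fs, f0, bw=20.0):
    """Amplitude of the component at f0 in x, via complex lock-in (sketch)."""
    t = np.arange(len(x)) / fs
    ref = np.exp(-2j * np.pi * f0 * t)       # complex reference oscillator
    b, a = butter(4, bw / (fs / 2))          # low-pass isolates the baseband term
    z = filtfilt(b, a, x * ref)
    return 2 * np.abs(z).mean()              # factor 2 restores the tone amplitude

# Scanning the ultrasonic focus over a grid of positions and evaluating
# lockin_amplitude for each recording would yield the visualization map.
```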
{"title":"Visualization of sound source positions using pinpoint nonlinear secondary emission by ultrasound focus scanning.","authors":"Shihori Kozuka, Keisuke Hasegawa, Takaaki Nara","doi":"10.1121/10.0041888","DOIUrl":"https://doi.org/10.1121/10.0041888","url":null,"abstract":"<p><p>This study proposes a method for visualizing sound fields utilizing midair nonlinear acoustic phenomena in a spatially localized manner. Conventional microphone-array-based sound field visualization method requires multi-channel synchronous signal processing that handles phase information of the observed waveforms, which inevitably hinders production of cost-effective recording devices. Additionally, the inserted microphones themselves can disturb the measured sound field, and artifacts owing to the spacing between microphones may arise. To address these issues, the study introduces a measurement method that involves scanning a focal point of converging ultrasonic beams in the target sound field. The ultrasonic focus generates secondary parametric waves via frequency modulation of the target sound field only near the focal point due to the acoustic nonlinear effect. The visualization of the target field is completed by demodulating these waves measured with a single immobilized microphone located outside the field. This technique achieves spatial selectivity of recording via steering of the ultrasonic focus serving as a parametric probe, allowing the target sound field information to be reconstructed from a monaural recorded signal. This approach of sound field visualization ranging over hundreds of millimeters is based on a single-channel recording, where no recording elements densely arranged in the target sound field are required.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4816-4824"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145768599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Passive acoustic monitoring is critical for long-term odontocete monitoring using autonomous recording devices. However, technical constraints, such as storage capacity and data processing limitations, often require temporal subsampling. This study investigates how varying duty cycles (50%-10%) and listening periods (1 min to 6 h) affect the detection of delphinid whistles and clicks, and harbor porpoise clicks. Two types of instruments were used: broadband recorders for whistles and F-PODs for clicks. As each device offers different configuration options, subsampling schemes were tailored to each signal type. The impact of duty cycles on seasonal patterns was evaluated using daily detection positive minutes and hours, and diel patterns were assessed using hourly positive minutes and daily detection positive minutes ratios. Results indicate that higher duty cycles (50%) better preserve temporal pattern representations, particularly in high-activity sites, across both instruments and signal types. Lower duty cycles reduce the quality of data representation, especially in low-activity areas. Short listening periods (5-30 min) most closely approximate metrics from continuous recordings. These findings highlight the importance of adapting subsampling strategies to instrument capabilities and the overall level of acoustic activity, which varies across taxa and sites, to obtain an accurate representation of odontocete acoustic presence.
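As an illustration of the subsampling logic (not the authors' exact pipeline), the sketch below imposes a duty cycle on a minute-resolution detection series and compares daily detection positive minutes (DPM) under a simple 1/duty-cycle correction; the synthetic data and the 30-of-60-min scheme are assumptions:

```python
import numpy as np

def daily_dpm(det, minutes_per_day=1440):
    # Sum detection-positive minutes within each day.
    return det.reshape(-1, minutes_per_day).sum(axis=1)

def apply_duty_cycle(det, on, cycle):
    # Keep only minutes that fall inside the listening period.
    listened = (np.arange(det.size) % cycle) < on
    return np.where(listened, det, 0)

rng = np.random.default_rng(0)
full = (rng.random(1440 * 30) < 0.05).astype(int)   # 30 synthetic days
sub = apply_duty_cycle(full, on=30, cycle=60)        # 50% duty cycle
# Compare the continuous record with a 1/duty-cycle corrected estimate.
dpm_full = daily_dpm(full)
dpm_est = daily_dpm(sub) * (60 / 30)
```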
{"title":"Effects of duty cycle on passive acoustic monitoring metrics: The case of odontocete vocalizations.","authors":"Mathilde Michel, Julie Béesau, Maëlle Torterotot, Nicole Todd, Flore Samaran","doi":"10.1121/10.0039925","DOIUrl":"https://doi.org/10.1121/10.0039925","url":null,"abstract":"<p><p>Passive acoustic monitoring is critical for long-term odontocete monitoring using autonomous recording devices. However, technical constraints, such as storage capacity and data processing limitations, often require temporal subsampling. This study investigates how varying duty cycles (50%-10%) and listening periods (1 min to 6 h) affect the detection of delphinid whistles and clicks, and harbor porpoise clicks. Two types of instruments were used: broadband recorders for whistles and F-PODs for clicks. As each device offers different configuration options, subsampling schemes were tailored to each signal type. The impact of duty cycles on seasonal patterns was evaluated using daily detection positive minutes and hours and diel patterns were assessed using hourly positive minutes and daily detection positive minutes ratios. Results indicate that higher duty cycles (50%) better preserve temporal pattern representations, particularly in high-activity sites, across both instruments and signal types. Lower duty cycles reduce the quality of data representation, especially in low-activity areas. Short listening periods (5-30 min) most closely approximate metrics from continuous recordings. These findings highlight the importance of adapting subsampling strategies to instrument capabilities and the overall level of acoustic activity, which varies across taxa and sites, to obtain an accurate representation of odontocete acoustic presence.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"5033-5046"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Raphael Cueille, Nicolas Grimault, Mathieu Lavandier
Hearing-impaired (HI) listeners experience difficulty understanding speech in noisy environments. The aim of this study was to determine whether they are more affected by the detrimental effects of reverberation than normal-hearing (NH) listeners. Intelligibility tests were conducted with NH listeners and HI listeners with various degrees of hearing loss, using headphones and real-room binaural impulse responses to simulate several spatial configurations. This allowed us to investigate different effects of reverberation on speech intelligibility in noise: the temporal smearing of the target speech impairing its intelligibility, the temporal smearing of modulated noise maskers reducing the opportunity for dip listening, and the detrimental effects of reverberation on spatial release from masking (SRM). The results indicate that the HI and NH listeners were similarly affected by reverberation. The tested conditions did not reveal any effect of hearing loss asymmetry on SRM, potentially because the main asymmetries were partly compensated for by the linear amplification applied to the stimuli for the HI listeners. Finally, the data could be described reasonably well using a binaural speech intelligibility model.
{"title":"Similarity of the effects of reverberation on speech intelligibility in noise for hearing-impaired and normal-hearing listeners.","authors":"Raphael Cueille, Nicolas Grimault, Mathieu Lavandier","doi":"10.1121/10.0041883","DOIUrl":"https://doi.org/10.1121/10.0041883","url":null,"abstract":"<p><p>Hearing-impaired (HI) listeners experience difficulties to understand speech in noisy environments. The aim of this study was to determine whether they are more affected by the detrimental effects of reverberation than normal-hearing (NH) listeners. Intelligibility tests were done for NH listeners and HI listeners with various degrees of hearing loss, using headphones and real-room binaural impulse responses to simulate several spatial configurations. This allowed us to investigate different effects of reverberation on speech intelligibility in noise: the temporal smearing of the target speech impairing its intelligibility, the temporal smearing of modulated noise maskers reducing the opportunity for dip listening, and the detrimental effects of reverberation on spatial release from masking (SRM). The results indicate that the HI and NH listeners were similarly affected by reverberation. The tested conditions did not reveal any effect of hearing loss asymmetry on SRM, potentially because the main asymmetries were partly compensated for by the linear amplification applied to the stimuli for the HI listeners. Finally, the data could be described reasonably well using a binaural speech intelligibility model.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4994-5007"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145810465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Victoria A Sevich, Davia J Williams, Terrin N Tamati
Lexical difficulty impacts vowel production in adults with normal hearing. Specifically, adults with normal hearing hyperarticulate vowels in lexically hard words relative to lexically easy words. For adult cochlear implant users, auditory deprivation and subsequent exposure to a degraded auditory signal may modify phonological representations, potentially altering lexically conditioned phonetic variation. The objective of the current study was to compare vowel production in lexically hard and easy words between adults with cochlear implants and their normal-hearing peers. Participants read isolated monosyllabic words that varied in lexical difficulty, and vowel dispersion was calculated to assess vowel production differences based on lexical difficulty and hearing status. Results revealed that cochlear implant users and their normal-hearing peers hyperarticulated vowels in hard words relative to easy words, consistent with previous studies. However, no differences in vowel production between the normal-hearing and cochlear implant talkers based on lexical difficulty were found, although overall differences in the production of /i/ and /ɪ/ were observed between the two hearing groups. These findings demonstrate that adult cochlear implant users exhibit vowel production patterns comparable to those of normal-hearing adults, potentially reflecting similarities in phonological representations across hearing groups.
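Vowel dispersion is commonly computed as the Euclidean distance of each token from the talker's vowel-space center in the F1-F2 plane; whether the study used Hz, Bark, or normalized formants is not stated here, so the units below are assumptions:

```python
import numpy as np

def vowel_dispersion(f1, f2):
    """Distance of each token from the talker's F1-F2 vowel-space center."""
    pts = np.column_stack([np.asarray(f1, float), np.asarray(f2, float)])
    center = pts.mean(axis=0)
    return np.linalg.norm(pts - center, axis=1)

# Hypothetical tokens (F1, F2 in Hz); larger dispersion values indicate
# more peripheral, i.e., hyperarticulated, productions.
disp = vowel_dispersion([310, 650, 820], [2300, 1800, 1250])
```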
{"title":"Effects of lexical difficulty on vowel production in adults with cochlear implants and normal hearinga).","authors":"Victoria A Sevich, Davia J Williams, Terrin N Tamati","doi":"10.1121/10.0041790","DOIUrl":"https://doi.org/10.1121/10.0041790","url":null,"abstract":"<p><p>Lexical difficulty impacts vowel production in adults with normal hearing. Specifically, adults with normal hearing hyperarticulate vowels in lexically hard words relative to lexically easy words. For adult cochlear implant users, auditory deprivation and subsequent exposure to a degraded auditory signal may modify phonological representations, potentially altering lexically conditioned phonetic variation. The objective of the current study was to compare vowel production in lexically hard and easy words in adults with cochlear implants and normal hearing peers. Participants read isolated monosyllabic words that varied in lexical difficulty, and vowel dispersion was calculated to assess vowel production differences based on lexical difficulty and hearing status. Results revealed that cochlear implant users and their normal hearing peers hyperarticulated vowels in hard words relative to easy words, consistent with previous studies. However, no differences in vowel production between the normal hearing and cochlear implant talkers based on lexical difficulty were found, although overall differences in the production of /i/ and /ɪ/ were observed between the two hearing groups. These findings demonstrate that adult cochlear implant users exhibit vowel production patterns comparable to those of normal hearing adults, potentially reflecting similarities in phonological representations across hearing groups.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4679-4696"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145757050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conductive hearing loss typically results from ossicular chain abnormalities, most commonly ossicular fixation or separation. While a precise diagnosis is useful for surgeons, distinguishing between fixation and separation before surgery is challenging. In our previous studies, we reported that sweep frequency impedance (SFI) effectively detects such middle-ear pathologies. However, because of its prolonged sound stimuli, SFI was susceptible to noise. In this study, we introduce a novel method using short-time stimulation and adaptive noise reduction to improve SFI performance. The method was applied to both healthy individuals and patients, and a support vector machine was employed to evaluate its accuracy in distinguishing fixation from separation in clinical practice. The proposed SFI yielded results consistent with the original SFI meter but significantly shortened the evaluation time to within 200 ms. Classification results indicate that the SFI achieved accuracies of 98% and 83% for detecting ossicular separation and fixation, respectively. In contrast, the corresponding accuracies for traditional tympanometry were 70% and 49%. Additionally, the study indicates that gentle lullabies can serve as effective acoustic stimuli. These results suggest that the new SFI has potential for middle-ear testing across all age groups, from newborns to the elderly.
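A minimal sketch of the classification stage as described: a support vector machine over SFI-derived features with cross-validated accuracy. The feature matrix below is a random placeholder; the real features extracted from the SFI curves are not specified in the abstract:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: rows = ears, columns = SFI-derived features.
X = np.random.default_rng(1).normal(size=(60, 2))
y = np.repeat([0, 1], 30)                 # 0 = ossicular separation, 1 = fixation

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
accuracy = cross_val_score(clf, X, y, cv=5).mean()
```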
{"title":"Detection of ossicular chain pathologies using sweep frequency impedance with short-time stimulation and adaptive noise reduction.","authors":"Di Zhou, Teruki Toya, Hisashi Sugimoto, Wataru Takei, Ryuichi Nakajima, Tomokazu Yoshizaki, Michio Murakoshi","doi":"10.1121/10.0041762","DOIUrl":"https://doi.org/10.1121/10.0041762","url":null,"abstract":"<p><p>Conductive hearing loss typically results from ossicular chain abnormalities, commonly ossicular fixation or separation. While a precise diagnosis is useful for surgeons, distinguishing between fixation and separation before surgery is challenging. In our previous studies, we reported that sweep frequency impedance (SFI) effectively detects such middle-ear pathologies. However, due to the prolonged sound stimuli, SFI exhibited weaker resistance to noise. In this study, we introduce a novel method using short-time stimulation and adaptive noise reduction to improve SFI performance. The method was applied to both healthy individuals and patients, and a support vector machine was employed to evaluate its accuracy in distinguishing fixation and separation in clinical practice. The proposed SFI yielded results consistent with the original SFI meter but significantly shortened the evaluation time to within 200 ms. Classification results indicate that the SFI achieved accuracies of 98% and 83% for detecting ossicular separation and fixation, respectively. In contrast, such accuracies of traditional tympanometry were 70% and 49% for the separation and fixation. Additionally, the study indicates that gentle lullabies can serve as effective acoustic stimuli. These results suggest that our new SFI has potential for middle-ear testing across all age groups, from newborns to the elderly.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4321-4334"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145661410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tao Zhuang, Longbiao He, Feng Niu, Jia-Xin Zhong, Jing Lu
Multi-channel parametric array loudspeaker (MCPAL) systems offer enhanced flexibility and promise for generating highly directional audio beams in real-world applications. However, efficient and accurate prediction of their generated sound fields remains a major challenge due to the complex nonlinear behavior and multi-channel signal processing involved. To overcome this obstacle, we propose a k-space approach for modeling arbitrary MCPAL systems arranged on a baffled planar surface. In our method, the linear ultrasound field is first solved using the angular spectrum approach, and the quasilinear audio sound field is subsequently computed efficiently in k-space. By leveraging three-dimensional fast Fourier transforms, our approach not only achieves high computational efficiency but also maintains accuracy without relying on the paraxial approximation. For the typical configurations studied, the proposed method demonstrates a speed-up of more than four orders of magnitude compared to the direct integration method. The proposed approach paves the way for simulating and designing advanced MCPAL systems.
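The angular spectrum step the method builds on is standard and easy to sketch: the pressure on the source plane is propagated to a parallel plane by multiplying its 2D spatial spectrum by exp(i kz z), with kz = sqrt(k² - kx² - ky²). This covers only that linear step; the quasilinear audio-field computation in k-space, the paper's main contribution, is omitted, and the grid and medium parameters are assumptions:

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, f, z, c=343.0):
    """Propagate complex pressure p0 on the plane z=0 to distance z (sketch)."""
    k = 2 * np.pi * f / c
    ny, nx = p0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    # Real kz propagates; the +0j makes evanescent components decay.
    kz = np.sqrt(k**2 - KX**2 - KY**2 + 0j)
    return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))
```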
{"title":"A k-space approach to modeling multi-channel parametric array loudspeaker systems.","authors":"Tao Zhuang, Longbiao He, Feng Niu, Jia-Xin Zhong, Jing Lu","doi":"10.1121/10.0041853","DOIUrl":"https://doi.org/10.1121/10.0041853","url":null,"abstract":"<p><p>Multi-channel parametric array loudspeaker (MCPAL) systems offer enhanced flexibility and promise for generating highly directional audio beams in real-world applications. However, efficient and accurate prediction of their generated sound fields remains a major challenge due to the complex nonlinear behavior and multi-channel signal processing involved. To overcome this obstacle, we propose a k-space approach for modeling arbitrary MCPAL systems arranged on a baffled planar surface. In our method, the linear ultrasound field is first solved using the angular spectrum approach, and the quasilinear audio sound field is subsequently computed efficiently in k-space. By leveraging three-dimensional fast Fourier transforms, our approach not only achieves high computational efficiency but also maintains accuracy without relying on the paraxial approximation. For typical configurations studied, the proposed method demonstrates a speed-up of more than 4 orders of magnitude, compared to the direct integration method. Our proposed approach paved the way for simulating and designing advanced MCPAL systems.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"158 6","pages":"4651-4661"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145742322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}