Effects of background noise on autonomic arousal (skin conductance level).
Ann Alvar, Alexander L Francis. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024272

This study was designed to investigate the relationship between sound level and autonomic arousal using acoustic signals similar in level and acoustic properties to common sounds in the built environment. Thirty-three young adults were exposed to background sound modeled on ventilation equipment noise, presented at levels ranging from 35 to 75 dBA sound pressure level (SPL) in 2-min blocks while they sat and read quietly. Autonomic arousal was measured in terms of skin conductance level. Results suggest that there is a direct relationship between sound level and arousal, even at these realistic levels. However, the effect of habituation appears to be more important overall.
Mapping palatal shape to electromagnetic articulography data: An approach using 3D scanning and sensor matching.
Yukiko Nota, Tatsuya Kitamura, Hironori Takemoto, Kikuo Maekawa. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024215

A method for superimposing the shape of the palate on three-dimensional (3D) electromagnetic articulography (EMA) data is proposed. A biteplate with a dental impression tray and EMA sensors is used to obtain the palatal shape and record the sensor positions. The biteplate is then 3D scanned, and the scanned palate is mapped to the EMA data by matching the sensor positions on the scanned image with those in the EMA readings. The average distance between the mapped palate and the EMA palate traces is roughly 1 mm for nine speakers and is comparable to the measurement error of the EMA.
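The sensor-matching step described above amounts to estimating a rigid transform between the same sensor positions seen in two coordinate systems. The paper does not specify its fitting procedure, so the following is a minimal sketch using the standard Kabsch (orthogonal Procrustes) algorithm with synthetic points; all names and data are illustrative.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) mapping point set P onto Q (both (N, 3)),
    minimizing RMS distance between corresponding sensor positions."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Illustrative: the same sensors located on the 3D scan and in EMA space.
rng = np.random.default_rng(0)
scan_sensors = rng.uniform(-10, 10, size=(5, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
ema_sensors = scan_sensors @ R_true.T + np.array([1.0, 2.0, 3.0])

R, t = kabsch(scan_sensors, ema_sensors)
mapped = scan_sensors @ R.T + t               # scanned palate in EMA coordinates
print(np.allclose(mapped, ema_sensors, atol=1e-6))  # → True
```

Once (R, t) is estimated from the sensors, the same transform can be applied to every vertex of the scanned palate mesh to place it in EMA coordinates.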
Auditory perception of the thickness of plates.
Samuel Poirot, Antoine Bourachot, Stefan Bilbao, Richard Kronland-Martinet. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024216

This study focuses on the auditory perception of plate thickness and investigates acoustic cues that evoke thickness in the context of sound synthesis. Three hypotheses are proposed and tested through a listening test, examining the influence of damping, nonlinear phenomena, and modal frequencies on the perceived thickness of sound sources. The stimuli are generated using the numerical resolution of the Föppl-von Kármán system. We confirm that increasing the overall damping leads to an increased perceived thickness. Additionally, the emergence of an energy cascade toward higher frequencies (characteristic of thin plates) for impacts of increasing intensity evokes a thinner object.
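One reason modal frequencies can cue thickness is that, in linear (Kirchhoff) plate theory, they scale linearly with plate thickness. The study itself uses the nonlinear Föppl-von Kármán system, so the following is only a back-of-envelope sketch of the linear case for a simply supported rectangular plate, with illustrative steel parameters.

```python
import math

def plate_mode_hz(m, n, Lx, Ly, h, E=2e11, rho=7850.0, nu=0.3):
    """Modal frequency (Hz) of a simply supported Kirchhoff plate:
    f_mn = (pi/2) * ((m/Lx)^2 + (n/Ly)^2) * sqrt(D / (rho*h)),
    with bending stiffness D = E*h^3 / (12*(1 - nu^2))."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))
    return (math.pi / 2.0) * ((m / Lx)**2 + (n / Ly)**2) * math.sqrt(D / (rho * h))

f_thin = plate_mode_hz(1, 1, 0.5, 0.4, h=0.001)   # 1 mm steel plate
f_thick = plate_mode_hz(1, 1, 0.5, 0.4, h=0.002)  # 2 mm steel plate
print(f_thick / f_thin)  # → 2.0: since D/(rho*h) ∝ h^2, f scales linearly with h
```

Doubling the thickness exactly doubles every modal frequency in this linear model; nonlinear effects such as the energy cascade for hard impacts are what the Föppl-von Kármán system adds on top.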
Performance of three hydrophone flow shields in a tidal channel.
Emma Cotter, James McVey, Linnea Weicht, Joseph Haxel. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024333

Pseudosound caused by turbulent pressure fluctuations in fluid flow past a hydrophone, referred to as flow noise, can mask propagating sounds of interest. Flow shields can mitigate flow noise by reducing non-acoustic pressure fluctuations sensed by a hydrophone. We evaluate the performance of three hydrophone flow shields (two nylon fabrics and an oil-filled enclosure) in a tidal channel with a peak current speed of 1.3 m/s. All three flow shields reduced flow noise without attenuating propagating sound below 20 kHz. The oil-filled enclosure performed best, reducing flow noise by over 30 dB at frequencies below 40 Hz.
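Flow-noise reduction figures like the 30 dB above are typically computed by comparing power spectral densities with and without the shield over a frequency band. The paper's exact processing is not given here, so this is a generic sketch with synthetic spectra standing in for measured data.

```python
import numpy as np

def band_reduction_db(freqs, psd_unshielded, psd_shielded, fmax):
    """Mean level reduction (dB) below fmax between two hydrophone power
    spectral densities given in linear units (e.g., uPa^2/Hz)."""
    band = freqs <= fmax
    level_un = 10.0 * np.log10(psd_unshielded[band])
    level_sh = 10.0 * np.log10(psd_shielded[band])
    return float(np.mean(level_un - level_sh))

# Illustrative spectra (not measured data): a red, flow-noise-dominated
# spectrum, and a shielded version 30 dB lower at all frequencies.
freqs = np.linspace(1.0, 100.0, 100)
psd_bare = 1e6 / freqs**2
psd_shielded = psd_bare / 1000.0      # factor 1000 in power = 30 dB

print(band_reduction_db(freqs, psd_bare, psd_shielded, fmax=40.0))  # ≈ 30.0
```

In practice the two PSDs would come from Welch-style averaging of recordings made under matched flow conditions, and the quoted reduction is only meaningful in bands where flow noise, not ambient sound, dominates the unshielded spectrum.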
Source level of wind-generated ambient sound in the ocean.
N Ross Chapman, Michael A Ainslie, Martin Siderius. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024517

Inference of source levels for ambient ocean sound from local wind at the sea surface requires an assumption about the nature of the sound source. Depending on whether monopole or dipole distributions are assumed, the source levels estimated by different research groups differ by several decibels over the frequency band 10-350 Hz. This paper revisits the question of the source level of local wind-generated sound and shows that these differences can be understood through a simple analysis of the source assumptions.
Learning to identify talkers: Do 4.5-month-old infants distinguish between unfamiliar males?
Madeleine E Yu, Natalie Fecher, Elizabeth K Johnson. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024271

Vocal recognition of socially relevant conspecifics is an important skill throughout the animal kingdom. Human infants recognize their own mother at birth, and they distinguish between unfamiliar female talkers by 4.5 months of age. Can 4.5-month-olds also distinguish between unfamiliar male talkers? To date, no adequately powered study has addressed this question. Here, a visual fixation procedure demonstrates that, unlike adults, 4.5-month-olds (N = 48) are worse at telling apart unfamiliar male voices than they are at telling apart unfamiliar female voices. This result holds despite infants' equal attentiveness to unfamiliar male and female voices.
Reducing preferred listening levels in headphones through coherent audiotactile stimulation.
Eirini Liapikou, Jeremy Marozeau. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024516

Using headphones may expose listeners to potentially harmful levels of sound. This study examines whether adding tactile vibrations to the listening experience encourages listeners to reduce their headphone volume. Fifteen participants adjusted their preferred listening levels for four diverse music tracks under audio-only and audiotactile conditions. Results indicated a significant decrease in preferred audio levels with added tactile stimulation. The effect was most pronounced for songs featuring a strong beat; in contrast, only a minimal effect was observed, even at higher vibration intensities, for genres such as classical music that typically lack a pronounced beat. These findings suggest that integrating tactile feedback could be a viable strategy for lowering sound-exposure risk.
Optimization-based modeling of Lombard speech articulation: Supraglottal characteristics.
Benjamin Elie, Juraj Šimko, Alice Turk. JASA Express Letters, 2024-01-01. doi:10.1121/10.0024364

This paper shows that a highly simplified model of speech production based on the optimization of articulatory effort versus intelligibility can account for some observed articulatory consequences of signal-to-noise ratio. Simulations of static vowels in the presence of various background noise levels show that the model predicts articulatory and acoustic modifications of the type observed in Lombard speech. These features were obtained only when the constraint applied to articulatory effort decreases as the level of background noise increases. These results support the hypothesis that Lombard speech is listener oriented and that speakers adapt their articulation in noisy environments.
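The effort-versus-intelligibility trade-off described above can be illustrated with a deliberately toy cost model (this is not the paper's model; the cost terms, logistic intelligibility curve, and all parameter values are made up for illustration): the speaker picks the output level minimizing a weighted sum of effort cost and intelligibility loss, where intelligibility depends on SNR.

```python
import numpy as np

def optimal_level(noise_db, effort_weight):
    """Toy listener-oriented model: choose the speech level (dB) minimizing
    cost = effort_weight * effort^2 + (1 - intelligibility), where
    intelligibility is a logistic function of SNR = level - noise_db."""
    levels = np.linspace(40.0, 100.0, 601)            # candidate output levels
    snr = levels - noise_db
    intelligibility = 1.0 / (1.0 + np.exp(-0.3 * (snr - 5.0)))
    effort = (levels - 40.0) / 60.0                   # normalized effort in [0, 1]
    cost = effort_weight * effort**2 + (1.0 - intelligibility)
    return levels[np.argmin(cost)]

# Lombard-like behavior emerges only if the effort constraint (the weight)
# relaxes as noise increases, mirroring the paper's key condition.
quiet = optimal_level(noise_db=40.0, effort_weight=1.0)
noisy = optimal_level(noise_db=70.0, effort_weight=0.3)
print(quiet, noisy)  # chosen level rises with background noise
```

With a fixed effort weight the optimum shifts less; letting the weight decrease with noise produces the larger level increases characteristic of Lombard speech, which is the qualitative point of the abstract's third sentence.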
Selective tuning of nasal coarticulation and hyperarticulation across slow-clear, casual, and fast-clear speech styles.
Michelle Cohn, Georgia Zellou. JASA Express Letters, 2023-12-01. doi:10.1121/10.0023841

This study investigates how California English speakers adjust nasal coarticulation and hyperarticulation on vowels across three speech styles: speaking slowly and clearly (imagining a hard-of-hearing addressee), casually (imagining a friend/family member addressee), and speaking quickly and clearly (imagining being an auctioneer). Results show covariation in speaking rate and vowel hyperarticulation across the styles. Additionally, results reveal that, beyond the slower speech rate itself, speakers produce more extensive anticipatory nasal coarticulation in the slow-clear speech style. These findings are interpreted in terms of accounts of coarticulation in which speakers selectively tune their production of nasal coarticulation based on the speaking style.
Short-term exposure alters adult listeners' perception of segmental phonotactics.
Jeremy Steffman, Megha Sundara. JASA Express Letters, 2023-12-01. doi:10.1121/10.0023900

This study evaluates the malleability of adults' perception of probabilistic phonotactic (biphone) probabilities, building on a body of literature on statistical phonotactic learning. It was first replicated that listeners categorize phonetic continua as sounds that create higher-probability sequences in their native language. Listeners were also exposed to skewed distributions of biphone contexts, which resulted in the enhancement or reversal of these effects. Thus, listeners dynamically update biphone probabilities (BPs) and bring this to bear on perception of ambiguous acoustic information. These effects can override long-term BP effects rooted in native language experience.
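The abstract does not spell out how biphone probabilities are computed; a common formulation in the phonotactics literature estimates the conditional probability of the second segment given the first from adjacent-segment counts over a lexicon. A minimal sketch with a toy segment-string "lexicon" (the words and segments are illustrative, not the study's materials):

```python
from collections import Counter

def biphone_probs(words):
    """Estimate biphone probabilities from a list of segment strings:
    P(b | a) = count(ab) / count(a), where count(a) counts occurrences
    of segment a as the left member of an adjacent pair."""
    left_counts, pair_counts = Counter(), Counter()
    for word in words:
        for a, b in zip(word, word[1:]):
            left_counts[a] += 1
            pair_counts[(a, b)] += 1
    return {pair: n / left_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy lexicon: "p" is followed by "i" in 2 of its 3 left-context occurrences.
probs = biphone_probs(["pat", "pit", "tap", "tip", "pin"])
print(probs[("p", "i")])  # → 0.666... (2/3 on this toy corpus)
```

Exposure to a skewed distribution of biphone contexts, as in the study, amounts to updating these counts with new input, which shifts the conditional probabilities that listeners then bring to bear on ambiguous tokens.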