{"title":"声学传感器网络中的语音和音频信号处理与机器学习","authors":"Walter Kellermann, Rainer Martin, Nobutaka Ono","doi":"10.1186/s13636-023-00322-6","DOIUrl":null,"url":null,"abstract":"<p>Nowadays, we are surrounded by a plethora of recording devices, including mobile phones, laptops, tablets, smartwatches, and camcorders, among others. However, conventional multichannel signal processing methods can usually not be applied to jointly process the signals recorded by multiple distributed devices because synchronous recording is essential. Thus, commercially available microphone array processing is currently limited to a single device where all microphones are mounted. The full exploitation of the spatial diversity offered by multiple audio devices without requiring wired networking is a major challenge, whose potential practical and commercial benefits prompted significant research efforts over the past decade.</p><p>Wireless acoustic sensor networks (WASNs) have become a new paradigm of acoustic sensing to overcome the limitations of individual devices. Along with wireless communications between microphone nodes and addressing new challenges in handling asynchronous channels, unknown microphone positions, and distributed computing, the WASN enables us to spatially distribute many recording devices. These may cover a wider area and utilize the nodes to form an extended microphone array. It promises to significantly improve the performance of various audio tasks such as speech enhancement, speech recognition, diarization, scene analysis, and anomalous acoustic event detection.</p><p>For this special issue, six papers were accepted which all address the above-mentioned fundamental challenges when using WASNs: First, the question of which sensors should be used for a specific signal processing task or extraction of a target source is addressed by the papers of Guenther et al. and Kindt et al. Given a set of sensors, a method for its synchronization on waveform level in dynamic scenarios is presented by Chinaev et al., and a localization method using both sensor signals and higher-level environmental information is discussed by Grinstein et al. Finally, robust speaker counting and source separation are addressed by Hsu and Bai and the task of removing specific interference from a single sensor signal is tackled by Kawamura et al.</p><p>The paper ‘Microphone utility estimation in acoustic sensor networks using single-channel signal features’ by Guenther et al. proposes a method to assess the utility of individual sensors of a WASN for coherence-based signal processing, e.g., beamforming or blind source separation, by using appropriate single-channel signal features as proxies for waveforms. Thereby, the need for transmitting waveforms for identifying suitable sensors for a synchronized cluster of sensors is avoided and the required amount of transmitted data can be reduced by several orders of magnitude. 
It is shown that both estimation-theoretic processing of single-channel features and deep learning-based identification of such features lead to measures of coherence in the feature space that reflect the suitability of distributed sensors for coherent processing.</p><p>In the paper ‘Robustness of ad hoc microphone clustering using speaker embeddings: Evaluation under realistic and challenging scenarios’ by Kindt et al., the robustness of speaker embeddings learned from multiple microphone signals as a feature for identifying useful clusters for extracting a speech signal is studied with respect to several key aspects: The dependency on the distance metrics for clustering, the observation interval required for establishing robust clusters determining the stationarity requirements for the acoustic scenario, and the performance for increasingly challenging acoustic scenarios and multiple speakers. For evaluation, a source separation task in realistic noisy and reverberant environments is investigated using several separation techniques applied for the resulting clusters. The proposed speaker embeddings are also compared to established MFCC-based features with respect to multiple state-of-the-art criteria for signal enhancement.</p><p>The paper ‘Online distributed waveform-synchronization for acoustic sensor networks with dynamic topology’ by Chinaev et al. is dedicated to network-wide sample-level synchronization relying on a previously published acoustic waveform-based sampling rate-offset estimation and compensation for pairs of nodes. Assuming that the WASN is organized as a directed minimum spanning tree (MST), the proposed network-wide synchronization scheme propagates from a root node over the entire network. Additionally, a network protocol is proposed that ensures the synchronization even if the network topology changes, e.g., because of node failure, broken transmission links, or newly appearing nodes. The efficacy of the method is demonstrated for dynamic scenes in a simulated dynamic acoustic scenario in an apartment with several rooms.</p><p>In their paper ‘Dual input neural networks for positional sound source localization’, Grinstein et al. combine multiple microphone signals from a distributed microphone array with information describing the acoustic properties of the scene for improved sound source localization. This information includes the positions of microphones, the room size, and the reverberation time. They present a Dual Input Neural Network (DI-NN) as a straightforward and efficient technique to construct a neural network capable of processing two distinct data types. It is tested in different scenarios, comparing it to alternative models such as a traditional least-squares method and a convolutional recurrent neural network. Although the proposed DI-NN is not retraining for each new scenario, the authors’ results demonstrate the superiority of the proposed DI-NN, achieving a substantial reduction in localization errors on synthetic data and a data set with real recordings.</p><p>The paper ‘Learning-based robust speaker counting and separation with the aid of spatial coherence’ by Hsu and Bai tackles speaker counting and speaker separation in noisy and reverberant environments. The authors combine traditional and learning-based methods to enhance these tasks and to achieve robustness to unseen room impulse responses (RIRs) and array configurations. 
They formulate a three-stage approach that entails the computation of a spatial coherence matrix (SCM) based on whitened relative transfer functions (wRTFs) as a spatial signature of directional sources. They evaluate the SCM and local coherence functions to detect the activity of the target speaker. Then, the eigenvalues of the SCM and the maximum similarity of inter-frame global activity distributions between two speakers are fed into a network for speaker counting (SCnet). To extract each independent speaker signal, a global and local activity-driven network (GLADnet) is employed. The authors demonstrate the benefits of the proposed approach on a data set of real meeting recordings.</p><p>The last paper, entitled ‘Acoustic object canceller: removing a known signal from monaural recording using blind synchronization’ by Kawamura et al., addresses the problem of removing undesired interference from individual microphone signals if a reference signal for the interference is available. The authors propose a method that treats the interference as an acoustic object whose signal is linearly filtered before arriving at the receiving microphone. Assuming that the signals of the acoustic object and the microphone exhibit different sampling rates, the signals are first synchronized, and then the frequency response of the propagation path from the object to the microphone is determined via maximum-likelihood estimation using the majorization-minimization algorithm, investigating and evaluating various statistical models for the desired signal that should be preserved.</p><p>We like to thank all authors for their excellent contributions to this special issue and hope that this collection will be a useful resource for research in WASNs in the years to come.</p><h3>Authors and Affiliations</h3><ol><li><p>Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany</p><p>Walter Kellermann</p></li><li><p>Ruhr-Universität Bochum, Bochum, Germany</p><p>Rainer Martin</p></li><li><p>Tokyo Metropolitan University, Hino-shi, Japan</p><p>Nobutaka Ono</p></li></ol><span>Authors</span><ol><li><span>Walter Kellermann</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li><li><span>Rainer Martin</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li><li><span>Nobutaka Ono</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li></ol><h3>Corresponding author</h3><p>Correspondence to Walter Kellermann.</p><h3>Publisher's Note</h3><p>Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p><p><b>Open Access</b> This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.</p>\n<p>Reprints and Permissions</p><img alt=\"Check for updates. Verify currency and authenticity via CrossMark\" height=\"81\" src=\"data:image/svg+xml;base64,<svg height="81" width="57" xmlns="http://www.w3.org/2000/svg"><g fill="none" fill-rule="evenodd"><path d="m17.35 35.45 21.3-14.2v-17.03h-21.3" fill="#989898"/><path d="m38.65 35.45-21.3-14.2v-17.03h21.3" fill="#747474"/><path d="m28 .5c-12.98 0-23.5 10.52-23.5 23.5s10.52 23.5 23.5 23.5 23.5-10.52 23.5-23.5c0-6.23-2.48-12.21-6.88-16.62-4.41-4.4-10.39-6.88-16.62-6.88zm0 41.25c-9.8 0-17.75-7.95-17.75-17.75s7.95-17.75 17.75-17.75 17.75 7.95 17.75 17.75c0 4.71-1.87 9.22-5.2 12.55s-7.84 5.2-12.55 5.2z" fill="#535353"/><path d="m41 36c-5.81 6.23-15.23 7.45-22.43 2.9-7.21-4.55-10.16-13.57-7.03-21.5l-4.92-3.11c-4.95 10.7-1.19 23.42 8.78 29.71 9.97 6.3 23.07 4.22 30.6-4.86z" fill="#9c9c9c"/><path d="m.2 58.45c0-.75.11-1.42.33-2.01s.52-1.09.91-1.5c.38-.41.83-.73 1.34-.94.51-.22 1.06-.32 1.65-.32.56 0 1.06.11 1.51.35.44.23.81.5 1.1.81l-.91 1.01c-.24-.24-.49-.42-.75-.56-.27-.13-.58-.2-.93-.2-.39 0-.73.08-1.05.23-.31.16-.58.37-.81.66-.23.28-.41.63-.53 1.04-.13.41-.19.88-.19 1.39 0 1.04.23 1.86.68 2.46.45.59 1.06.88 1.84.88.41 0 .77-.07 1.07-.23s.59-.39.85-.68l.91 1c-.38.43-.8.76-1.28.99-.47.22-1 .34-1.58.34-.59 0-1.13-.1-1.64-.31-.5-.2-.94-.51-1.31-.91-.38-.4-.67-.9-.88-1.48-.22-.59-.33-1.26-.33-2.02zm8.4-5.33h1.61v2.54l-.05 1.33c.29-.27.61-.51.96-.72s.76-.31 1.24-.31c.73 0 1.27.23 1.61.71.33.47.5 1.14.5 2.02v4.31h-1.61v-4.1c0-.57-.08-.97-.25-1.21-.17-.23-.45-.35-.83-.35-.3 0-.56.08-.79.22-.23.15-.49.36-.78.64v4.8h-1.61zm7.37 6.45c0-.56.09-1.06.26-1.51.18-.45.42-.83.71-1.14.29-.3.63-.54 1.01-.71.39-.17.78-.25 1.18-.25.47 0 .88.08 1.23.24.36.16.65.38.89.67s.42.63.54 1.03c.12.41.18.84.18 1.32 0 .32-.02.57-.07.76h-4.36c.07.62.29 1.1.65 1.44.36.33.82.5 1.38.5.29 0 .57-.04.83-.13s.51-.21.76-.37l.55 1.01c-.33.21-.69.39-1.09.53-.41.14-.83.21-1.26.21-.48 0-.92-.08-1.34-.25-.41-.16-.76-.4-1.07-.7-.31-.31-.55-.69-.72-1.13-.18-.44-.26-.95-.26-1.52zm4.6-.62c0-.55-.11-.98-.34-1.28-.23-.31-.58-.47-1.06-.47-.41 0-.77.15-1.07.45-.31.29-.5.73-.58 1.3zm2.5.62c0-.57.09-1.08.28-1.53.18-.44.43-.82.75-1.13s.69-.54 1.1-.71c.42-.16.85-.24 1.31-.24.45 0 .84.08 1.17.23s.61.34.85.57l-.77 1.02c-.19-.16-.38-.28-.56-.37-.19-.09-.39-.14-.61-.14-.56 0-1.01.21-1.35.63-.35.41-.52.97-.52 1.67 0 .69.17 1.24.51 1.66.34.41.78.62 1.32.62.28 0 .54-.06.78-.17.24-.12.45-.26.64-.42l.67 1.03c-.33.29-.69.51-1.08.65-.39.15-.78.23-1.18.23-.46 0-.9-.08-1.31-.24-.4-.16-.75-.39-1.05-.7s-.53-.69-.7-1.13c-.17-.45-.25-.96-.25-1.53zm6.91-6.45h1.58v6.17h.05l2.54-3.16h1.77l-2.35 2.8 2.59 4.07h-1.75l-1.77-2.98-1.08 1.23v1.75h-1.58zm13.69 1.27c-.25-.11-.5-.17-.75-.17-.58 0-.87.39-.87 1.16v.75h1.34v1.27h-1.34v5.6h-1.61v-5.6h-.92v-1.2l.92-.07v-.72c0-.35.04-.68.13-.98.08-.31.21-.57.4-.79s.42-.39.71-.51c.28-.12.63-.18 1.04-.18.24 0 .48.02.69.07.22.05.41.1.57.17zm.48 5.18c0-.57.09-1.08.27-1.53.17-.44.41-.82.72-1.13.3-.31.65-.54 1.04-.71.39-.16.8-.24 1.23-.24s.84.08 1.24.24c.4.17.74.4 1.04.71s.54.69.72 1.13c.19.45.28.96.28 1.53s-.09 1.08-.28 1.53c-.18.44-.42.82-.72 1.13s-.64.54-1.04.7-.81.24-1.24.24-.84-.08-1.23-.24-.74-.39-1.04-.7c-.31-.31-.55-.69-.72-1.13-.18-.45-.27-.96-.27-1.53zm1.65 0c0 
.69.14 1.24.43 1.66.28.41.68.62 1.18.62.51 0 .9-.21 1.19-.62.29-.42.44-.97.44-1.66 0-.7-.15-1.26-.44-1.67-.29-.42-.68-.63-1.19-.63-.5 0-.9.21-1.18.63-.29.41-.43.97-.43 1.67zm6.48-3.44h1.33l.12 1.21h.05c.24-.44.54-.79.88-1.02.35-.24.7-.36 1.07-.36.32 0 .59.05.78.14l-.28 1.4-.33-.09c-.11-.01-.23-.02-.38-.02-.27 0-.56.1-.86.31s-.55.58-.77 1.1v4.2h-1.61zm-47.87 15h1.61v4.1c0 .57.08.97.25 1.2.17.24.44.35.81.35.3 0 .57-.07.8-.22.22-.15.47-.39.73-.73v-4.7h1.61v6.87h-1.32l-.12-1.01h-.04c-.3.36-.63.64-.98.86-.35.21-.76.32-1.24.32-.73 0-1.27-.24-1.61-.71-.33-.47-.5-1.14-.5-2.02zm9.46 7.43v2.16h-1.61v-9.59h1.33l.12.72h.05c.29-.24.61-.45.97-.63.35-.17.72-.26 1.1-.26.43 0 .81.08 1.15.24.33.17.61.4.84.71.24.31.41.68.53 1.11.13.42.19.91.19 1.44 0 .59-.09 1.11-.25 1.57-.16.47-.38.85-.65 1.16-.27.32-.58.56-.94.73-.35.16-.72.25-1.1.25-.3 0-.6-.07-.9-.2s-.59-.31-.87-.56zm0-2.3c.26.22.5.37.73.45.24.09.46.13.66.13.46 0 .84-.2 1.15-.6.31-.39.46-.98.46-1.77 0-.69-.12-1.22-.35-1.61-.23-.38-.61-.57-1.13-.57-.49 0-.99.26-1.52.77zm5.87-1.69c0-.56.08-1.06.25-1.51.16-.45.37-.83.65-1.14.27-.3.58-.54.93-.71s.71-.25 1.08-.25c.39 0 .73.07 1 .2.27.14.54.32.81.55l-.06-1.1v-2.49h1.61v9.88h-1.33l-.11-.74h-.06c-.25.25-.54.46-.88.64-.33.18-.69.27-1.06.27-.87 0-1.56-.32-2.07-.95s-.76-1.51-.76-2.65zm1.67-.01c0 .74.13 1.31.4 1.7.26.38.65.58 1.15.58.51 0 .99-.26 1.44-.77v-3.21c-.24-.21-.48-.36-.7-.45-.23-.08-.46-.12-.7-.12-.45 0-.82.19-1.13.59-.31.39-.46.95-.46 1.68zm6.35 1.59c0-.73.32-1.3.97-1.71.64-.4 1.67-.68 3.08-.84 0-.17-.02-.34-.07-.51-.05-.16-.12-.3-.22-.43s-.22-.22-.38-.3c-.15-.06-.34-.1-.58-.1-.34 0-.68.07-1 .2s-.63.29-.93.47l-.59-1.08c.39-.24.81-.45 1.28-.63.47-.17.99-.26 1.54-.26.86 0 1.51.25 1.93.76s.63 1.25.63 2.21v4.07h-1.32l-.12-.76h-.05c-.3.27-.63.48-.98.66s-.73.27-1.14.27c-.61 0-1.1-.19-1.48-.56-.38-.36-.57-.85-.57-1.46zm1.57-.12c0 .3.09.53.27.67.19.14.42.21.71.21.28 0 .54-.07.77-.2s.48-.31.73-.56v-1.54c-.47.06-.86.13-1.18.23-.31.09-.57.19-.76.31s-.33.25-.41.4c-.09.15-.13.31-.13.48zm6.29-3.63h-.98v-1.2l1.06-.07.2-1.88h1.34v1.88h1.75v1.27h-1.75v3.28c0 .8.32 1.2.97 1.2.12 0 .24-.01.37-.04.12-.03.24-.07.34-.11l.28 1.19c-.19.06-.4.12-.64.17-.23.05-.49.08-.76.08-.4 0-.74-.06-1.02-.18-.27-.13-.49-.3-.67-.52-.17-.21-.3-.48-.37-.78-.08-.3-.12-.64-.12-1.01zm4.36 2.17c0-.56.09-1.06.27-1.51s.41-.83.71-1.14c.29-.3.63-.54 1.01-.71.39-.17.78-.25 1.18-.25.47 0 .88.08 1.23.24.36.16.65.38.89.67s.42.63.54 1.03c.12.41.18.84.18 1.32 0 .32-.02.57-.07.76h-4.37c.08.62.29 1.1.65 1.44.36.33.82.5 1.38.5.3 0 .58-.04.84-.13.25-.09.51-.21.76-.37l.54 1.01c-.32.21-.69.39-1.09.53s-.82.21-1.26.21c-.47 0-.92-.08-1.33-.25-.41-.16-.77-.4-1.08-.7-.3-.31-.54-.69-.72-1.13-.17-.44-.26-.95-.26-1.52zm4.61-.62c0-.55-.11-.98-.34-1.28-.23-.31-.58-.47-1.06-.47-.41 0-.77.15-1.08.45-.31.29-.5.73-.57 1.3zm3.01 2.23c.31.24.61.43.92.57.3.13.63.2.98.2.38 0 .65-.08.83-.23s.27-.35.27-.6c0-.14-.05-.26-.13-.37-.08-.1-.2-.2-.34-.28-.14-.09-.29-.16-.47-.23l-.53-.22c-.23-.09-.46-.18-.69-.3-.23-.11-.44-.24-.62-.4s-.33-.35-.45-.55c-.12-.21-.18-.46-.18-.75 0-.61.23-1.1.68-1.49.44-.38 1.06-.57 1.83-.57.48 0 .91.08 1.29.25s.71.36.99.57l-.74.98c-.24-.17-.49-.32-.73-.42-.25-.11-.51-.16-.78-.16-.35 0-.6.07-.76.21-.17.15-.25.33-.25.54 0 .14.04.26.12.36s.18.18.31.26c.14.07.29.14.46.21l.54.19c.23.09.47.18.7.29s.44.24.64.4c.19.16.34.35.46.58.11.23.17.5.17.82 0 .3-.06.58-.17.83-.12.26-.29.48-.51.68-.23.19-.51.34-.84.45-.34.11-.72.17-1.15.17-.48 0-.95-.09-1.41-.27-.46-.19-.86-.41-1.2-.68z" fill="#535353"/></g></svg>\" width=\"57\"/><h3>Cite this article</h3><p>Kellermann, W., Martin, 
R. & Ono, N. Signal processing and machine learning for speech and audio in acoustic sensor networks. <i>J AUDIO SPEECH MUSIC PROC.</i> <b>2023</b>, 54 (2023). https://doi.org/10.1186/s13636-023-00322-6</p><p>Download citation<svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" role=\"img\" width=\"16\"><use xlink:href=\"#icon-eds-i-download-medium\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"></use></svg></p><ul data-test=\"publication-history\"><li><p>Published<span>: </span><span><time datetime=\"2023-12-17\">17 December 2023</time></span></p></li><li><p>DOI</abbr><span>: </span><span>https://doi.org/10.1186/s13636-023-00322-6</span></p></li></ul><h3>Share this article</h3><p>Anyone you share the following link with will be able to read this content:</p><button data-track=\"click\" data-track-action=\"get shareable link\" data-track-external=\"\" data-track-label=\"button\" type=\"button\">Get shareable link</button><p>Sorry, a shareable link is not currently available for this article.</p><p data-track=\"click\" data-track-action=\"select share url\" data-track-label=\"button\"></p><button data-track=\"click\" data-track-action=\"copy share url\" data-track-external=\"\" data-track-label=\"button\" type=\"button\">Copy to clipboard</button><p> Provided by the Springer Nature SharedIt content-sharing initiative </p>","PeriodicalId":49202,"journal":{"name":"Eurasip Journal on Audio Speech and Music Processing","volume":"55 1","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2023-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Signal processing and machine learning for speech and audio in acoustic sensor networks\",\"authors\":\"Walter Kellermann, Rainer Martin, Nobutaka Ono\",\"doi\":\"10.1186/s13636-023-00322-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Nowadays, we are surrounded by a plethora of recording devices, including mobile phones, laptops, tablets, smartwatches, and camcorders, among others. However, conventional multichannel signal processing methods can usually not be applied to jointly process the signals recorded by multiple distributed devices because synchronous recording is essential. Thus, commercially available microphone array processing is currently limited to a single device where all microphones are mounted. The full exploitation of the spatial diversity offered by multiple audio devices without requiring wired networking is a major challenge, whose potential practical and commercial benefits prompted significant research efforts over the past decade.</p><p>Wireless acoustic sensor networks (WASNs) have become a new paradigm of acoustic sensing to overcome the limitations of individual devices. Along with wireless communications between microphone nodes and addressing new challenges in handling asynchronous channels, unknown microphone positions, and distributed computing, the WASN enables us to spatially distribute many recording devices. These may cover a wider area and utilize the nodes to form an extended microphone array. 
It promises to significantly improve the performance of various audio tasks such as speech enhancement, speech recognition, diarization, scene analysis, and anomalous acoustic event detection.</p><p>For this special issue, six papers were accepted which all address the above-mentioned fundamental challenges when using WASNs: First, the question of which sensors should be used for a specific signal processing task or extraction of a target source is addressed by the papers of Guenther et al. and Kindt et al. Given a set of sensors, a method for its synchronization on waveform level in dynamic scenarios is presented by Chinaev et al., and a localization method using both sensor signals and higher-level environmental information is discussed by Grinstein et al. Finally, robust speaker counting and source separation are addressed by Hsu and Bai and the task of removing specific interference from a single sensor signal is tackled by Kawamura et al.</p><p>The paper ‘Microphone utility estimation in acoustic sensor networks using single-channel signal features’ by Guenther et al. proposes a method to assess the utility of individual sensors of a WASN for coherence-based signal processing, e.g., beamforming or blind source separation, by using appropriate single-channel signal features as proxies for waveforms. Thereby, the need for transmitting waveforms for identifying suitable sensors for a synchronized cluster of sensors is avoided and the required amount of transmitted data can be reduced by several orders of magnitude. It is shown that both estimation-theoretic processing of single-channel features and deep learning-based identification of such features lead to measures of coherence in the feature space that reflect the suitability of distributed sensors for coherent processing.</p><p>In the paper ‘Robustness of ad hoc microphone clustering using speaker embeddings: Evaluation under realistic and challenging scenarios’ by Kindt et al., the robustness of speaker embeddings learned from multiple microphone signals as a feature for identifying useful clusters for extracting a speech signal is studied with respect to several key aspects: The dependency on the distance metrics for clustering, the observation interval required for establishing robust clusters determining the stationarity requirements for the acoustic scenario, and the performance for increasingly challenging acoustic scenarios and multiple speakers. For evaluation, a source separation task in realistic noisy and reverberant environments is investigated using several separation techniques applied for the resulting clusters. The proposed speaker embeddings are also compared to established MFCC-based features with respect to multiple state-of-the-art criteria for signal enhancement.</p><p>The paper ‘Online distributed waveform-synchronization for acoustic sensor networks with dynamic topology’ by Chinaev et al. is dedicated to network-wide sample-level synchronization relying on a previously published acoustic waveform-based sampling rate-offset estimation and compensation for pairs of nodes. Assuming that the WASN is organized as a directed minimum spanning tree (MST), the proposed network-wide synchronization scheme propagates from a root node over the entire network. Additionally, a network protocol is proposed that ensures the synchronization even if the network topology changes, e.g., because of node failure, broken transmission links, or newly appearing nodes. 
The efficacy of the method is demonstrated for dynamic scenes in a simulated dynamic acoustic scenario in an apartment with several rooms.</p><p>In their paper ‘Dual input neural networks for positional sound source localization’, Grinstein et al. combine multiple microphone signals from a distributed microphone array with information describing the acoustic properties of the scene for improved sound source localization. This information includes the positions of microphones, the room size, and the reverberation time. They present a Dual Input Neural Network (DI-NN) as a straightforward and efficient technique to construct a neural network capable of processing two distinct data types. It is tested in different scenarios, comparing it to alternative models such as a traditional least-squares method and a convolutional recurrent neural network. Although the proposed DI-NN is not retraining for each new scenario, the authors’ results demonstrate the superiority of the proposed DI-NN, achieving a substantial reduction in localization errors on synthetic data and a data set with real recordings.</p><p>The paper ‘Learning-based robust speaker counting and separation with the aid of spatial coherence’ by Hsu and Bai tackles speaker counting and speaker separation in noisy and reverberant environments. The authors combine traditional and learning-based methods to enhance these tasks and to achieve robustness to unseen room impulse responses (RIRs) and array configurations. They formulate a three-stage approach that entails the computation of a spatial coherence matrix (SCM) based on whitened relative transfer functions (wRTFs) as a spatial signature of directional sources. They evaluate the SCM and local coherence functions to detect the activity of the target speaker. Then, the eigenvalues of the SCM and the maximum similarity of inter-frame global activity distributions between two speakers are fed into a network for speaker counting (SCnet). To extract each independent speaker signal, a global and local activity-driven network (GLADnet) is employed. The authors demonstrate the benefits of the proposed approach on a data set of real meeting recordings.</p><p>The last paper, entitled ‘Acoustic object canceller: removing a known signal from monaural recording using blind synchronization’ by Kawamura et al., addresses the problem of removing undesired interference from individual microphone signals if a reference signal for the interference is available. The authors propose a method that treats the interference as an acoustic object whose signal is linearly filtered before arriving at the receiving microphone. 
Assuming that the signals of the acoustic object and the microphone exhibit different sampling rates, the signals are first synchronized, and then the frequency response of the propagation path from the object to the microphone is determined via maximum-likelihood estimation using the majorization-minimization algorithm, investigating and evaluating various statistical models for the desired signal that should be preserved.</p><p>We like to thank all authors for their excellent contributions to this special issue and hope that this collection will be a useful resource for research in WASNs in the years to come.</p><h3>Authors and Affiliations</h3><ol><li><p>Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany</p><p>Walter Kellermann</p></li><li><p>Ruhr-Universität Bochum, Bochum, Germany</p><p>Rainer Martin</p></li><li><p>Tokyo Metropolitan University, Hino-shi, Japan</p><p>Nobutaka Ono</p></li></ol><span>Authors</span><ol><li><span>Walter Kellermann</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li><li><span>Rainer Martin</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li><li><span>Nobutaka Ono</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li></ol><h3>Corresponding author</h3><p>Correspondence to Walter Kellermann.</p><h3>Publisher's Note</h3><p>Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p><p><b>Open Access</b> This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.</p>\\n<p>Reprints and Permissions</p><img alt=\\\"Check for updates. 
Verify currency and authenticity via CrossMark\\\" height=\\\"81\\\" src=\\\"data:image/svg+xml;base64,<svg height="81" width="57" xmlns="http://www.w3.org/2000/svg"><g fill="none" fill-rule="evenodd"><path d="m17.35 35.45 21.3-14.2v-17.03h-21.3" fill="#989898"/><path d="m38.65 35.45-21.3-14.2v-17.03h21.3" fill="#747474"/><path d="m28 .5c-12.98 0-23.5 10.52-23.5 23.5s10.52 23.5 23.5 23.5 23.5-10.52 23.5-23.5c0-6.23-2.48-12.21-6.88-16.62-4.41-4.4-10.39-6.88-16.62-6.88zm0 41.25c-9.8 0-17.75-7.95-17.75-17.75s7.95-17.75 17.75-17.75 17.75 7.95 17.75 17.75c0 4.71-1.87 9.22-5.2 12.55s-7.84 5.2-12.55 5.2z" fill="#535353"/><path d="m41 36c-5.81 6.23-15.23 7.45-22.43 2.9-7.21-4.55-10.16-13.57-7.03-21.5l-4.92-3.11c-4.95 10.7-1.19 23.42 8.78 29.71 9.97 6.3 23.07 4.22 30.6-4.86z" fill="#9c9c9c"/><path d="m.2 58.45c0-.75.11-1.42.33-2.01s.52-1.09.91-1.5c.38-.41.83-.73 1.34-.94.51-.22 1.06-.32 1.65-.32.56 0 1.06.11 1.51.35.44.23.81.5 1.1.81l-.91 1.01c-.24-.24-.49-.42-.75-.56-.27-.13-.58-.2-.93-.2-.39 0-.73.08-1.05.23-.31.16-.58.37-.81.66-.23.28-.41.63-.53 1.04-.13.41-.19.88-.19 1.39 0 1.04.23 1.86.68 2.46.45.59 1.06.88 1.84.88.41 0 .77-.07 1.07-.23s.59-.39.85-.68l.91 1c-.38.43-.8.76-1.28.99-.47.22-1 .34-1.58.34-.59 0-1.13-.1-1.64-.31-.5-.2-.94-.51-1.31-.91-.38-.4-.67-.9-.88-1.48-.22-.59-.33-1.26-.33-2.02zm8.4-5.33h1.61v2.54l-.05 1.33c.29-.27.61-.51.96-.72s.76-.31 1.24-.31c.73 0 1.27.23 1.61.71.33.47.5 1.14.5 2.02v4.31h-1.61v-4.1c0-.57-.08-.97-.25-1.21-.17-.23-.45-.35-.83-.35-.3 0-.56.08-.79.22-.23.15-.49.36-.78.64v4.8h-1.61zm7.37 6.45c0-.56.09-1.06.26-1.51.18-.45.42-.83.71-1.14.29-.3.63-.54 1.01-.71.39-.17.78-.25 1.18-.25.47 0 .88.08 1.23.24.36.16.65.38.89.67s.42.63.54 1.03c.12.41.18.84.18 1.32 0 .32-.02.57-.07.76h-4.36c.07.62.29 1.1.65 1.44.36.33.82.5 1.38.5.29 0 .57-.04.83-.13s.51-.21.76-.37l.55 1.01c-.33.21-.69.39-1.09.53-.41.14-.83.21-1.26.21-.48 0-.92-.08-1.34-.25-.41-.16-.76-.4-1.07-.7-.31-.31-.55-.69-.72-1.13-.18-.44-.26-.95-.26-1.52zm4.6-.62c0-.55-.11-.98-.34-1.28-.23-.31-.58-.47-1.06-.47-.41 0-.77.15-1.07.45-.31.29-.5.73-.58 1.3zm2.5.62c0-.57.09-1.08.28-1.53.18-.44.43-.82.75-1.13s.69-.54 1.1-.71c.42-.16.85-.24 1.31-.24.45 0 .84.08 1.17.23s.61.34.85.57l-.77 1.02c-.19-.16-.38-.28-.56-.37-.19-.09-.39-.14-.61-.14-.56 0-1.01.21-1.35.63-.35.41-.52.97-.52 1.67 0 .69.17 1.24.51 1.66.34.41.78.62 1.32.62.28 0 .54-.06.78-.17.24-.12.45-.26.64-.42l.67 1.03c-.33.29-.69.51-1.08.65-.39.15-.78.23-1.18.23-.46 0-.9-.08-1.31-.24-.4-.16-.75-.39-1.05-.7s-.53-.69-.7-1.13c-.17-.45-.25-.96-.25-1.53zm6.91-6.45h1.58v6.17h.05l2.54-3.16h1.77l-2.35 2.8 2.59 4.07h-1.75l-1.77-2.98-1.08 1.23v1.75h-1.58zm13.69 1.27c-.25-.11-.5-.17-.75-.17-.58 0-.87.39-.87 1.16v.75h1.34v1.27h-1.34v5.6h-1.61v-5.6h-.92v-1.2l.92-.07v-.72c0-.35.04-.68.13-.98.08-.31.21-.57.4-.79s.42-.39.71-.51c.28-.12.63-.18 1.04-.18.24 0 .48.02.69.07.22.05.41.1.57.17zm.48 5.18c0-.57.09-1.08.27-1.53.17-.44.41-.82.72-1.13.3-.31.65-.54 1.04-.71.39-.16.8-.24 1.23-.24s.84.08 1.24.24c.4.17.74.4 1.04.71s.54.69.72 1.13c.19.45.28.96.28 1.53s-.09 1.08-.28 1.53c-.18.44-.42.82-.72 1.13s-.64.54-1.04.7-.81.24-1.24.24-.84-.08-1.23-.24-.74-.39-1.04-.7c-.31-.31-.55-.69-.72-1.13-.18-.45-.27-.96-.27-1.53zm1.65 0c0 .69.14 1.24.43 1.66.28.41.68.62 1.18.62.51 0 .9-.21 1.19-.62.29-.42.44-.97.44-1.66 0-.7-.15-1.26-.44-1.67-.29-.42-.68-.63-1.19-.63-.5 0-.9.21-1.18.63-.29.41-.43.97-.43 1.67zm6.48-3.44h1.33l.12 1.21h.05c.24-.44.54-.79.88-1.02.35-.24.7-.36 1.07-.36.32 0 .59.05.78.14l-.28 1.4-.33-.09c-.11-.01-.23-.02-.38-.02-.27 0-.56.1-.86.31s-.55.58-.77 1.1v4.2h-1.61zm-47.87 15h1.61v4.1c0 
.57.08.97.25 1.2.17.24.44.35.81.35.3 0 .57-.07.8-.22.22-.15.47-.39.73-.73v-4.7h1.61v6.87h-1.32l-.12-1.01h-.04c-.3.36-.63.64-.98.86-.35.21-.76.32-1.24.32-.73 0-1.27-.24-1.61-.71-.33-.47-.5-1.14-.5-2.02zm9.46 7.43v2.16h-1.61v-9.59h1.33l.12.72h.05c.29-.24.61-.45.97-.63.35-.17.72-.26 1.1-.26.43 0 .81.08 1.15.24.33.17.61.4.84.71.24.31.41.68.53 1.11.13.42.19.91.19 1.44 0 .59-.09 1.11-.25 1.57-.16.47-.38.85-.65 1.16-.27.32-.58.56-.94.73-.35.16-.72.25-1.1.25-.3 0-.6-.07-.9-.2s-.59-.31-.87-.56zm0-2.3c.26.22.5.37.73.45.24.09.46.13.66.13.46 0 .84-.2 1.15-.6.31-.39.46-.98.46-1.77 0-.69-.12-1.22-.35-1.61-.23-.38-.61-.57-1.13-.57-.49 0-.99.26-1.52.77zm5.87-1.69c0-.56.08-1.06.25-1.51.16-.45.37-.83.65-1.14.27-.3.58-.54.93-.71s.71-.25 1.08-.25c.39 0 .73.07 1 .2.27.14.54.32.81.55l-.06-1.1v-2.49h1.61v9.88h-1.33l-.11-.74h-.06c-.25.25-.54.46-.88.64-.33.18-.69.27-1.06.27-.87 0-1.56-.32-2.07-.95s-.76-1.51-.76-2.65zm1.67-.01c0 .74.13 1.31.4 1.7.26.38.65.58 1.15.58.51 0 .99-.26 1.44-.77v-3.21c-.24-.21-.48-.36-.7-.45-.23-.08-.46-.12-.7-.12-.45 0-.82.19-1.13.59-.31.39-.46.95-.46 1.68zm6.35 1.59c0-.73.32-1.3.97-1.71.64-.4 1.67-.68 3.08-.84 0-.17-.02-.34-.07-.51-.05-.16-.12-.3-.22-.43s-.22-.22-.38-.3c-.15-.06-.34-.1-.58-.1-.34 0-.68.07-1 .2s-.63.29-.93.47l-.59-1.08c.39-.24.81-.45 1.28-.63.47-.17.99-.26 1.54-.26.86 0 1.51.25 1.93.76s.63 1.25.63 2.21v4.07h-1.32l-.12-.76h-.05c-.3.27-.63.48-.98.66s-.73.27-1.14.27c-.61 0-1.1-.19-1.48-.56-.38-.36-.57-.85-.57-1.46zm1.57-.12c0 .3.09.53.27.67.19.14.42.21.71.21.28 0 .54-.07.77-.2s.48-.31.73-.56v-1.54c-.47.06-.86.13-1.18.23-.31.09-.57.19-.76.31s-.33.25-.41.4c-.09.15-.13.31-.13.48zm6.29-3.63h-.98v-1.2l1.06-.07.2-1.88h1.34v1.88h1.75v1.27h-1.75v3.28c0 .8.32 1.2.97 1.2.12 0 .24-.01.37-.04.12-.03.24-.07.34-.11l.28 1.19c-.19.06-.4.12-.64.17-.23.05-.49.08-.76.08-.4 0-.74-.06-1.02-.18-.27-.13-.49-.3-.67-.52-.17-.21-.3-.48-.37-.78-.08-.3-.12-.64-.12-1.01zm4.36 2.17c0-.56.09-1.06.27-1.51s.41-.83.71-1.14c.29-.3.63-.54 1.01-.71.39-.17.78-.25 1.18-.25.47 0 .88.08 1.23.24.36.16.65.38.89.67s.42.63.54 1.03c.12.41.18.84.18 1.32 0 .32-.02.57-.07.76h-4.37c.08.62.29 1.1.65 1.44.36.33.82.5 1.38.5.3 0 .58-.04.84-.13.25-.09.51-.21.76-.37l.54 1.01c-.32.21-.69.39-1.09.53s-.82.21-1.26.21c-.47 0-.92-.08-1.33-.25-.41-.16-.77-.4-1.08-.7-.3-.31-.54-.69-.72-1.13-.17-.44-.26-.95-.26-1.52zm4.61-.62c0-.55-.11-.98-.34-1.28-.23-.31-.58-.47-1.06-.47-.41 0-.77.15-1.08.45-.31.29-.5.73-.57 1.3zm3.01 2.23c.31.24.61.43.92.57.3.13.63.2.98.2.38 0 .65-.08.83-.23s.27-.35.27-.6c0-.14-.05-.26-.13-.37-.08-.1-.2-.2-.34-.28-.14-.09-.29-.16-.47-.23l-.53-.22c-.23-.09-.46-.18-.69-.3-.23-.11-.44-.24-.62-.4s-.33-.35-.45-.55c-.12-.21-.18-.46-.18-.75 0-.61.23-1.1.68-1.49.44-.38 1.06-.57 1.83-.57.48 0 .91.08 1.29.25s.71.36.99.57l-.74.98c-.24-.17-.49-.32-.73-.42-.25-.11-.51-.16-.78-.16-.35 0-.6.07-.76.21-.17.15-.25.33-.25.54 0 .14.04.26.12.36s.18.18.31.26c.14.07.29.14.46.21l.54.19c.23.09.47.18.7.29s.44.24.64.4c.19.16.34.35.46.58.11.23.17.5.17.82 0 .3-.06.58-.17.83-.12.26-.29.48-.51.68-.23.19-.51.34-.84.45-.34.11-.72.17-1.15.17-.48 0-.95-.09-1.41-.27-.46-.19-.86-.41-1.2-.68z" fill="#535353"/></g></svg>\\\" width=\\\"57\\\"/><h3>Cite this article</h3><p>Kellermann, W., Martin, R. & Ono, N. Signal processing and machine learning for speech and audio in acoustic sensor networks. <i>J AUDIO SPEECH MUSIC PROC.</i> <b>2023</b>, 54 (2023). 
https://doi.org/10.1186/s13636-023-00322-6</p><p>Download citation<svg aria-hidden=\\\"true\\\" focusable=\\\"false\\\" height=\\\"16\\\" role=\\\"img\\\" width=\\\"16\\\"><use xlink:href=\\\"#icon-eds-i-download-medium\\\" xmlns:xlink=\\\"http://www.w3.org/1999/xlink\\\"></use></svg></p><ul data-test=\\\"publication-history\\\"><li><p>Published<span>: </span><span><time datetime=\\\"2023-12-17\\\">17 December 2023</time></span></p></li><li><p>DOI</abbr><span>: </span><span>https://doi.org/10.1186/s13636-023-00322-6</span></p></li></ul><h3>Share this article</h3><p>Anyone you share the following link with will be able to read this content:</p><button data-track=\\\"click\\\" data-track-action=\\\"get shareable link\\\" data-track-external=\\\"\\\" data-track-label=\\\"button\\\" type=\\\"button\\\">Get shareable link</button><p>Sorry, a shareable link is not currently available for this article.</p><p data-track=\\\"click\\\" data-track-action=\\\"select share url\\\" data-track-label=\\\"button\\\"></p><button data-track=\\\"click\\\" data-track-action=\\\"copy share url\\\" data-track-external=\\\"\\\" data-track-label=\\\"button\\\" type=\\\"button\\\">Copy to clipboard</button><p> Provided by the Springer Nature SharedIt content-sharing initiative </p>\",\"PeriodicalId\":49202,\"journal\":{\"name\":\"Eurasip Journal on Audio Speech and Music Processing\",\"volume\":\"55 1\",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2023-12-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Eurasip Journal on Audio Speech and Music Processing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1186/s13636-023-00322-6\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ACOUSTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Eurasip Journal on Audio Speech and Music Processing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1186/s13636-023-00322-6","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ACOUSTICS","Score":null,"Total":0}
引用次数: 0
摘要
将来自分布式麦克风阵列的多个麦克风信号与描述场景声学特性的信息相结合,以改进声源定位。这些信息包括麦克风的位置、房间大小和混响时间。他们提出的双输入神经网络(DI-NN)是一种简单高效的技术,用于构建能够处理两种不同数据类型的神经网络。他们在不同的场景中对其进行了测试,并将其与传统的最小二乘法和卷积递归神经网络等其他模型进行了比较。虽然拟议的 DI-NN 并不针对每个新场景进行再训练,但作者的研究结果证明了拟议的 DI-NN 的优越性,在合成数据和真实录音数据集上实现了定位误差的大幅降低。作者将传统方法和基于学习的方法相结合,以加强这些任务,并实现对未知房间脉冲响应 (RIR) 和阵列配置的鲁棒性。他们提出了一种三阶段方法,该方法需要计算空间相干矩阵(SCM),其基础是作为定向声源空间特征的白化相对传递函数(wRTF)。他们通过评估空间相干矩阵和局部相干函数来检测目标扬声器的活动。然后,将 SCM 的特征值和两个扬声器之间帧间全局活动分布的最大相似度输入扬声器计数网络(SCnet)。为了提取每个独立的说话者信号,采用了全局和局部活动驱动网络(GLADnet)。Kawamura 等人撰写的最后一篇论文题为 "Acoustic object canceller: removing a known signal from monaural recording using blind synchronization"(声学对象消除器:利用盲同步从单声道录音中消除已知信号),解决了在有干扰参考信号的情况下从单个麦克风信号中消除不期望干扰的问题。作者提出的方法是将干扰视为一个声学对象,其信号在到达接收麦克风之前经过线性滤波。假定声学物体和麦克风的信号表现出不同的采样率,首先对信号进行同步,然后使用大化最小化算法通过最大似然估计确定从物体到麦克风传播路径的频率响应,研究和评估应保留的理想信号的各种统计模型。作者和单位德国埃尔兰根-纽伦堡弗里德里希-亚历山大大学沃尔特-凯勒曼德国波鸿鲁尔大学雷纳-马丁日本日野市东京都立大学大野信孝日本Nobutaka Ono作者Walter Kellermann查看作者发表的文章您也可以在PubMed Google Scholar中搜索该作者Rainer Martin查看作者发表的文章您也可以在PubMed Google Scholar中搜索该作者Nobutaka Ono查看作者发表的文章您也可以在PubMed Google Scholar中搜索该作者通讯作者给Walter Kellermann的回信。开放获取本文采用知识共享署名 4.0 国际许可协议进行许可,该协议允许以任何媒介或格式使用、共享、改编、分发和复制,只要您适当注明原作者和来源,提供知识共享许可协议的链接,并说明是否进行了修改。本文中的图片或其他第三方材料均包含在文章的知识共享许可协议中,除非在材料的署名栏中另有说明。如果材料未包含在文章的知识共享许可协议中,且您打算使用的材料不符合法律规定或超出许可使用范围,您需要直接从版权所有者处获得许可。要查看该许可的副本,请访问 http://creativecommons.org/licenses/by/4.0/.Reprints and PermissionsCite this articleKellermann, W., Martin, R. & Ono, N. Signal processing and machine learning for speech and audio in acoustic sensor networks.J audio speech music proc.2023, 54 (2023). https://doi.org/10.1186/s13636-023-00322-6Download citationPublished: 17 December 2023DOI: https://doi.org/10.
Signal processing and machine learning for speech and audio in acoustic sensor networks
Nowadays, we are surrounded by a plethora of recording devices, including mobile phones, laptops, tablets, smartwatches, and camcorders, among others. However, conventional multichannel signal processing methods usually cannot be applied to jointly process the signals recorded by multiple distributed devices because they require synchronously recorded signals. Thus, commercially available microphone array processing is currently limited to a single device on which all microphones are mounted. Fully exploiting the spatial diversity offered by multiple audio devices without requiring wired networking is a major challenge, whose potential practical and commercial benefits have prompted significant research efforts over the past decade.
Wireless acoustic sensor networks (WASNs) have emerged as a new paradigm of acoustic sensing that overcomes the limitations of individual devices. By relying on wireless communication between microphone nodes and by addressing new challenges in handling asynchronous channels, unknown microphone positions, and distributed computing, a WASN allows many spatially distributed recording devices to cover a wide area and to act jointly as an extended microphone array. This promises to significantly improve the performance of various audio tasks such as speech enhancement, speech recognition, diarization, scene analysis, and anomalous acoustic event detection.
For this special issue, six papers were accepted, all of which address the above-mentioned fundamental challenges of using WASNs: First, the papers by Guenther et al. and Kindt et al. address the question of which sensors should be used for a specific signal processing task or for the extraction of a target source. Given a set of sensors, Chinaev et al. present a method for their synchronization at the waveform level in dynamic scenarios, and Grinstein et al. discuss a localization method that uses both sensor signals and higher-level environmental information. Finally, Hsu and Bai address robust speaker counting and source separation, and Kawamura et al. tackle the task of removing a specific interference from a single sensor signal.
The paper ‘Microphone utility estimation in acoustic sensor networks using single-channel signal features’ by Guenther et al. proposes a method to assess the utility of individual sensors of a WASN for coherence-based signal processing, e.g., beamforming or blind source separation, by using appropriate single-channel signal features as proxies for the waveforms. This avoids transmitting waveforms merely to identify suitable sensors for a synchronized sensor cluster, and the required amount of transmitted data can be reduced by several orders of magnitude. It is shown that both estimation-theoretic processing of single-channel features and deep learning-based identification of such features lead to measures of coherence in the feature space that reflect the suitability of distributed sensors for coherent processing.
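To make the idea of feature-space coherence more concrete, the following minimal Python sketch ranks candidate nodes by correlating frame-wise log-energy envelopes, one possible single-channel feature, against those of a reference node. The feature choice, the correlation-based score, and all signal values are illustrative assumptions and do not reproduce the estimators proposed by Guenther et al.

```python
# Hypothetical sketch: rank WASN nodes by a feature-space "coherence" proxy.
# The feature (frame-wise log-energy envelope) and the correlation-based
# utility score are illustrative choices, not the measures of Guenther et al.
import numpy as np

def log_energy_envelope(x, frame_len=1024, hop=512):
    """Frame-wise log energy of a single-channel signal."""
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.lib.stride_tricks.sliding_window_view(x, frame_len)[::hop][:n_frames]
    return np.log(np.sum(frames ** 2, axis=1) + 1e-12)

def utility_scores(reference, candidates):
    """Correlate each candidate's envelope with the reference node's envelope."""
    ref_feat = log_energy_envelope(reference)
    scores = []
    for x in candidates:
        feat = log_energy_envelope(x)
        n = min(len(ref_feat), len(feat))
        r = np.corrcoef(ref_feat[:n], feat[:n])[0, 1]  # feature-space coherence proxy
        scores.append(r)
    return np.array(scores)

# Example: pick the candidate node most useful for coherent processing.
rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)
cands = [ref + 0.1 * rng.standard_normal(16000),   # node observing a related signal
         rng.standard_normal(16000)]               # unrelated node
print(np.argsort(utility_scores(ref, cands))[::-1])
```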
In the paper ‘Robustness of ad hoc microphone clustering using speaker embeddings: Evaluation under realistic and challenging scenarios’ by Kindt et al., the robustness of speaker embeddings learned from multiple microphone signals, used as features for identifying clusters that are useful for extracting a speech signal, is studied with respect to several key aspects: the dependency on the distance metric used for clustering, the observation interval required to establish robust clusters (which determines the stationarity requirements for the acoustic scene), and the performance in increasingly challenging acoustic scenarios with multiple speakers. For the evaluation, a source separation task in realistic noisy and reverberant environments is investigated, with several separation techniques applied to the resulting clusters. The proposed speaker embeddings are also compared to established MFCC-based features with respect to multiple state-of-the-art criteria for signal enhancement.
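As a rough illustration of embedding-based microphone clustering, the sketch below groups microphones by the cosine distance between per-microphone speaker embeddings using standard hierarchical clustering. The synthetic embeddings, the average-linkage choice, and the distance threshold are assumptions for illustration only and do not correspond to the embeddings or parameters evaluated by Kindt et al.

```python
# Hypothetical sketch: cluster microphones of an ad hoc array by the dominant
# speaker they capture, using per-microphone speaker embeddings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Pretend 6 microphones yield 128-dim embeddings: mics 0-2 near speaker A,
# mics 3-5 near speaker B (noisy copies of two prototype vectors).
proto = rng.standard_normal((2, 128))
emb = np.vstack([proto[0] + 0.3 * rng.standard_normal((3, 128)),
                 proto[1] + 0.3 * rng.standard_normal((3, 128))])
emb /= np.linalg.norm(emb, axis=1, keepdims=True)      # unit-norm embeddings

# Average-linkage hierarchical clustering on cosine distance.
Z = linkage(emb, method='average', metric='cosine')
clusters = fcluster(Z, t=0.5, criterion='distance')    # threshold is a free parameter
print(clusters)   # e.g. [1 1 1 2 2 2]: one cluster per speaker
```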
The paper ‘Online distributed waveform-synchronization for acoustic sensor networks with dynamic topology’ by Chinaev et al. is dedicated to network-wide sample-level synchronization, relying on a previously published acoustic waveform-based sampling-rate offset estimation and compensation for pairs of nodes. Assuming that the WASN is organized as a directed minimum spanning tree (MST), the proposed synchronization scheme propagates from a root node over the entire network. Additionally, a network protocol is proposed that maintains synchronization even if the network topology changes, e.g., because of node failures, broken transmission links, or newly appearing nodes. The efficacy of the method is demonstrated for a simulated dynamic acoustic scene in an apartment with several rooms.
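The tree-based propagation underlying such a scheme can be illustrated with a short sketch: if every node knows its sampling-rate offset (SRO) relative to its parent in the spanning tree, the offsets relative to the root accumulate along the tree path. The toy tree and ppm values below are assumptions for illustration; the actual SRO estimation, compensation, and network protocol are those described by Chinaev et al.

```python
# Hypothetical sketch: propagate sampling-rate offsets (SROs) from a root node
# over a directed spanning tree; offsets add up along each tree path.
from collections import deque

# parent -> list of (child, SRO of child relative to parent, in ppm); toy values
tree = {
    'root': [('A', +3.0), ('B', -1.5)],
    'A':    [('C', +0.5)],
    'B':    [],
    'C':    [],
}

def accumulate_sro(tree, root='root'):
    """Breadth-first propagation of SROs from the root over the whole network."""
    sro = {root: 0.0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child, rel_sro in tree.get(node, []):
            sro[child] = sro[node] + rel_sro   # offsets accumulate along the path
            queue.append(child)
    return sro

print(accumulate_sro(tree))   # {'root': 0.0, 'A': 3.0, 'B': -1.5, 'C': 3.5}
```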
In their paper ‘Dual input neural networks for positional sound source localization’, Grinstein et al. combine multiple microphone signals from a distributed microphone array with information describing the acoustic properties of the scene for improved sound source localization. This information includes the positions of the microphones, the room size, and the reverberation time. They present the Dual Input Neural Network (DI-NN) as a straightforward and efficient technique for constructing a neural network capable of processing two distinct data types. It is tested in different scenarios and compared to alternative models such as a traditional least-squares method and a convolutional recurrent neural network. Although the proposed DI-NN is not retrained for each new scenario, the authors’ results demonstrate its superiority, achieving a substantial reduction in localization errors on synthetic data as well as on a data set with real recordings.
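A dual-input architecture of this kind can be sketched in a few lines: one branch encodes the audio features, a second branch encodes the scene metadata, and the two representations are fused before the source position is regressed. The layer sizes, feature dimensions, and the simple fully connected branches below are illustrative assumptions, not the published DI-NN architecture.

```python
# Hypothetical two-branch network in the spirit of a dual-input model:
# audio features and scene metadata are encoded separately, then fused.
import torch
import torch.nn as nn

class DualInputNet(nn.Module):
    def __init__(self, n_audio_feat=256, n_meta_feat=16, hidden=128):
        super().__init__()
        self.audio_branch = nn.Sequential(nn.Linear(n_audio_feat, hidden), nn.ReLU())
        self.meta_branch = nn.Sequential(nn.Linear(n_meta_feat, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))   # (x, y) source position

    def forward(self, audio_feat, meta_feat):
        fused = torch.cat([self.audio_branch(audio_feat),
                           self.meta_branch(meta_feat)], dim=-1)
        return self.head(fused)

net = DualInputNet()
audio = torch.randn(8, 256)    # batch of audio feature vectors (assumed shape)
meta = torch.randn(8, 16)      # batch of scene descriptors (positions, T60, ...)
print(net(audio, meta).shape)  # torch.Size([8, 2])
```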
The paper ‘Learning-based robust speaker counting and separation with the aid of spatial coherence’ by Hsu and Bai tackles speaker counting and speaker separation in noisy and reverberant environments. The authors combine traditional and learning-based methods to enhance these tasks and to achieve robustness to unseen room impulse responses (RIRs) and array configurations. They formulate a three-stage approach that entails the computation of a spatial coherence matrix (SCM) based on whitened relative transfer functions (wRTFs) as a spatial signature of directional sources. They evaluate the SCM and local coherence functions to detect the activity of the target speaker. Then, the eigenvalues of the SCM and the maximum similarity of inter-frame global activity distributions between two speakers are fed into a network for speaker counting (SCnet). To extract each independent speaker signal, a global and local activity-driven network (GLADnet) is employed. The authors demonstrate the benefits of the proposed approach on a data set of real meeting recordings.
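The first stage of such a pipeline can be illustrated as follows: per-frame wRTF-like vectors are averaged into a spatial coherence matrix whose dominant eigenvalues indicate the number of directional sources. The synthetic wRTFs and the eigenvalue inspection below are illustrative assumptions; in the paper, the counting decision is made by the learned SCnet rather than by simply reading off the eigenvalue spectrum.

```python
# Hypothetical sketch: build a spatial coherence matrix (SCM) from whitened
# relative transfer function (wRTF) vectors; its dominant eigenvalues hint at
# the number of directional sources.
import numpy as np

rng = np.random.default_rng(2)
n_mics, n_frames = 6, 400

# Two directional sources: each frame's wRTF vector is a noisy copy of one of
# two fixed steering-like vectors (unit norm).
steer = rng.standard_normal((2, n_mics)) + 1j * rng.standard_normal((2, n_mics))
steer /= np.linalg.norm(steer, axis=1, keepdims=True)
active = rng.integers(0, 2, n_frames)                  # which source dominates a frame
wrtf = steer[active] + 0.1 * (rng.standard_normal((n_frames, n_mics))
                              + 1j * rng.standard_normal((n_frames, n_mics)))
wrtf /= np.linalg.norm(wrtf, axis=1, keepdims=True)

# SCM: average outer product of the per-frame wRTF vectors.
scm = (wrtf.T @ wrtf.conj()) / n_frames
eigvals = np.sort(np.linalg.eigvalsh(scm))[::-1]
print(np.round(eigvals, 3))   # two dominant eigenvalues -> two speakers
```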
The last paper, entitled ‘Acoustic object canceller: removing a known signal from monaural recording using blind synchronization’ by Kawamura et al., addresses the problem of removing undesired interference from individual microphone signals when a reference signal for the interference is available. The authors propose a method that treats the interference as an acoustic object whose signal is linearly filtered before arriving at the receiving microphone. Since the acoustic object signal and the microphone signal generally exhibit different sampling rates, the signals are first synchronized; then the frequency response of the propagation path from the object to the microphone is determined via maximum-likelihood estimation using a majorization-minimization algorithm. Various statistical models for the desired signal that should be preserved are investigated and evaluated.
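The core cancellation step can be illustrated with a simplified sketch in which, after synchronization, the path response is estimated per frequency by least squares and the filtered reference is subtracted from the microphone signal. The least-squares estimator is a stand-in for the maximum-likelihood/majorization-minimization procedure of Kawamura et al. (to which it corresponds only under a simple Gaussian model of the desired signal), and all signals below are synthetic assumptions.

```python
# Hypothetical sketch: estimate the propagation-path response of a known
# interferer per frequency and subtract the filtered reference from the mic.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(3)
desired = rng.standard_normal(fs)                     # signal to preserve
reference = rng.standard_normal(fs)                   # known object signal
h = np.array([0.6, 0.3, 0.1])                         # unknown propagation path
mic = desired + np.convolve(reference, h)[:fs]        # observed mixture

f, t, R = stft(reference, fs=fs, nperseg=512)
_, _, Y = stft(mic, fs=fs, nperseg=512)

# Per-frequency least-squares estimate of the path response H(f).
H = np.sum(Y * R.conj(), axis=1) / (np.sum(np.abs(R) ** 2, axis=1) + 1e-12)
_, cleaned = istft(Y - H[:, None] * R, fs=fs, nperseg=512)

err_before = np.mean((mic - desired) ** 2)
err_after = np.mean((cleaned[:fs] - desired) ** 2)
print(err_before, err_after)   # residual interference should shrink
```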
We would like to thank all authors for their excellent contributions to this special issue and hope that this collection will be a useful resource for research on WASNs in the years to come.
Authors and Affiliations
Walter Kellermann, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Rainer Martin, Ruhr-Universität Bochum, Bochum, Germany
Nobutaka Ono, Tokyo Metropolitan University, Hino-shi, Japan
Corresponding author
Correspondence to Walter Kellermann.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Kellermann, W., Martin, R. & Ono, N. Signal processing and machine learning for speech and audio in acoustic sensor networks. J AUDIO SPEECH MUSIC PROC. 2023, 54 (2023). https://doi.org/10.1186/s13636-023-00322-6
Published: 17 December 2023
DOI: https://doi.org/10.1186/s13636-023-00322-6
Journal description:
The aim of “EURASIP Journal on Audio, Speech, and Music Processing” is to bring together researchers, scientists, and engineers working on the theory and applications of the processing of various audio signals, with a specific focus on speech and music. EURASIP Journal on Audio, Speech, and Music Processing is an interdisciplinary journal for the dissemination of all basic and applied aspects of speech communication and audio processes.