
Latest publications in IEEE open journal of signal processing

The Drone-vs-Bird Detection Grand Challenge at ICASSP 2023: A Review of Methods and Results
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-19 DOI: 10.1109/OJSP.2024.3379073
Angelo Coluccia;Alessio Fascista;Lars Sommer;Arne Schumann;Anastasios Dimou;Dimitrios Zarpalas
This paper presents the 6th edition of the “Drone-vs-Bird” detection challenge, jointly organized with the WOSDETC workshop within the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023. The main objective of the challenge is to advance the current state-of-the-art in detecting the presence of one or more Unmanned Aerial Vehicles (UAVs) in real video scenes, while facing challenging conditions such as moving cameras, disturbing environmental factors, and the presence of birds flying in the foreground. For this purpose, a video dataset was provided for training the proposed solutions, and a separate test dataset was released a few days before the challenge deadline to assess their performance. The dataset has continually expanded over consecutive installments of the Drone-vs-Bird challenge and remains openly available to the research community, for non-commercial purposes. The challenge attracted novel signal processing solutions, mainly based on deep learning algorithms. The paper illustrates the results achieved by the teams that successfully participated in the 2023 challenge, offering a concise overview of the state-of-the-art in the field of drone detection using video signal processing. Additionally, the paper provides valuable insights into potential directions for future research, building upon the main pros and limitations of the solutions presented by the participating teams.
IEEE open journal of signal processing, vol. 5, pp. 766-779. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10475518
Citations: 0
Decoding Envelope and Frequency-Following EEG Responses to Continuous Speech Using Deep Neural Networks
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-18 DOI: 10.1109/OJSP.2024.3378593
Mike D. Thornton;Danilo P. Mandic;Tobias J. Reichenbach
The electroencephalogram (EEG) offers a non-invasive means by which a listener's auditory system may be monitored during continuous speech perception. Reliable auditory-EEG decoders could facilitate the objective diagnosis of hearing disorders, or find applications in cognitively-steered hearing aids. Previously, we developed decoders for the ICASSP Auditory EEG Signal Processing Grand Challenge (SPGC). These decoders placed first in the match-mismatch task: given a short temporal segment of EEG recordings, and two candidate speech segments, the task is to identify which of the two speech segments is temporally aligned, or matched, with the EEG segment. The decoders made use of cortical responses to the speech envelope, as well as speech-related frequency-following responses, to relate the EEG recordings to the speech stimuli. Here we comprehensively document the methods by which the decoders were developed. We extend our previous analysis by exploring the association between speaker characteristics (pitch and sex) and classification accuracy, and provide a full statistical analysis of the final performance of the decoders as evaluated on a heldout portion of the dataset. Finally, the generalisation capabilities of the decoders are characterised, by evaluating them using an entirely different dataset which contains EEG recorded under a variety of speech-listening conditions. The results show that the match-mismatch decoders achieve accurate and robust classification accuracies, and they can even serve as auditory attention decoders without additional training.
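The match-mismatch decision rule described above can be sketched in a few lines. This is a minimal illustration only: it stands in a plain Pearson-correlation comparison for the paper's trained deep neural decoders, and all function and variable names are hypothetical.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two equal-length 1-D signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_mismatch(decoded_feature, speech_a, speech_b):
    """Return 0 if speech_a is the segment matched to the EEG, else 1.

    `decoded_feature` stands in for an envelope-like feature decoded from
    the EEG segment; the candidates are two speech-envelope segments of
    the same length.
    """
    return 0 if pearson(decoded_feature, speech_a) >= pearson(decoded_feature, speech_b) else 1
```

With a synthetic "decoded" feature that is a noisy copy of one candidate, the rule reliably picks the matched segment, which is exactly the binary classification the challenge scores.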
IEEE open journal of signal processing, vol. 5, pp. 700-716. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474145
Citations: 0
Sea-Wave: Speech Envelope Reconstruction From Auditory EEG With an Adapted WaveNet
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-18 DOI: 10.1109/OJSP.2024.3378594
Liuyin Yang;Bob Van Dyck;Marc M. Van Hulle
Speech envelope reconstruction from EEG has been shown to bear clinical potential for assessing speech intelligibility. Linear models are commonly used to this end, but they have recently been outperformed in reconstruction scores by non-linear deep neural networks, particularly by dilated convolutional networks. This study presents Sea-Wave, a WaveNet-based architecture for speech envelope reconstruction that outperforms the state-of-the-art model. Our model is an extension of our submission for the Auditory EEG Challenge of the ICASSP Signal Processing Grand Challenge 2023. We improve upon our prior work by evaluating model components and hyperparameters through an ablation study and hyperparameter search, respectively. Our best subject-independent model achieves a Pearson correlation of 22.58% on seen and 11.58% on unseen subjects. After subject-specific fine-tuning, we find an average relative improvement of 30% for the seen subjects and a Pearson correlation of 56.57% for the best seen subject. Finally, we explore several model visualizations to obtain a better understanding of the model, the differences across subjects and the EEG features that relate to auditory perception.
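What makes the dilated convolutional architectures credited above suitable for this task is their exponentially growing receptive field. The sketch below computes it for a stack of 1-D dilated causal convolutions; the doubling dilation schedule is the usual WaveNet convention and is illustrative only, since the abstract does not give Sea-Wave's exact configuration.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field, in samples, of a stack of 1-D dilated causal
    convolutions: each layer adds (kernel_size - 1) * dilation samples."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Doubling dilations, the usual WaveNet scheme:
print(receptive_field(2, [1, 2, 4, 8, 16, 32]))  # prints 64
```

Six layers with kernel size 2 already cover 64 samples, which is why such stacks can relate slow envelope dynamics to EEG without pooling.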
IEEE open journal of signal processing, vol. 5, pp. 686-699. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474194
Citations: 0
An Overview of the ADReSS-M Signal Processing Grand Challenge on Multilingual Alzheimer's Dementia Recognition Through Spontaneous Speech
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-18 DOI: 10.1109/OJSP.2024.3378595
Saturnino Luz;Fasih Haider;Davida Fromm;Ioulietta Lazarou;Ioannis Kompatsiaris;Brian MacWhinney
The ADReSS-M Signal Processing Grand Challenge was held at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023. The challenge targeted difficult automatic prediction problems of great societal and medical relevance, namely, the detection of Alzheimer's Dementia (AD) and the estimation of cognitive test scores. Participants were invited to create models for the assessment of cognitive function based on spontaneous speech data. Most of these models employed signal processing and machine learning methods. The ADReSS-M challenge was designed to assess the extent to which predictive models built based on speech in one language generalise to another language. The language data compiled and made available for ADReSS-M comprised English, for model training, and Greek, for model testing and validation. To the best of our knowledge, no previous shared research task has investigated acoustic features of the speech signal or linguistic characteristics in the context of multilingual AD detection. This paper describes the context of the ADReSS-M challenge, its data sets, its predictive tasks, the evaluation methodology we employed, our baseline models and results, and the top five submissions. The paper concludes with a summary discussion of the ADReSS-M results, and our critical assessment of the future outlook in this field.
IEEE open journal of signal processing, vol. 5, pp. 738-749. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474114
Citations: 0
ICASSP 2023 Deep Noise Suppression Challenge
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-18 DOI: 10.1109/OJSP.2024.3378602
Harishchandra Dubey;Ashkan Aazami;Vishak Gopal;Babak Naderi;Sebastian Braun;Ross Cutler;Alex Ju;Mehdi Zohourian;Min Tang;Mehrsa Golestaneh;Robert Aichner
The ICASSP 2023 Deep Noise Suppression (DNS) Challenge marks the fifth edition of the DNS challenge series. DNS challenges were organized from 2019 to 2023 to foster research in the field of DNS. Previous DNS challenges were held at INTERSPEECH 2020, ICASSP 2021, INTERSPEECH 2021, and ICASSP 2022. This challenge aims to advance models capable of jointly addressing denoising, dereverberation, and interfering talker suppression, with separate tracks focusing on headset and speakerphone scenarios. The challenge facilitates personalized deep noise suppression by providing accompanying enrollment clips for each test clip, each containing the primary talker only, which can be used to compute a speaker identity feature and disentangle primary and interfering speech. While the majority of models submitted to the challenge were personalized, the same teams emerged as the winners in both tracks. The best models demonstrated improvements of 0.145 and 0.141 in the challenge's score, respectively, when compared to the noisy blind test set. We present additional analysis and draw comparisons to previous challenges.
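The use of an enrollment clip to "compute a speaker identity feature and disentangle primary and interfering speech" can be illustrated with a toy gate on speaker embeddings. This is a hedged simplification: submitted personalized models typically condition the suppression network on the enrollment embedding rather than hard-thresholding, and every name below is hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_primary_talker(frame_embedding, enrollment_embedding, threshold=0.5):
    """Treat a frame as primary-talker speech when its speaker embedding
    lies close to the embedding computed from the enrollment clip."""
    return cosine_similarity(frame_embedding, enrollment_embedding) >= threshold
```

A real system would extract the embeddings with a speaker-verification model and use the similarity as a soft conditioning signal, but the decision being made is the same: keep the enrolled talker, suppress everyone else.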
IEEE open journal of signal processing, vol. 5, pp. 725-737. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474162
Citations: 0
Towards Automated Seizure Detection With Wearable EEG – Grand Challenge
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-18 DOI: 10.1109/OJSP.2024.3378604
Miguel Bhagubai;Lauren Swinnen;Evy Cleeren;Wim Van Paesschen;Maarten De Vos;Christos Chatzichristos
The diagnosis of epilepsy can be confirmed in-hospital via video-electroencephalography (vEEG). Currently, long-term monitoring is limited to self-reporting seizure occurrences by the patients. In recent years, the development of wearable sensors has allowed monitoring patients outside of specialized environments. The application of wearable EEG devices for monitoring epileptic patients in ambulatory environments is still hampered by the low performance achieved by automated seizure detection frameworks. In this work, we present the results of a seizure detection grand challenge, organized as an attempt to stimulate the development of automated methodologies for detection of seizures on wearable EEG. The main drawback in developing wearable EEG seizure detection algorithms is the lack of data needed to train such frameworks. In this challenge, we provided participants with a large dataset of 42 patients with focal epilepsy, containing continuous recordings of behind-the-ear (bte) EEG. We challenged participants to develop a robust seizure classifier based on wearable EEG. Additionally, we proposed a subtask in order to motivate data-centric approaches to improve the training and performance of seizure detection models. An additional dataset, containing recordings with a bte-EEG wearable device, was employed to evaluate the work submitted by participants. In this paper, we present the five best scoring methodologies. The best performing approach was a feature-based decision tree ensemble algorithm with data augmentation via Fourier Transform surrogates.
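The Fourier Transform surrogate augmentation used by the winning approach has a standard construction: keep a signal's amplitude spectrum and randomize its phases, yielding a new time series with the same power spectrum (and hence autocorrelation). A minimal sketch, with illustrative names:

```python
import numpy as np

def ft_surrogate(signal, rng):
    """Fourier-transform surrogate of a real 1-D signal: preserve the
    amplitude spectrum, replace the phases with uniform random ones."""
    n = len(signal)
    amplitudes = np.abs(np.fft.rfft(signal))
    phases = rng.uniform(0.0, 2.0 * np.pi, amplitudes.shape)
    phases[0] = 0.0          # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0     # keep the Nyquist bin real for even-length signals
    return np.fft.irfft(amplitudes * np.exp(1j * phases), n=n)
```

Applied to EEG training segments, this multiplies the effective dataset size while leaving spectral statistics intact, which is what makes it attractive when labeled seizure data are scarce.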
IEEE open journal of signal processing, vol. 5, pp. 717-724. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474132
Citations: 0
REM-U-Net: Deep Learning Based Agile REM Prediction With Energy-Efficient Cell-Free Use Case
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-18 DOI: 10.1109/OJSP.2024.3378591
Hazem Sallouha;Shamik Sarkar;Enes Krijestorac;Danijela Cabric
Radio environment maps (REMs) hold a central role in optimizing wireless network deployment, enhancing network performance, and ensuring effective spectrum management. Conventional REM prediction methods are either excessively time-consuming, e.g., ray tracing, or inaccurate, e.g., statistical models, limiting their adoption in modern inherently dynamic wireless networks. Deep learning-based REM prediction has recently attracted considerable attention as an appealing, accurate, and time-efficient alternative. However, existing works on REM prediction using deep learning are either confined to 2D maps or use a relatively small dataset. In this paper, we introduce a runtime-efficient REM prediction framework based on U-Nets, trained on a large-scale 3D maps dataset. In addition, data preprocessing steps are investigated to further refine the REM prediction accuracy. The proposed U-Net framework, along with preprocessing steps, are evaluated in the context of the 2023 IEEE ICASSP Signal Processing Grand Challenge, namely, the First Pathloss Radio Map Prediction Challenge. The evaluation results demonstrate that the proposed method achieves an average normalized root-mean-square error (RMSE) of 0.045 with an average of 14 milliseconds (ms) runtime. Finally, we position our achieved REM prediction accuracy in the context of a relevant cell-free massive multiple-input multiple-output (CF-mMIMO) use case. We demonstrate that one can obviate consuming energy on large-scale fading (LSF) measurements and rely on predicted REM instead to decide which sleep access points (APs) to switch on in a CF-mMIMO network that adopts a minimum propagation loss AP switch ON/OFF strategy.
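The reported figure of 0.045 is a normalized RMSE between predicted and ground-truth pathloss maps. One common convention, shown below, normalizes by the ground-truth dynamic range; the challenge's exact normalization is not given in the abstract, so treat this as an assumption.

```python
import numpy as np

def normalized_rmse(predicted_map, true_map):
    """RMSE between a predicted and a ground-truth pathloss map,
    normalized by the ground-truth dynamic range."""
    rmse = np.sqrt(np.mean((predicted_map - true_map) ** 2))
    return float(rmse / (true_map.max() - true_map.min()))
```

For example, a prediction offset from the ground truth by a constant equal to 10% of the map's range scores 0.1 under this metric.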
IEEE open journal of signal processing, vol. 5, pp. 750-765. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474197
Citations: 0
ICASSP 2023 Acoustic Echo Cancellation Challenge
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-13 DOI: 10.1109/OJSP.2024.3376289
Ross Cutler;Ando Saabas;Tanel Pärnamaa;Marju Purin;Evgenii Indenbom;Nicolae-Cătălin Ristea;Jegor Gužvin;Hannes Gamper;Sebastian Braun;Robert Aichner
The ICASSP 2023 Acoustic Echo Cancellation Challenge is intended to stimulate research in acoustic echo cancellation (AEC), an important area of speech enhancement that remains a top issue in audio communication. This is the fourth AEC challenge; it is enhanced by adding a second track for personalized acoustic echo cancellation, reducing the algorithmic + buffering latency to 20 ms, and including a full-band version of AECMOS (Purin et al., 2020). We open-source two large datasets to train AEC models under both single-talk and double-talk scenarios. These datasets consist of recordings from more than 10,000 real audio devices and human speakers in real environments, as well as a synthetic dataset. We open-source an online subjective test framework and provide an objective metric so that researchers can quickly test their results. The winners of this challenge were selected based on the average mean opinion score (MOS) achieved across all scenarios and the word accuracy (WAcc) rate.
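Winners were ranked on average MOS across scenarios together with WAcc. A toy sketch of one possible composite ranking follows; the equal weighting and the rescaling of MOS from the 1-5 scale to [0, 1] are illustrative assumptions, not the official challenge formula, and the team names are made up:

```python
def challenge_score(mos_by_scenario, wacc, mos_weight=0.5):
    """Composite score: average MOS (rescaled from the 1-5 MOS scale
    to [0, 1]) blended with word accuracy (WAcc). The weighting here
    is an assumption for illustration, not the official formula."""
    avg_mos = sum(mos_by_scenario) / len(mos_by_scenario)
    mos_01 = (avg_mos - 1.0) / 4.0  # map MOS in [1, 5] to [0, 1]
    return mos_weight * mos_01 + (1.0 - mos_weight) * wacc

# Hypothetical entries: per-scenario MOS values and a word accuracy rate.
entries = {
    "team_a": challenge_score([4.2, 4.0, 3.8], wacc=0.85),
    "team_b": challenge_score([4.5, 3.9, 4.1], wacc=0.80),
}
ranking = sorted(entries, key=entries.get, reverse=True)
print(ranking[0])  # team_a
```

Here team_a's better WAcc outweighs team_b's slightly higher average MOS under the assumed equal weighting.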
{"title":"ICASSP 2023 Acoustic Echo Cancellation Challenge","authors":"Ross Cutler;Ando Saabas;Tanel Pärnamaa;Marju Purin;Evgenii Indenbom;Nicolae-Cătălin Ristea;Jegor Gužvin;Hannes Gamper;Sebastian Braun;Robert Aichner","doi":"10.1109/OJSP.2024.3376289","DOIUrl":"https://doi.org/10.1109/OJSP.2024.3376289","url":null,"abstract":"The ICASSP 2023 Acoustic Echo Cancellation Challenge is intended to stimulate research in acoustic echo cancellation (AEC), which is an important area of speech enhancement and is still a top issue in audio communication. This is the fourth AEC challenge and it is enhanced by adding a second track for personalized acoustic echo cancellation, reducing the algorithmic + buffering latency to 20 ms, as well as including a full-band version of AECMOS (Purin et al., 2020). We open source two large datasets to train AEC models under both single talk and double talk scenarios. These datasets consist of recordings from more than 10,000 real audio devices and human speakers in real environments, as well as a synthetic dataset. We open source an online subjective test framework and provide an objective metric for researchers to quickly test their results. 
The winners of this challenge were selected based on the average mean opinion score (MOS) achieved across all scenarios and the word accuracy (WAcc) rate.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"5 ","pages":"675-685"},"PeriodicalIF":2.9,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10472289","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141447984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distributed Combined Channel Estimation and Optimal Uplink Receive Combining for User-Centric Cell-Free Massive MIMO Systems
Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-13 DOI: 10.1109/OJSP.2024.3377098
Robbe Van Rompaey;Marc Moonen
Cell-free massive MIMO (CFmMIMO) is considered one of the enablers for meeting the demand for increasing data rates in next-generation (6G) wireless communications. In user-centric CFmMIMO, each user equipment (UE) is served by a user-selected set of surrounding access points (APs), requiring efficient signal processing algorithms that minimize inter-AP communications while still providing a good quality of service to all UEs. This paper provides algorithms for channel estimation (CE) and uplink (UL) receive combining (RC), designed for CFmMIMO channels under different assumptions on the structure of the channel covariances. Three different channel models are considered: line-of-sight (LoS) channels, non-LoS (NLoS) channels (the common Rayleigh fading model), and a combination of LoS and NLoS channels (the general Rician fading model). The LoS component introduces correlation between the channels at different APs that can be exploited to improve the CE and the RC. The channel estimates and receive combiners are obtained in each AP by processing the local antenna signals of the AP, together with compressed versions of all the other antenna signals of the APs serving the UE, during UL training. To make the proposed method scalable, the distributed user-centric channel estimation and receive combining (DUCERC) algorithm is presented, which significantly reduces the necessary communications between the APs. The effectiveness of the proposed method and algorithm is demonstrated via numerical simulations.
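The three channel models above form the standard Rician fading family, in which a K-factor balances the deterministic LoS component against Rayleigh-distributed NLoS scattering. A minimal sampling sketch of that textbook model (not the paper's estimators; the function name and unit-power normalization are our choices):

```python
import numpy as np

def rician_channel(n, k_factor, rng=None):
    """Draw n samples of a unit-average-power Rician fading channel.
    k_factor is the LoS-to-NLoS power ratio (linear scale):
    k_factor=0 reduces to Rayleigh (pure NLoS) fading, and
    k_factor -> infinity approaches a purely deterministic LoS channel."""
    rng = np.random.default_rng(rng)
    los = np.sqrt(k_factor / (k_factor + 1.0))            # deterministic LoS part
    nlos_std = np.sqrt(1.0 / (2.0 * (k_factor + 1.0)))    # per-component std
    nlos = nlos_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return los + nlos

h = rician_channel(100_000, k_factor=3.0, rng=0)
print(np.mean(np.abs(h) ** 2))  # approximately 1.0 (unit average power)
```

With this normalization the average power is K/(K+1) + 1/(K+1) = 1 regardless of the K-factor, which makes channels with different LoS strengths directly comparable.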
{"title":"Distributed Combined Channel Estimation and Optimal Uplink Receive Combining for User- Centric Cell-Free Massive MIMO Systems","authors":"Robbe Van Rompaey;Marc Moonen","doi":"10.1109/OJSP.2024.3377098","DOIUrl":"https://doi.org/10.1109/OJSP.2024.3377098","url":null,"abstract":"Cell-free massive MIMO (CFmMIMO) is considered as one of the enablers to meet the demand for increasing data rates of next generation (6G) wireless communications. In user-centric CFmMIMO, each user equipment (UE) is served by a user-selected set of surrounding access points (APs), requiring efficient signal processing algorithms minimizing inter-AP communications, while still providing a good quality of service to all UEs. This paper provides algorithms for channel estimation (CE) and uplink (UL) receive combining (RC), designed for CFmMIMO channels using different assumptions on the structure of the channel covariances. Three different channel models are considered: line-of-sight (LoS) channels, non-LoS (NLoS) channels (the common Rayleigh fading model) and a combination of LoS and NLoS channels (the general Rician fading model). The LoS component introduces correlation between the channels at different APs that can be exploited to improve the CE and the RC. The channel estimates and receive combiners are obtained in each AP by processing the local antenna signals of the AP, together with compressed versions of all the other antenna signals of the APs serving the UE, during UL training. To make the proposed method scalable, the distributed user-centric channel estimation and receive combining (DUCERC) algorithm is presented that significantly reduces the necessary communications between the APs. 
The effectiveness of the proposed method and algorithm is demonstrated via numerical simulations.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"5 ","pages":"559-576"},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10472081","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140648034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Regularized Locality Preserving Indexing for Fiedler Vector Estimation
IF 2.9 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-13 DOI: 10.1109/OJSP.2024.3400683
Aylin Taştan;Michael Muma;Abdelhak M. Zoubir
The Fiedler vector is the eigenvector associated with the algebraic connectivity of the graph Laplacian. It is central to graph analysis, as it provides substantial information about the latent structure of a graph. In real-world applications, however, the data may be subject to heavy-tailed noise and outliers, which deteriorate the structure of the Fiedler vector estimate and lead to a breakdown of popular methods. Thus, we propose a Robust Regularized Locality Preserving Indexing (RRLPI) Fiedler vector estimation method that approximates the nonlinear manifold structure of the Laplace-Beltrami operator while minimizing the impact of outliers. To achieve this aim, an analysis of the effects of two fundamental outlier types on the eigen-decomposition of block affinity matrices is conducted. Then, an error model is formulated, based on which the RRLPI method is developed. It includes an unsupervised regularization parameter selection algorithm that leverages the geometric structure of the projection space. The performance is benchmarked against existing methods in terms of detection probability, partitioning quality, image segmentation capability, robustness, and computation time, using a large variety of synthetic and real data experiments.
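For reference, the Fiedler vector mentioned above comes from the eigen-decomposition of the graph Laplacian. The sketch below uses a plain dense eigendecomposition on a toy two-cluster graph; this is the classical non-robust estimator, not the RRLPI method the paper proposes:

```python
import numpy as np

def fiedler_vector(adjacency):
    """Return (algebraic connectivity, Fiedler vector) of a graph:
    the second-smallest eigenvalue of the Laplacian L = D - A and its
    eigenvector. Plain eigendecomposition, with no robustness to outliers."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return eigvals[1], eigvecs[:, 1]

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge (2,3):
# the signs of the Fiedler vector recover the two clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
algebraic_connectivity, v = fiedler_vector(A)
opposite = bool(np.all(np.sign(v[:3]) == -np.sign(v[3:])))
print(opposite)  # True: the two triangles get opposite signs
```

Thresholding the Fiedler vector at zero is the basic spectral bipartitioning step; RRLPI addresses the case where heavy-tailed noise corrupts this eigenvector.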
{"title":"Robust Regularized Locality Preserving Indexing for Fiedler Vector Estimation","authors":"Aylin Taştan;Michael Muma;Abdelhak M. Zoubir","doi":"10.1109/OJSP.2024.3400683","DOIUrl":"https://doi.org/10.1109/OJSP.2024.3400683","url":null,"abstract":"The Fiedler vector is the eigenvector associated with the algebraic connectivity of the graph Laplacian. It is central to graph analysis as it provides substantial information to learn the latent structure of a graph. In real-world applications, however, the data may be subject to heavy-tailed noise and outliers which deteriorate the structure of the Fiedler vector estimate and lead to a breakdown of popular methods. Thus, we propose a Robust Regularized Locality Preserving Indexing (RRLPI) Fiedler vector estimation method that approximates the nonlinear manifold structure of the Laplace Beltrami operator while minimizing the impact of outliers. To achieve this aim, an analysis of the effects of two fundamental outlier types on the eigen-decomposition of block affinity matrices is conducted. Then, an error model is formulated based on which the RRLPI method is developed. It includes an unsupervised regularization parameter selection algorithm that leverages the geometric structure of the projection space. 
The performance is benchmarked against existing methods in terms of detection probability, partitioning quality, image segmentation capability, robustness and computation time using a large variety of synthetic and real data experiments.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"5 ","pages":"867-885"},"PeriodicalIF":2.9,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10530068","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141965021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0