
Frontiers in Signal Processing: latest publications

Spread spectrum modulation recognition based on phase diagram entropy
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2023-07-05 DOI: 10.3389/frsip.2023.1197619
Denis Stanescu, A. Digulescu, C. Ioana, A. Serbanescu
Wireless communication technologies are under intensive study and are progressing rapidly, which has led to a large increase in the number of end-users. As a result, the radio spectrum has become more crowded than ever. These trends create an urgent need for more reliable and intelligent communication systems that can improve spectrum efficiency. In particular, modulation scheme recognition occupies a crucial position in civil and military applications, especially with the emergence of Software Defined Radio (SDR). Modulation recognition is an indispensable task when performing spectrum sensing in Cognitive Radio (CR), and spread spectrum (SS) techniques represent the foundation for the design of Cognitive Radio systems. In this work, we propose a new method for characterizing spread spectrum modulations that provides relevant information for recognizing this type of modulation. Using the proposed approach, classification accuracies above 90% are obtained, an advantage over classical methods, whose performance is below 75%.
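The characterization the abstract describes, measuring the entropy of a signal's phase diagram, can be sketched as a delay embedding followed by the Shannon entropy of the occupancy histogram. A minimal illustration; the embedding delay, bin count, and entropy definition here are assumptions, not the authors' exact settings:

```python
import numpy as np

def phase_diagram_entropy(signal, delay=1, bins=32):
    # Delay-embed the signal into 2D phase-diagram points (x[n], x[n+delay]),
    # then take the Shannon entropy of the occupancy histogram.
    x, y = signal[:-delay], signal[delay:]
    hist, _, _ = np.histogram2d(x, y, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A chip-like +/-1 sequence occupies few phase-diagram cells while broadband
# noise occupies many, so their entropies separate.
rng = np.random.default_rng(0)
chips = np.repeat(rng.choice([-1.0, 1.0], 64), 8)   # crude DSSS-like chip stream
noise = rng.standard_normal(chips.size)
e_chips = phase_diagram_entropy(chips)
e_noise = phase_diagram_entropy(noise)
```

A classifier would then use such entropy values (possibly at several delays) as features separating spread spectrum modulation types.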
Citations: 0
The disparity between optimal and practical Lagrangian multiplier estimation in video encoders
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2023-07-03 DOI: 10.3389/frsip.2023.1205104
D. Ringis, Vibhoothi, François Pitié, A. Kokaram
With video streaming making up 80% of global internet bandwidth, the need to deliver high-quality video at low bitrates, combined with the high complexity of modern codecs, has led to the idea of per-clip optimisation in transcoding. In this paper, we revisit the Lagrangian multiplier parameter, which is at the core of rate-distortion optimisation. Currently, video encoders use prediction models to set this parameter, but these models are agnostic to the video at hand. We explore the gains that could be achieved using a per-clip direct-search optimisation of the Lagrangian multiplier parameter, and we evaluate this optimisation framework on a much larger corpus of videos than has been attempted in previous research. Our results show that per-clip optimisation of the Lagrangian multiplier leads to average BD-Rate improvements of 1.87% for x265 across a 10k-clip corpus of modern videos, and up to 25% on a single clip. Average improvements of 0.69% are reported for libaom-av1 on a subset of 100 clips. However, we show that a per-clip, per-frame-type optimisation of λ for libaom-av1 can increase these average gains to 2.5%, and up to 14.9% on a single clip. Our optimisation scheme requires about 50–250 additional encodes per clip, but we show that a significant speed-up can be achieved by using proxy videos in the optimisation. These computational gains (of up to ×200) incur a slight loss in BD-Rate improvement because the optimisation is conducted at lower resolutions. Overall, this paper highlights the value of re-examining the estimation of the Lagrangian multiplier in modern codecs, as significant gains are still available without changing the tools used in the standards.
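A per-clip direct search of the kind described treats the encoder as a black box and minimises a cost over the multiplier (or a scaling factor on it). A minimal sketch using golden-section search; `bd_rate_proxy` is a synthetic stand-in for "BD-Rate of this clip as a function of the λ scale", not the paper's actual encoder loop:

```python
import math

def golden_section_search(f, lo, hi, tol=1e-3):
    # Derivative-free direct search for the minimiser of a unimodal
    # function f on [lo, hi]; each step shrinks the bracket by ~0.618.
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):          # minimiser lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimiser lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Convex stand-in with a clip-specific optimum away from the default k = 1,
# mimicking a clip whose best lambda scale is not the encoder's prediction.
bd_rate_proxy = lambda k: (k - 1.3) ** 2 - 1.87
k_star = golden_section_search(bd_rate_proxy, 0.25, 4.0)
```

Each evaluation of the real cost would be one (or several) encodes, which is why the paper's 50–250 extra encodes per clip, and the proxy-video speed-up, matter.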
Citations: 0
A multilevel dynamic model for documenting, reactivating and preserving interactive multimedia art
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2023-06-30 DOI: 10.3389/frsip.2023.1183294
Alessandro Fiordelmondo, A. Russo, Mattia Pizzato, Luca Zecchinato, S. Canazza
Preserving interactive multimedia artworks is a challenging research field due to their complex nature and technological obsolescence. Established preservation strategies are inadequate, since they do not cover the complex relations between analogue and digital components, their short life expectancies, and the experience produced when the artworks are activated. The many projects in this research area highlight the urgency of creating a preservation practice focused on the new multimedia art forms. The paper introduces the Multilevel Dynamic Preservation (MDP) model, developed at the Centro di Sonologia Computazionale (CSC) of the University of Padova, which aims to preserve multimedia artworks through different levels of information (about the components, their relationships, and the activated experiences) across various exhibitions, and thus as a process or a dynamic object. The model has been developed through several case studies. This paper reports a specific and complex one: the “hybrid reactivation” of Il caos delle sfere, a 1999 interactive installation by Italian composer Carlo De Pirro. The reactivation process aims at preserving the work's identity rather than simply replicating the original installation, and consists of both the replacement of old and non-functioning components (an “adaptive/update approach”) and the reactivation of original parts (a “purist approach”), hence the name “hybrid reactivation”. Through this case study, it was possible to test and optimize the model in all aspects: from collecting old documentation and using it for reactivation, to creating new documentation and archiving the entire artwork. The model allows us to preserve the artwork as a process of change, minimizing the loss of information about previous versions. Most importantly, it lets us rethink the concept of the authenticity of interactive multimedia art, shifting the focus from materiality to the experience and function that artworks activate. The model avoids recording both the last reactivation and the first exhibition as authentic; instead, it records the process of transformation between reactivations. It is through this process that the authenticity of the artwork can be inferred.
Citations: 0
Perceptual video quality assessment: the journey continues!
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2023-06-27 DOI: 10.3389/frsip.2023.1193523
Avinab Saha, Sai Karthikey Pentapati, Zaixi Shang, Ramit Pahwa, Bowen Chen, Hakan Emre Gedik, Sandeep Mishra, A. Bovik
Perceptual Video Quality Assessment (VQA) is one of the most fundamental and challenging problems in the field of Video Engineering. Along with video compression, it has become one of the two dominant theoretical and algorithmic technologies in television streaming and social media. Over the last two decades, the volume of video traffic over the internet has grown exponentially, powered by rapid advancements in cloud services, faster video compression technologies, and increased access to high-speed, low-latency wireless internet connectivity. This has given rise to issues related to delivering extraordinary volumes of picture and video data to an increasingly sophisticated and demanding global audience. Consequently, developing algorithms to measure the quality of pictures and videos as perceived by humans has become increasingly critical, since these algorithms can be used to perceptually optimize trade-offs between quality and bandwidth consumption. We trace how VQA models have evolved from algorithms developed for generic 2D videos to specialized algorithms explicitly designed for on-demand video streaming, user-generated content (UGC), virtual and augmented reality (VR and AR), cloud gaming, high dynamic range (HDR), and high frame rate (HFR) scenarios. Along the way, we also describe the advancement in algorithm design, beginning with traditional hand-crafted, feature-based methods and ending with the deep-learning models that power today's accurate VQA algorithms. We also discuss the evolution of subjective video quality databases containing videos and human-annotated quality scores, which are the necessary tools to create, test, compare, and benchmark VQA algorithms. To finish, we discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future.
Citations: 2
4DEgo: ego-velocity estimation from high-resolution radar data
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2023-06-27 DOI: 10.3389/frsip.2023.1198205
Prashant Rai, N. Strokina, R. Ghabcheloo
Automotive radars allow for perception of the environment in adverse visibility and weather conditions. New high-resolution sensors have demonstrated potential for tasks beyond obstacle detection and velocity adjustment, such as mapping or target tracking. This paper proposes an end-to-end method for ego-velocity estimation based on radar scan registration. Our architecture includes a 3D convolution over all three channels of the heatmap, capturing features associated with motion, and an attention mechanism for selecting significant features for regression. To the best of our knowledge, this is the first work utilizing the full 3D radar heatmap for ego-velocity estimation. We verify the efficacy of our approach using the publicly available ColoRadar dataset and study the effect of architectural choices and distributional shifts on performance.
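The attention step the abstract mentions, scoring candidate features and pooling them before regression, can be sketched in a few lines. Everything below (shapes, random placeholder weights, the absence of the 3D convolutional front end) is illustrative, not the trained network:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool_regress(features, w_score, w_reg):
    # Score each feature vector, convert scores to attention weights,
    # pool the features, and regress a scalar velocity from the pooled vector.
    scores = features @ w_score        # (N,) one relevance score per feature
    alpha = softmax(scores)            # attention weights, sum to 1
    pooled = alpha @ features          # (D,) attention-weighted summary
    return float(pooled @ w_reg)       # scalar ego-velocity estimate

rng = np.random.default_rng(1)
feats = rng.standard_normal((16, 8))   # 16 candidate feature vectors, dim 8
v_hat = attention_pool_regress(feats, rng.standard_normal(8), rng.standard_normal(8))
```

In the paper's setting the feature vectors would come from 3D convolutions over the radar heatmap, and the weights would be learned end-to-end.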
Citations: 0
Epileptic seizure prediction based on multiresolution convolutional neural networks
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2023-05-30 DOI: 10.3389/frsip.2023.1175305
Ali K. Ibrahim, H. Zhuang, E. Tognoli, Ali Muhamed Ali, N. Erdol
Epilepsy deprives patients of control over their body or consciousness and puts them at risk in the course of daily life. This article pursues the development of a smart neurocomputational technology that alerts epileptic patients wearing EEG sensors to an impending seizure. An innovative approach to epileptic seizure prediction is proposed that improves prediction accuracy and reduces the false-alarm rate compared with state-of-the-art benchmarks. The maximal overlap discrete wavelet transform is used to decompose EEG signals into different frequency resolutions, and a multiresolution convolutional neural network is designed to extract discriminative features from each frequency band. The algorithm automatically generates patient-specific features to best classify preictal and interictal segments for the subject. The method can be applied to any patient from any dataset without a handcrafted feature extraction procedure. The proposed approach was tested on two popular epilepsy patient datasets. It achieved a sensitivity of 82% with a false prediction rate of 0.058 on the Children's Hospital Boston-MIT scalp EEG dataset, and a sensitivity of 85% with a false prediction rate of 0.19 on the American Epilepsy Society Seizure Prediction Challenge dataset. This technology provides a personalized solution with improved sensitivity and specificity, and because of the algorithm's intrinsic ability to generalize, it removes the reliance on epileptologists' expertise to tune a wearable technological aid, which will ultimately help deploy it broadly, including in medically underserved locations across the globe.
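The maximal overlap DWT used here, unlike the decimated DWT, keeps every subband at the full signal length, so the bands stay aligned sample-by-sample and can be stacked as CNN input channels. A minimal Haar-filter sketch of this undecimated decomposition (the paper's wavelet choice and depth may differ):

```python
import numpy as np

def modwt_haar(x, levels=3):
    # Undecimated (maximal overlap) Haar DWT via circular shifts: at level
    # j the filter is dilated by 2**j, and every band keeps len(x) samples.
    v = np.asarray(x, dtype=float)
    bands = []
    for j in range(levels):
        rolled = np.roll(v, 2 ** j)
        bands.append((v - rolled) / 2.0)   # detail (wavelet) coefficients
        v = (v + rolled) / 2.0             # smooth (scaling) coefficients
    bands.append(v)                        # final smooth band
    return bands

# Toy EEG-like signal: a rhythm plus noise. The returned bands sum back to
# the input and preserve its energy, two standard MODWT properties.
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * np.arange(256) / 32) + 0.5 * rng.standard_normal(256)
bands = modwt_haar(eeg, levels=4)
```

Stacking `bands` as a (levels+1, N) array gives the multiresolution input on which a per-band CNN can learn discriminative features.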
Citations: 0
From 2D to 3D video conferencing: modular RGB-D capture and reconstruction for interactive natural user representations in immersive extended reality (XR) communication
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2023-05-22 DOI: 10.3389/frsip.2023.1139897
S. Gunkel, S. Dijkstra-Soudarissanane, H. Stokking, O. Niamut
With recent advancements in Virtual Reality (VR) and Augmented Reality (AR) hardware, many new immersive Extended Reality (XR) applications and services have arisen. One remaining challenge is to overcome the social isolation often felt in these extended reality experiences and to enable natural multi-user communication with high Social Presence. While many solutions address this issue with computer-generated “artificial” avatars (based on pre-rendered 3D models), this form of user representation may not convey a sense of co-presence in many use cases, particularly for personal communication (for example, with family, a doctor, or a sales representative) or for applications requiring photorealistic rendering. One alternative is to capture users (and objects) with RGBD sensors to allow real-time photorealistic representations of users. In this paper, we present a complete and modular RGBD capture application and outline the steps needed to use RGBD data for photorealistic 3D user representations. We describe different capture modalities, as well as the individual functional processing blocks with their advantages and disadvantages. We evaluate our approach in two ways: a technical evaluation of the operation of the different modules, and two small-scale user evaluations within integrated applications. The integrated applications demonstrate the modular RGBD capture in both augmented reality and virtual reality communication use cases, tested in realistic real-world settings. Our examples show that the proposed modular capture and reconstruction pipeline allows for easy evaluation and extension of each step of the processing pipeline. Furthermore, it allows parallel code execution, keeping performance overhead and delay low.
Finally, our proposed methods show that integrating 3D photorealistic user representations into existing video communication transmission systems is feasible and enables new immersive extended reality applications.
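The geometric core of turning an RGBD capture into a 3D user representation is back-projecting each depth pixel through the pinhole camera model. A minimal sketch; the intrinsics and the flat test depth map are illustrative assumptions, and the paper's pipeline involves much more (registration, meshing, streaming):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project a depth map (metres) to a 3D point cloud using the
    # pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)   # toy depth map: a flat wall 2 m from the camera
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Colouring each point from the registered RGB image then yields the photorealistic point cloud that the renderer displays to remote participants.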
Citations: 2
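The modular capture pipeline described in the abstract above — independent functional blocks chained per RGBD frame — can be sketched as follows. This is a toy illustration, not the authors' implementation: the `Frame` layout, the stage names (`depth_threshold`, `hole_fill`), and the threshold values are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

# Toy sketch of a modular per-frame RGBD pipeline. The Frame layout, the
# stage names, and the depth thresholds are illustrative assumptions, not
# the authors' actual module interfaces.

@dataclass
class Frame:
    rgb: List[List[int]]    # per-pixel color (toy flattened representation)
    depth: List[float]      # per-pixel depth in meters

def depth_threshold(frame: Frame, near: float = 0.3, far: float = 2.5) -> Frame:
    """Background removal: zero out depth samples outside the capture volume."""
    depth = [d if near <= d <= far else 0.0 for d in frame.depth]
    return Frame(frame.rgb, depth)

def hole_fill(frame: Frame) -> Frame:
    """Toy inpainting: replace invalid samples with the last valid depth."""
    out, last = [], 0.0
    for d in frame.depth:
        last = d if d > 0.0 else last
        out.append(last)
    return Frame(frame.rgb, out)

Stage = Callable[[Frame], Frame]

def run_pipeline(frame: Frame, stages: List[Stage]) -> Frame:
    """Chain independent Frame -> Frame stages; stages are freely swappable."""
    for stage in stages:
        frame = stage(frame)
    return frame

raw = Frame(rgb=[[128, 128, 128]] * 6,
            depth=[0.1, 1.2, 1.3, 9.0, 1.4, 3.1])
clean = run_pipeline(raw, [depth_threshold, hole_fill])
print(clean.depth)  # -> [0.0, 1.2, 1.3, 1.3, 1.4, 1.4]
```

Because each stage shares the simple `Frame -> Frame` signature, stages can be reordered, swapped, or run in parallel worker threads without touching the rest of the pipeline — the property the abstract credits for low overhead and easy extension.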
Data-driven airborne bayesian forward-looking superresolution imaging based on generalized Gaussian distribution
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-05-11 DOI: 10.3389/frsip.2023.1093203
Hongmeng Chen, Zeyu Wang, Yingjie Zhang, X. Jin, Wenquan Gao, Jizhou Yu
Airborne forward-looking radar (AFLR) has become increasingly important due to its wide application in military and civilian fields, such as automatic driving, sea surveillance, and airport surveillance and guidance. Recently, the sparse deconvolution technique has attracted much attention in AFLR. However, azimuth resolution performance gradually degrades as the imaging scene becomes more complex. In this paper, a data-driven airborne Bayesian forward-looking superresolution imaging algorithm based on the generalized Gaussian distribution (GGD-Bayesian) is proposed for complex imaging scenes. The generalized Gaussian distribution is used to describe the sparsity of the imaging scene, which is essential for adaptively fitting different imaging scenes. Moreover, a mathematical model for forward-looking imaging is established under the maximum a posteriori (MAP) criterion within a Bayesian framework. To solve the resulting optimization problem, a quasi-Newton algorithm is derived and applied. The main contribution of the paper is the automatic selection of the sparsity parameter during forward-looking imaging. A performance assessment with simulated data demonstrates the effectiveness of the proposed GGD-Bayesian algorithm under complex scenarios.
{"title":"Data-driven airborne bayesian forward-looking superresolution imaging based on generalized Gaussian distribution","authors":"Hongmeng Chen, Zeyu Wang, Yingjie Zhang, X. Jin, Wenquan Gao, Jizhou Yu","doi":"10.3389/frsip.2023.1093203","DOIUrl":"https://doi.org/10.3389/frsip.2023.1093203","abstract":"Airborne forward-looking radar (AFLR) has been more and more important due to its wide application in the military and civilian fields, such as automatic driving, sea surveillance, airport surveillance and guidance. Recently, sparse deconvolution technique has been paid much attention in AFLR. However, the azimuth resolution performance gradually decreases with the complexity of the imaging scene. In this paper, a data-driven airborne Bayesian forward-looking superresolution imaging algorithm based on generalized Gaussian distribution (GGD-Bayesian) for complex imaging scenes is proposed. The generalized Gaussian distribution is utilized to describe the sparsity information of the imaging scene, which is quite essential to adaptively fit different imaging scenes. Moreover, the mathematical model for forward-looking imaging was established under the maximum a posteriori (MAP) criterion based on the Bayesian framework. To solve the above optimization problem, quasi-Newton algorithm is derived and used. The main contribution of the paper is the automatic selection of the sparsity parameter in the process of forward-looking imaging. The performance assessment with simulated data has demonstrated the effectiveness of our proposed GGD-Bayesian algorithm under complex scenarios.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85594732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
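The core estimation step in the abstract above — a MAP objective combining a data-fidelity term with a generalized-Gaussian sparsity prior — can be illustrated on a 1-D toy deconvolution problem. This sketch uses plain gradient descent instead of the paper's quasi-Newton solver, and the blur kernel `h`, regularization weight `lam`, and GGD shape parameter `p` are illustrative assumptions, not values from the paper.

```python
import math

# Toy 1-D sketch of MAP deconvolution with a generalized-Gaussian prior:
#   x_hat = argmin ||y - H x||^2 + lam * sum(|x_i|^p)
# solved by plain gradient descent (the paper uses a quasi-Newton method).

def convolve(x, h):
    """Zero-padded 'same'-size linear convolution: the forward operator H."""
    n, k, c = len(x), len(h), len(h) // 2
    return [sum(h[j] * x[i + j - c] for j in range(k) if 0 <= i + j - c < n)
            for i in range(n)]

def correlate(r, h):
    """Adjoint operator H^T applied to a residual r."""
    n, k, c = len(r), len(h), len(h) // 2
    out = [0.0] * n
    for i in range(n):
        for j in range(k):
            m = i + j - c
            if 0 <= m < n:
                out[m] += h[j] * r[i]
    return out

def map_deconvolve(y, h, lam=0.05, p=1.5, steps=600, lr=0.2):
    x = list(y)  # start from the blurred observation
    for _ in range(steps):
        res = [yi - ci for yi, ci in zip(y, convolve(x, h))]
        g_data = [-2.0 * v for v in correlate(res, h)]       # data-fidelity gradient
        g_prior = [lam * p * math.copysign(abs(v) ** (p - 1.0), v)
                   for v in x]                               # GGD prior gradient
        x = [xi - lr * (gd + gp) for xi, gd, gp in zip(x, g_data, g_prior)]
    return x

x_true = [0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0]   # sparse toy scene
h = [0.2, 0.6, 0.2]                                      # toy azimuth blur kernel
y = convolve(x_true, h)                                  # blurred observation
x_hat = map_deconvolve(y, h)                             # sharpened estimate
```

Varying `p` changes how strongly the prior favors sparse scenes (`p` near 1 approaches a Laplacian prior, `p = 2` a Gaussian one) — exactly the degree of freedom the GGD model adds; the paper's contribution is selecting this sparsity parameter automatically from the data.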
Apparent color picker: color prediction model to extract apparent color in photos
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-05-09 DOI: 10.3389/frsip.2023.1133210
Yuki Kubota, Shigeo Yoshida, M. Inami
A color extraction interface that reflects human color perception helps users pick colors from natural images as they see them. The apparent color in a photo differs from the pixel color due to complex factors, including color constancy and adjacent colors. However, methodologies for estimating the apparent color in photos have yet to be proposed. In this paper, the authors investigate suitable model structures and features for constructing an apparent color picker, which extracts the apparent color from natural photos. Regression models were constructed on a psychophysical dataset for given images to predict the apparent color from image features. The linear regression model incorporates features that reflect multi-scale adjacent colors. The evaluation experiments confirm that the estimated color was closer to the apparent color than the pixel color for 70%–80% of the images on average. However, accuracy decreased under several conditions, including low and high saturation at low luminance. The authors believe that the proposed methodology could be applied to develop user interfaces that compensate for the discrepancy between human perception and computer predictions.
{"title":"Apparent color picker: color prediction model to extract apparent color in photos","authors":"Yuki Kubota, Shigeo Yoshida, M. Inami","doi":"10.3389/frsip.2023.1133210","DOIUrl":"https://doi.org/10.3389/frsip.2023.1133210","url":null,"abstract":"A color extraction interface reflecting human color perception helps pick colors from natural images as users see. Apparent color in photos differs from pixel color due to complex factors, including color constancy and adjacent color. However, methodologies for estimating the apparent color in photos have yet to be proposed. In this paper, the authors investigate suitable model structures and features for constructing an apparent color picker, which extracts the apparent color from natural photos. Regression models were constructed based on the psychophysical dataset for given images to predict the apparent color from image features. The linear regression model incorporates features that reflect multi-scale adjacent colors. The evaluation experiments confirm that the estimated color was closer to the apparent color than the pixel color for an average of 70%–80% of the images. However, the accuracy decreased for several conditions, including low and high saturation at low luminance. 
The authors believe that the proposed methodology could be applied to develop user interfaces to compensate for the discrepancy between human perception and computer predictions.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89670813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
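The modeling step in the abstract above — a linear regression from pixel and adjacent-color features to the apparent color — can be sketched in miniature. Everything below is an illustrative assumption: the two-feature design (pixel value plus one surround mean), the no-intercept model, and the synthetic "apparent" responses generated from a simple simultaneous-contrast rule; the paper's actual multi-scale features and psychophysical dataset are richer.

```python
# Toy sketch: predict apparent color from the pixel value plus an
# adjacent-color (surround) feature via least-squares linear regression.
# Feature set and synthetic "perceptual" data are illustrative assumptions.

def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def fit(pixels, surrounds, apparent):
    """Least squares for: apparent ~ w_pixel * pixel + w_surround * surround."""
    s_pp = sum(p * p for p in pixels)
    s_ps = sum(p * s for p, s in zip(pixels, surrounds))
    s_ss = sum(s * s for s in surrounds)
    s_pa = sum(p * a for p, a in zip(pixels, apparent))
    s_sa = sum(s * a for s, a in zip(surrounds, apparent))
    # Normal equations of the two-feature least-squares problem.
    return solve2(s_pp, s_ps, s_ps, s_ss, s_pa, s_sa)

# Synthetic observations following a simultaneous-contrast rule:
#   apparent = pixel + 0.2 * (pixel - surround)
pixels    = [0.2, 0.4, 0.5, 0.7, 0.9]
surrounds = [0.8, 0.3, 0.5, 0.2, 0.6]
apparent  = [p + 0.2 * (p - s) for p, s in zip(pixels, surrounds)]

w_pixel, w_surround = fit(pixels, surrounds, apparent)
```

With data generated exactly by `apparent = pixel + 0.2 * (pixel - surround)`, the fit recovers `w_pixel ≈ 1.2` and `w_surround ≈ -0.2`: a negative weight on the surround, the signature of simultaneous contrast that a multi-scale version of such features lets the model capture.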
Editorial: Recent trends in multimedia forensics and visual content verification
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-05-09 DOI: 10.3389/frsip.2023.1210123
R. Caldelli, Duc Tien Dang Nguyen, Cecilia Pasquini
Huge amounts of multimedia content are generated every day, pervading the web and popular sharing platforms such as social networks. Such data carry traces embedded by the whole creation and sharing cycle, which can be recovered and exploited to assess the authenticity of a specific asset. This includes identifying the provenance of media data, the generation device or crafting method, as well as potential manipulation of the multimedia signal. Also, the massive introduction of artificial intelligence and of modern performing devices, together with new paradigms for content sharing and usage, has created the need to research novel methodologies that can globally take all these important changes into account. This Research Topic gathers cutting-edge techniques for the forensic analysis and verification of media data, including solutions at the edge of signal processing, machine/deep learning, and multimedia analysis. Research approaches to multimedia forensics have rapidly evolved in recent years, as a consequence of both technological advancements in media creation and distribution, and methodological advancements in signal processing and learning. One evident aspect is the disruptive diffusion of deep learning models for addressing tasks related to audio-visual data. As a consequence of the impressive performance boost they brought to different areas, deep architectures nowadays dominate multimedia forensics research as well. Furthermore, forensic methodologies need to be updated with respect to the constant evolution of acquisition devices and data formats. Therefore, algorithms are also designed with the goal of efficiently analyzing high-resolution data, possibly subject to advanced in-camera processing.
In addition, there is an increasing need for detection technologies able to identify synthetically generated visual data, in response to the impressive advancements of generative models based on Artificial Intelligence (AI), such as Generative Adversarial Networks (GANs). We are glad to introduce the accepted manuscripts of this Research Topic, which are well aligned with these cutting-edge research trends and are authored by highly recognized researchers.
{"title":"Editorial: Recent trends in multimedia forensics and visual content verification","authors":"R. Caldelli, Duc Tien Dang Nguyen, Cecilia Pasquini","doi":"10.3389/frsip.2023.1210123","DOIUrl":"https://doi.org/10.3389/frsip.2023.1210123","abstract":"Huge amounts of multimedia content are in fact generated every day, pervading the web and popular sharing platforms such as social networks. Such data carry embedded traces due to the whole creation and sharing cycle, which can be recovered and exploited to assess the authenticity of a specific asset. This includes identifying the provenance of media data, the generation device or crafting method, as well as potential manipulation of the multimedia signal. Also, the massive introduction of artificial intelligence and of modern performing devices, together with new paradigms for content sharing and usage, have determined the need to research novel methodologies that can globally take into account all these important changes. This Research Topic gathers cutting-edge techniques for the forensic analysis and verification of media data, including solutions at the edge of signal processing, machine/deep learning, and multimedia analysis. Research approaches to multimedia forensics have rapidly evolved in the last years, as a consequence of both technological advancements in media creation and distribution, and methodological advancements in signal processing and learning. One evident aspect is the disruptive diffusion of deep learning models for addressing tasks related to audio-visual data. As a consequence of the impressive performance boost they brought in different areas, deep architectures nowadays dominate in multimedia forensics research as well. Then, forensic methodologies need to be updated with respect to the constant evolution of acquisition devices and data formats. Therefore, algorithms are also designed with the goal of efficiently analyzing high-resolution data, possibly subject to advanced in-camera processing. In addition, there is an increasing need for detection technologies that are able to identify synthetically generated visual data, in response to the impressive advancements of generative models based on Artificial intelligence (AI) such as Generative Adversarial Networks (GANs). We are glad to introduce the accepted manuscripts to this Research Topic, which are well aligned with these cutting-edge research trends and are authored by highly recognized researchers.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74924706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0