
Latest publications in Frontiers in signal processing

Video fingerprinting: Past, present, and future
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-09-02 DOI: 10.3389/frsip.2022.984169
M. Allouche, M. Mitrea
The last decades have seen video production and consumption rise significantly: TV/cinematography, social networking, digital marketing, and video surveillance have incrementally and cumulatively turned video content into the preferred type of data to be exchanged, stored, and processed. Belonging to the video processing realm, video fingerprinting (also referred to as content-based copy detection or near-duplicate detection) groups together the research efforts devoted to identifying duplicated and/or replicated versions of a given video sequence (the query) in a reference video dataset. The present paper reports on a state-of-the-art study of the past and present of video fingerprinting, while attempting to identify trends for its development. First, the conceptual basis and evaluation frameworks are set. On this basis, the methodological approaches (situated at the crossroads of image processing, machine learning, and neural networks) can be structured and discussed. Finally, fingerprinting is confronted with the challenges raised by emerging video applications (e.g., unmanned vehicles or fake news) and with the constraints they set in terms of content traceability and computational complexity. The relationship with other content-tracking technologies (e.g., DLT, Distributed Ledger Technologies) is also presented and discussed.
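As a minimal illustration of the copy-detection task this survey covers (and not any specific method it reviews), the sketch below fingerprints grayscale frames with a coarse average hash and flags near-duplicates in a reference set by Hamming distance; the frame sizes, hash length, and distance threshold are arbitrary choices for the example.

```python
import numpy as np

def frame_fingerprint(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Coarse average-hash of a grayscale frame: block-average to size x size, threshold at the mean."""
    h, w = frame.shape
    blocks = frame[: h - h % size, : w - w % size]
    # Reshape into (block_row, row_in_block, block_col, col_in_block) and average each block.
    blocks = blocks.reshape(size, blocks.shape[0] // size, size, -1).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def find_near_duplicates(query, reference, max_dist=10):
    """Return indices of reference frames whose fingerprints are close to the query's."""
    q = frame_fingerprint(query)
    return [i for i, ref in enumerate(reference) if hamming(q, frame_fingerprint(ref)) <= max_dist]

# Toy example with synthetic frames standing in for decoded video frames.
rng = np.random.default_rng(0)
ref_frames = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(5)]
query = ref_frames[2] + rng.normal(0, 5, (64, 64))  # mildly distorted copy of frame 2
print(find_near_duplicates(query, ref_frames))
```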
Citations: 1
Blind visual quality assessment of light field images based on distortion maps
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-08-26 DOI: 10.3389/frsip.2022.815058
Sana Alamgeer, Mylène C. Q. Farias
Light Field (LF) cameras capture the spatial and angular information of a scene, generating high-dimensional data that brings several challenges to compression, transmission, and reconstruction algorithms. One research area that has been attracting a lot of attention is the design of Light Field image quality assessment (LF-IQA) methods. In this paper, we propose a No-Reference (NR) LF-IQA method that is based on reference-free distortion maps. With this goal, we first generate a synthetically distorted dataset of 2D images. Then, we compute SSIM distortion maps of these images and use these maps as ground-truth error maps. We train a GAN architecture using these SSIM distortion maps as quality labels. This trained model is used to generate reference-free distortion maps of sub-aperture images of LF contents. Finally, the quality prediction is obtained by performing the following steps: 1) perform a non-linear dimensionality reduction, with an isometric mapping of the generated distortion maps, to obtain the LFI feature vectors, and 2) perform a regression using a Random Forest Regressor (RFR) algorithm to obtain the LF quality estimates. Results show that the proposed method is robust and accurate, outperforming several state-of-the-art LF-IQA methods.
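A minimal sketch of the final prediction stage outlined above, assuming placeholder distortion-map features and quality scores: isometric-mapping (Isomap) dimensionality reduction followed by a Random Forest Regressor, mirroring steps 1) and 2). The feature dimension, neighbour count, and tree count are illustrative guesses, not the paper's settings.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# Placeholder features: one flattened distortion map per light-field image (LFI).
X = rng.random((120, 256))          # 120 LFIs, 256-dim distortion-map features
y = rng.uniform(1.0, 5.0, 120)      # placeholder subjective quality scores

# 1) Non-linear dimensionality reduction with an isometric mapping (Isomap).
embedding = Isomap(n_components=10, n_neighbors=8)
X_low = embedding.fit_transform(X)

# 2) Regress quality scores from the low-dimensional feature vectors.
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X_low[:100], y[:100])
predicted = rfr.predict(X_low[100:])
print(predicted[:5])
```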
Citations: 0
Spatial up-sampling of HRTF sets using generative adversarial networks: A pilot study
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-08-23 DOI: 10.3389/frsip.2022.904398
Pongsakorn Siripornpitak, Isaac Engel, Isaac Squires, Samuel J. Cooper, L. Picinali
Headphones-based spatial audio simulations rely on Head-Related Transfer Functions (HRTFs) in order to reconstruct the sound field at the entrance of the listener's ears. An HRTF is strongly dependent on the listener's specific anatomical structures, and it has been shown that virtual sounds recreated with someone else's HRTF result in worse localisation accuracy, as well as altering other subjective measures such as externalisation and realism. Acoustic measurement of the filtering effects generated by the ears, head, and torso has proven to be one of the most reliable ways to obtain a personalised HRTF. However, this requires a dedicated and expensive setup and is time-intensive. In order to simplify the measurement setup, thereby improving the scalability of the process, we are exploring strategies to reduce the number of acoustic measurements without degrading the spatial resolution of the HRTF. Traditionally, spatial up-sampling of HRTF sets is achieved through barycentric interpolation or by employing the spherical harmonics framework. However, such methods often perform poorly when the provided HRTF data is spatially very sparse. This work investigates the use of generative adversarial networks (GANs) to tackle the up-sampling problem, offering an initial insight into the suitability of this technique. Numerical evaluations based on spectral magnitude error and perceptual model outputs are presented for single spatial dimensions, considering sources positioned in only one of the three main planes: horizontal, median, or frontal. Results suggest that traditional HRTF interpolation methods perform better than the proposed GAN-based one when the distance between measurements is smaller than 90°, but for the sparsest conditions (i.e., one measurement every 120°–180°), the proposed approach outperforms the others.
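To make the traditional baseline concrete, here is a hedged sketch of barycentric interpolation of an HRTF magnitude response from three measured directions enclosing a target direction; the directions, the 128-bin responses, and the planar (azimuth, elevation) parameterisation are all invented for the example and do not come from the paper.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric weights of 2D point p (e.g. azimuth/elevation) w.r.t. triangle a, b, c."""
    t = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    w1, w2 = np.linalg.solve(t, np.asarray(p, dtype=float) - np.asarray(a, dtype=float))
    return np.array([1.0 - w1 - w2, w1, w2])

# Hypothetical measured HRTF magnitude responses (one per direction, 128 frequency bins).
rng = np.random.default_rng(1)
hrtf_a, hrtf_b, hrtf_c = (rng.random(128) for _ in range(3))
dir_a, dir_b, dir_c = (0.0, 0.0), (30.0, 0.0), (0.0, 30.0)   # (azimuth, elevation) in degrees

target_dir = (10.0, 10.0)
w = barycentric_weights(target_dir, dir_a, dir_b, dir_c)
hrtf_interp = w[0] * hrtf_a + w[1] * hrtf_b + w[2] * hrtf_c   # weighted sum of the three HRTFs
print(w, hrtf_interp[:4])
```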
Citations: 4
Rain Field Retrieval by Ground-Level Sensors of Various Types
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-08-17 DOI: 10.3389/frsip.2022.877336
H. Messer, A. Eshel, H. Habi, S. Sagiv, X. Zheng
Rain gauges (RGs) have been utilized as sensors for local rain monitoring since ancient Greece. The use of a network of RGs for 2D rain mapping is based on spatial interpolation which, while presenting good results in limited experimental areas, has limited scalability because of the unrealistic need to install and maintain a large number of sensors. Alternatively, commercial microwave links (CMLs), widely spread around the globe, have proven effective as near-ground opportunistic rain sensors. In this study, we examine 2D rain field mapping using CMLs and/or RGs from both a practical and a theoretical point of view, aiming to understand their inherent performance differences. We study sensor networks of either CMLs or RGs, as well as a mixed network of CMLs and RGs. We show that, with proper preprocessing, the rain field retrieval performance of the CML network is better than that of the RGs. However, depending on the characteristics of the rain field, this performance gain can be negligible, especially when the rain field is smooth (relative to the topology of the sensor network). In other words, for a given network, the advantage of rain retrieval using a network of CMLs is more significant when the rain field is spotty.
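A minimal sketch of the kind of spatial interpolation that RG-based 2D rain mapping relies on, here inverse-distance weighting of point measurements onto a grid; the sensor positions, rain rates, and grid extent are synthetic, and the actual retrieval methods compared in the paper are not reproduced here.

```python
import numpy as np

def idw_rain_map(sensor_xy, rain_mm_h, grid_x, grid_y, power=2.0, eps=1e-6):
    """Inverse-distance-weighted interpolation of point rain measurements onto a 2D grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    field = np.zeros_like(gx)
    weights_sum = np.zeros_like(gx)
    for (sx, sy), r in zip(sensor_xy, rain_mm_h):
        d = np.hypot(gx - sx, gy - sy) + eps     # distance from each grid cell to the sensor
        w = 1.0 / d ** power
        field += w * r
        weights_sum += w
    return field / weights_sum

# Synthetic sensors (stand-ins for rain gauges or CML midpoints) and rain rates in mm/h.
sensors = [(1.0, 1.0), (4.0, 2.0), (2.5, 4.5)]
rates = [0.0, 12.0, 3.0]
rain = idw_rain_map(sensors, rates, np.linspace(0, 5, 50), np.linspace(0, 5, 50))
print(rain.shape, rain.max())
```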
Citations: 3
Pseudo-doppler aided cancellation of self-interference in full-duplex communications
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-08-16 DOI: 10.3389/frsip.2022.965551
Dongsheng Zheng, Yuli Yang
In this work, a novel scheme is proposed to enhance self-interference (SI) cancellation in full-duplex communications. Unlike conventional SI cancellation schemes that rely on SI suppression, our proposed scheme exploits periodic antenna switching to generate a pseudo-Doppler effect, thus completely removing the SI at the fundamental frequency. In this way, the desired signal is readily obtained through a low-pass filter. For the purpose of performance evaluation, the SI cancellation capability is defined as the difference between the output signal-to-interference-plus-noise ratio (SINR) and the input SINR. Theoretical formulations and numerical results validate that our pseudo-Doppler aided scheme has higher SI cancellation capability than the conventional SI suppression schemes. Moreover, the impact of the SI suppression achieved by conventional schemes and the influence of antenna switching timing differences on the practical implementation of the proposed scheme are investigated, to further substantiate the validity of our pseudo-Doppler aided SI cancellation.
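A small worked example of the evaluation metric defined above: the SI cancellation capability as the difference between output and input SINR in dB. The signal, interference, and noise powers below are placeholders, not values from the paper.

```python
import numpy as np

def sinr_db(signal_power, interference_power, noise_power):
    """Signal-to-interference-plus-noise ratio in dB (linear power inputs)."""
    return 10.0 * np.log10(signal_power / (interference_power + noise_power))

# Placeholder link budget (linear powers, arbitrary units).
p_signal = 1.0
p_noise = 1e-3
p_si_in = 10.0          # self-interference power before cancellation
p_si_out = 1e-2         # residual self-interference after cancellation

sinr_in = sinr_db(p_signal, p_si_in, p_noise)
sinr_out = sinr_db(p_signal, p_si_out, p_noise)
cancellation_capability_db = sinr_out - sinr_in   # definition used in the abstract
print(f"input SINR {sinr_in:.1f} dB, output SINR {sinr_out:.1f} dB, "
      f"cancellation capability {cancellation_capability_db:.1f} dB")
```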
Citations: 1
Editorial: Women in signal processing
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-08-16 DOI: 10.3389/frsip.2022.977475
H. Messer
One of the turning points in my life came in the mid-1990s, during the signal processing community's major yearly conference, the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Women were always a minority at these meetings, and if one of them joined a chat at a social gathering, she was naturally assumed to be the wife of one of the men around. Being young and naïve then, I never saw it as an issue. However, at the 1995 meeting I decided to join, for the first time, a social event entitled "lunch for women in signal processing." I found there a small but very diverse group of about 50 women from all around the world, and when each introduced herself, I had a very strong emotional feeling of sisterhood. For the first time I felt at home in my professional community, and at that very moment I became active in the advancement of women in science and engineering, and in particular in my field, i.e., signal processing. An essential question concerns the number and the visibility of women in signal processing today. Such data is hard to trace, but fortunately, the IEEE keeps and publishes statistical records. These records show that while the overall share of women in the IEEE (including students) is still around 10%, in the Signal Processing Society it is a bit, but not much, better: about 2,300 out of 19,000 (~12%). However, Figure 1 shows a promising trend over the last decade: while the total number of women (non-students) in the IEEE Signal Processing Society has increased by 45%, the number of women in higher-level grades (senior member and fellow) has doubled. Moreover, women hold leadership positions in the IEEE Signal Processing Society, with the current president Athina P. Petropulu and 11 of its 23 board members being women.
Citations: 0
Perceptual evaluation of approaches for binaural reproduction of non-spherical microphone array signals
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-08-15 DOI: 10.3389/frsip.2022.883696
Tim Lübeck, Sebastià V. Amengual Garí, P. Calamia, D. Alon, Jeff Crukley, Z. Ben-Hur
Microphone arrays consisting of sensors mounted on the surface of a rigid, spherical scatterer are popular tools for the capture and binaural reproduction of spatial sound scenes. However, microphone arrays with a perfectly spherical body and uniformly distributed microphones are often impractical for the consumer sector, in which microphone arrays are generally mounted on mobile and wearable devices of arbitrary geometries. Therefore, the binaural reproduction of sound fields captured with arbitrarily shaped microphone arrays has become an important field of research. In this work, we present a comparison of methods for the binaural reproduction of sound fields captured with non-spherical microphone arrays. First, we evaluated equatorial microphone arrays (EMAs), where the microphones are distributed on an equatorial contour of a rigid, spherical scatterer. Second, we evaluated a microphone array with six microphones mounted on a pair of glasses. Using these two arrays, we conducted two listening experiments comparing four rendering methods, based on acoustic scenes captured in different rooms. The evaluation includes a microphone-based stereo approach (sAB stereo), a beamforming-based stereo approach (sXY stereo), beamforming-based binaural reproduction (BFBR), and BFBR with binaural signal matching (BSM). Additionally, the perceptual evaluation included binaural Ambisonics renderings, which were based on measurements with spherical microphone arrays. In the EMA experiment we included a fourth-order Ambisonics rendering, while in the glasses array experiment we included a second-order Ambisonics rendering. In both listening experiments, in which participants compared all approaches with a dummy head recording, we applied non-head-tracked binaural synthesis, with sound sources only in the horizontal plane. The perceived differences were rated separately for the attributes timbre and spaciousness. Results suggest that most approaches perform similarly to the Ambisonics rendering. Overall, BSM and microphone-based stereo were rated the best for EMAs, and BFBR and microphone-based stereo for the glasses array.
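As a rough illustration of the beamforming building block behind BFBR-style rendering (not the paper's actual pipeline or filters), the sketch below applies frequency-domain delay-and-sum steering to a six-microphone array; the glasses-like microphone coordinates, sampling rate, and signals are made up for the example.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_xyz, look_dir, fs, c=343.0):
    """Steer a microphone array toward look_dir (unit vector) with delay-and-sum beamforming."""
    delays = mic_xyz @ look_dir / c                     # per-microphone time delays in seconds
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    # Apply the steering phase shifts in the frequency domain, then average across microphones.
    steered = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(steered.mean(axis=0), n=n)

# Six hypothetical microphones on a glasses-like frame (coordinates in metres).
mic_xyz = np.array([[-0.07, 0.02, 0.0], [-0.05, 0.05, 0.0], [-0.02, 0.07, 0.0],
                    [0.02, 0.07, 0.0], [0.05, 0.05, 0.0], [0.07, 0.02, 0.0]])
fs = 16000
rng = np.random.default_rng(3)
signals = rng.normal(size=(6, fs))                      # 1 s of placeholder capture per mic
front = np.array([0.0, 1.0, 0.0])                       # look toward the front of the wearer
out = delay_and_sum(signals, mic_xyz, front, fs)
print(out.shape)
```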
Citations: 1
An investigation of the multi-dimensional (1D vs. 2D vs. 3D) analyses of EEG signals using traditional methods and deep learning-based methods
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-07-25 DOI: 10.3389/frsip.2022.936790
Darshil Shah, G. Gopan K, N. Sinha
Electroencephalographic (EEG) signals are electrical signals generated in the brain by cognitive activities. They are non-invasive and are widely used to assess neurodegenerative conditions, mental load, and sleep patterns. In this work, we explore the utility of representing the inherently one-dimensional time-series in different dimensions, such as 1D feature vectors, 2D feature maps, and 3D videos. The proposed methodology is applied to four diverse datasets: 1) EEG baseline, 2) mental arithmetic, 3) Parkinson's disease, and 4) an emotion dataset. For the 1D analysis, popular 1D features hand-crafted from the time-series are utilized for classification. This performance is compared against the data-driven approach of using the raw time-series as the input to the deep learning framework. To assess the efficacy of the 2D representation, 2D feature maps that utilize a combination of a Feature Pyramid Network (FPN) and Atrous Spatial Pyramid Pooling (ASPP) are proposed. This is compared against an approach utilizing a composite feature set consisting of 2D feature maps and 1D features. However, these approaches do not exploit spatial, spectral, and temporal characteristics simultaneously. To address this, 3D EEG videos are created by stacking, in the temporal domain, the spectral feature maps obtained from each sub-band per time frame. The EEG videos are the input to a combination of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) for classification. Performances obtained using the proposed methodologies have surpassed the state-of-the-art for three of the classification scenarios considered in this work, namely, EEG baselines, mental arithmetic, and Parkinson's disease. The video analysis resulted in peak mean accuracies of 92.5% and 98.81% for the EEG baseline and EEG mental arithmetic, respectively. On the other hand, for distinguishing Parkinson's disease from controls, a peak mean accuracy of 88.51% is achieved using traditional methods on 1D feature vectors. This illustrates that 3D and 2D feature representations are effective for EEG data where topographical changes in brain activation regions are observed. However, in scenarios where topographical changes are not consistent across subjects of the same class, these methodologies fail. On the other hand, the 1D analysis proves to be significantly effective in the case involving changes in the overall activation of the brain due to varying degrees of deterioration.
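A hedged sketch of the 3D branch described above: per-frame convolutional features over stacked sub-band maps, followed by an LSTM over time and a classification head. All layer sizes, the number of sub-bands, and the frame resolution are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class EEGVideoClassifier(nn.Module):
    """Illustrative CNN + LSTM over stacked spectral feature maps ("EEG videos")."""

    def __init__(self, n_classes=2, n_bands=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # -> (batch*time, 32, 1, 1) per frame
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, video):                      # video: (batch, time, bands, H, W)
        b, t, c, h, w = video.shape
        frames = video.reshape(b * t, c, h, w)
        feats = self.cnn(frames).reshape(b, t, 32)  # per-frame spatial/spectral features
        out, _ = self.lstm(feats)                   # temporal modelling across frames
        return self.head(out[:, -1])                # classify from the last time step

model = EEGVideoClassifier()
dummy = torch.randn(4, 10, 5, 32, 32)              # 4 clips, 10 frames, 5 sub-bands, 32x32 maps
print(model(dummy).shape)                          # -> torch.Size([4, 2])
```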
Citations: 5
Editorial: Video Content Production and Delivery Over IP Networks and Distributed Computing Facilities
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-07-15 DOI: 10.3389/frsip.2022.975838
M. Naccari, Fan Zhang, Saverio G. Blasi, T. Guionnet
{"title":"Editorial: Video Content Production and Delivery Over IP Networks and Distributed Computing Facilities","authors":"M. Naccari, Fan Zhang, Saverio G. Blasi, T. Guionnet","doi":"10.3389/frsip.2022.975838","DOIUrl":"https://doi.org/10.3389/frsip.2022.975838","url":null,"abstract":"","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83367049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Eyes-Based Siamese Neural Network for the Detection of GAN-Generated Face Images
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-07-08 DOI: 10.3389/frsip.2022.918725
Jun Wang, B. Tondi, M. Barni
Generative Adversarial Network (GAN) models are nowadays able to generate synthetic images which are visually indistinguishable from real ones, thus raising serious concerns about the spread of fake news and the need to develop tools that distinguish fake from real images in order to preserve the trustworthiness of digital images. The most powerful current detection methods are based on Deep Learning (DL) technology. While these methods achieve excellent performance when tested under conditions similar to those considered for training, they often suffer from a lack of robustness and generalization ability, as they fail to detect fake images generated by "unseen" GAN models. One possibility to overcome this problem is to develop tools that rely on the semantic attributes of the image. In this paper, we propose a semantic-based method for distinguishing GAN-generated faces from real ones, which relies on the analysis of inter-eye symmetries and inconsistencies. The method resorts to the superior capability of similarity learning to extract representative and robust features. More specifically, a Siamese Neural Network (SNN) is utilized to extract high-level features characterizing the inter-eye similarity, which can be used to discriminate between real and synthetic pairs of eyes. We carried out extensive experiments to assess the performance of the proposed method under both matched and mismatched conditions with respect to the GAN type used to generate the synthetic images, as well as its robustness in the presence of post-processing. The results we obtained are comparable, and in some cases superior, to those achieved by the best-performing state-of-the-art method, which leverages an analysis of the entire face image.
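A minimal sketch in the spirit of the eyes-based approach described above: a shared-weight (Siamese) embedding applied to left/right eye crops, with a pairwise distance as a stand-in inter-eye similarity score. The network sizes, crop resolution, and the use of a plain distance (rather than the paper's trained decision stage) are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EyeEmbedding(nn.Module):
    """Shared-weight branch of a Siamese network for eye patches (illustrative sizes only)."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def eye_similarity(model, left_eye, right_eye):
    """Distance between the two eye embeddings; larger distances would suggest the kind of
    inter-eye inconsistency that the paper associates with GAN-generated faces."""
    return F.pairwise_distance(model(left_eye), model(right_eye))

model = EyeEmbedding()
left = torch.randn(8, 3, 32, 32)    # batch of hypothetical left-eye crops
right = torch.randn(8, 3, 32, 32)   # corresponding right-eye crops
print(eye_similarity(model, left, right).shape)   # -> torch.Size([8])
```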
Citations: 4