
Latest publications in Frontiers in Signal Processing

Deep Reinforcement Learning-Based Optimization for RIS-Based UAV-NOMA Downlink Networks (Invited Paper)
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-07-07 DOI: 10.3389/frsip.2022.915567
Shiyu Jiao, X. Xie, Zhiguo Ding
This study investigates the application of the deep deterministic policy gradient (DDPG) algorithm to reconfigurable intelligent surface (RIS)-based unmanned aerial vehicle (UAV)-assisted non-orthogonal multiple access (NOMA) downlink networks. The deployment of a UAV equipped with a RIS is important, as the UAV significantly increases the flexibility of the RIS, especially for users who have no line-of-sight (LoS) path to the base station (BS). Therefore, the aim of this study is to maximize the sum-rate by jointly optimizing the power allocation of the BS, the phase shifting of the RIS, and the horizontal position of the UAV. Because the formulated problem is non-convex, the DDPG algorithm is utilized to solve it. Computer simulation results are provided to show the superior performance of the proposed DDPG-based algorithm.
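The sum-rate objective being maximized can be sketched with a toy two-user model. The following is a minimal numpy sketch under assumed channel values, not the paper's system model or its DDPG training loop; `effective_gain`, `noma_sum_rate`, and all constants are hypothetical illustrations of how the RIS phase shifts enter the objective.

```python
import numpy as np

# Hypothetical two-user NOMA link through a RIS; values are illustrative
# assumptions, not the paper's system model.
def effective_gain(theta, h_br, h_ru):
    """|sum_n h_br[n] * e^{j*theta[n]} * h_ru[n]|^2 through the RIS."""
    return abs(np.sum(h_br * np.exp(1j * theta) * h_ru)) ** 2

def noma_sum_rate(p, theta, h_br, h_ru, noise=1e-3):
    """Two-user downlink NOMA: user 0 (weak) treats user 1's signal as
    interference; user 1 (strong) cancels user 0's signal via SIC."""
    g0 = effective_gain(theta, h_br, h_ru[0])
    g1 = effective_gain(theta, h_br, h_ru[1])
    r0 = np.log2(1 + p[0] * g0 / (p[1] * g0 + noise))
    r1 = np.log2(1 + p[1] * g1 / noise)
    return r0 + r1

rng = np.random.default_rng(0)
n = 16  # number of RIS elements (arbitrary)
h_br = rng.standard_normal(n) + 1j * rng.standard_normal(n)      # BS -> RIS
h_ru = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))  # RIS -> users
# co-phasing the RIS toward user 0 maximizes that user's effective gain
aligned = -np.angle(h_br * h_ru[0])
random_theta = rng.uniform(0, 2 * np.pi, n)
```

A DDPG agent would output the continuous action `(p, theta)` and be rewarded with `noma_sum_rate`; here the co-phased `aligned` vector stands in for a learned phase configuration.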
Citations: 2
Bayesian Nonparametric Learning and Knowledge Transfer for Object Tracking Under Unknown Time-Varying Conditions
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-07-06 DOI: 10.3389/frsip.2022.868638
Omar Alotaibi, A. Papandreou-Suppappola
We consider the problem of a primary source tracking a moving object under time-varying and unknown noise conditions. We propose two methods that integrate sequential Bayesian filtering with transfer learning to improve tracking performance. Within the transfer learning framework, multiple learning sources are assumed to perform the same tracking task as the primary source but under different noise conditions. The first method uses Gaussian mixtures to model the measurement distribution, assuming that the measurement noise intensity at the learning sources is fixed and known a priori and that the learning and primary sources are simultaneously tracking the same object. The second method uses Dirichlet process mixtures to model the noise parameters, assuming that the learning-source measurement noise intensity is unknown. As we demonstrate, the use of Bayesian nonparametric learning does not require all sources to track the same object. The learned information can be stored and transferred to the primary source when needed. Using simulations for both high and low signal-to-noise-ratio conditions, we demonstrate that primary tracking performance improves as the number of learning sources increases.
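The transfer idea can be illustrated with a deliberately simple toy: learning sources that saw the same kind of measurements pool their residuals, and the primary tracker reuses the pooled estimate of the unknown measurement-noise variance. This stands in for, but is much simpler than, the paper's Gaussian-mixture and Dirichlet-process constructions; the scalar random-walk model, `kalman_1d`, and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_R = 2.0  # measurement-noise variance, unknown to the primary source

# each learning source records residuals under the same noise conditions
residuals = [rng.normal(0.0, np.sqrt(true_R), size=250) for _ in range(4)]
pooled = np.concatenate(residuals)
R_hat = pooled.var()  # transferred noise estimate for the primary source

def kalman_1d(zs, R, q=1e-2):
    """Scalar Kalman filter with a random-walk state; returns estimates."""
    x, P, out = 0.0, 1.0, []
    for z in zs:
        P += q                   # predict: process noise inflates covariance
        K = P / (P + R)          # Kalman gain
        x += K * (z - x)         # measurement update
        P *= (1 - K)
        out.append(x)
    return np.array(out)

# primary source tracks a stationary object at 5.0 using the transferred R_hat
zs = 5.0 + rng.normal(0.0, np.sqrt(true_R), size=200)
est = kalman_1d(zs, R_hat)
```

More learning sources mean more pooled residuals and a tighter `R_hat`, mirroring the reported improvement as the number of sources grows.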
Citations: 0
Adaptive Discrete Motion Control for Mobile Relay Networks
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-07-06 DOI: 10.3389/frsip.2022.867388
Spilios Evmorfos, Dionysios S. Kalogerias, A. Petropulu
We consider the problem of joint beamforming and discrete motion control for mobile relay networks in dynamic channel environments. We assume a single source-destination communication pair. We adopt a general time-slotted approach in which, during each slot, every relay implements optimal beamforming and estimates its optimal position for the subsequent slot. We assume that the relays move in a 2D compact square region that has been discretized into a fine grid. The goal is to derive discrete motion policies for the relays, in an adaptive fashion, so that they accommodate the dynamic changes of the channel and, therefore, maximize the Signal-to-Interference-plus-Noise Ratio (SINR) at the destination. We present two different approaches for constructing the motion policies. The first approach assumes that the channel evolves as a Gaussian process and exhibits correlation in both time and space. A stochastic programming method is proposed for estimating the relay positions (and the beamforming weights) based on causal information. The stochastic program is equivalent to a set of simple subproblems, and the exact evaluation of each subproblem's objective is impossible. To tackle this, we propose a surrogate of the original subproblem based on the Sample Average Approximation method. We denote this approach as model-based because it assumes that the underlying correlation structure of the channels is completely known. The second approach is denoted as model-free, because it makes no assumptions about the channel statistics. For this approach, we cast the problem of discrete relay motion control in a dynamic programming framework. Finally, we employ deep Q-learning to derive the motion policies. We provide implementation details that are crucial for achieving good performance in terms of the collective SINR at the destination.
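The model-free branch can be illustrated with a tabular Q-learning toy standing in for the paper's deep Q-learning: a relay on a five-cell line earns an SINR-like reward that peaks at one end, and the learned greedy policy moves toward that cell. Grid size, rewards, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)
N_CELLS = 5
ACTIONS = (-1, +1)                      # move left / move right
reward = [0.1, 0.2, 0.4, 0.7, 1.0]      # pseudo-SINR per grid cell
Q = [[0.0, 0.0] for _ in range(N_CELLS)]

alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration
for _ in range(2000):                   # episodes from random start cells
    s = random.randrange(N_CELLS)
    for _ in range(20):
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_CELLS - 1)  # clip to the grid
        # standard Q-learning temporal-difference update
        Q[s][a] += alpha * (reward[s2] + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [q.index(max(q)) for q in Q]   # greedy action per cell (1 = right)
```

In the paper the tabular `Q` is replaced by a deep network and the reward by the measured SINR, but the update rule is the same temporal-difference step.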
Citations: 1
How Scalable Are Clade-Specific Marker K-Mer Based Hash Methods for Metagenomic Taxonomic Classification?
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-07-05 DOI: 10.3389/frsip.2022.842513
Melissa M. Gray, Zhengqiao Zhao, G. Rosen
Efficiently and accurately identifying which microbes are present in a biological sample is important to medicine and biology. For example, in medicine, microbe identification allows doctors to better diagnose diseases. Two questions are essential to metagenomic analysis (the analysis of a random sampling of DNA in a patient/environment sample): how to accurately identify the microbes in a sample, and how to efficiently update the taxonomic classifier as new microbe genomes are sequenced and added to the reference database. To investigate how classifiers change as they train on more knowledge, we built sub-databases composed of the genomes that existed in past years, serving as “snapshots in time” (1999–2020) of the NCBI reference genome database. We evaluated two classification methods, Kraken 2 and CLARK, with these snapshots, using a real experimental metagenomic sample from a human gut. This allowed us to measure how much of a real sample could be confidently classified with each method as the database grows. Despite not knowing the ground truth, we could measure the concordance between methods, and between database years within each method, using the Bray-Curtis distance. In addition, we recorded the training time of the classifiers for each snapshot. For Kraken 2, we observed that as more genomes were added, more microbes from the sample were classified. CLARK showed a similar trend, but in the final year this trend reversed, with more microbial variation and fewer unique k-mers. Also, while the two classifiers train in different ways, both scale roughly linearly in time, with Kraken 2 having a significantly lower slope as the database grows.
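The Bray-Curtis dissimilarity used to compare taxonomic profiles between classifiers or database snapshots is straightforward to compute; the two example abundance profiles below are made up for illustration.

```python
# Bray-Curtis dissimilarity between two abundance profiles:
# 1 - 2*sum(min)/sum(total); 0 = identical profiles, 1 = disjoint.
def bray_curtis(u, v):
    shared = sum(min(a, b) for a, b in zip(u, v))
    total = sum(u) + sum(v)
    return 1.0 - 2.0 * shared / total

# hypothetical read counts per taxon from two database snapshots
profile_a = [120, 30, 0, 50]
profile_b = [100, 40, 10, 50]
d = bray_curtis(profile_a, profile_b)
```

Identical profiles give 0, disjoint profiles give 1, and the example pair above lands at 0.1, indicating high concordance.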
Citations: 0
Robust Identification of the QRS-Complexes in Electrocardiogram Signals Using Ramanujan Filter Bank-Based Periodicity Estimation Technique
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-06-29 DOI: 10.3389/frsip.2022.921973
S. Mukhopadhyay, S. Krishnan
Arguably, the first computerized and automated electrocardiogram (ECG) signal processing algorithm was published in 1961, and countless algorithms have since been developed for the detection of the QRS-complexes in ECG signals. Both digital signal processing and artificial intelligence-based techniques have been tested rigorously in many applications to achieve highly accurate detection of the QRS-complexes in ECG signals. However, since ECG signals are quasi-periodic in nature, a periodicity-analysis-based technique is an apt approach for the detection of their QRS-complexes. A Ramanujan filter bank (RFB)-based periodicity estimation technique is used in this research to identify the QRS-complexes in ECG signals. An added advantage of the proposed algorithm is that, at the instant a QRS-complex is detected, the algorithm can efficiently indicate whether it is a normal, a premature ventricular contraction, or an atrial premature contraction QRS-complex. First, the ECG signal is preprocessed using Butterworth lowpass and highpass filters, followed by amplitude normalization. The normalized signal is then passed through a set of Ramanujan filters. The filtered signals from all the filters in the bank are then summed to obtain a holistic time-domain representation of the ECG signal. Next, a Gaussian-weighted moving average filter is used to smooth the time-period-estimation data. Finally, the QRS-complexes are detected from the smoothed data using a peak-detection-based technique, and the abnormal ones are identified using a period-thresholding-based technique. The performance of the proposed algorithm is tested on nine ECG databases (totaling a duration of 48.91 days) and is found to be highly competitive with state-of-the-art algorithms. To the best of our knowledge, such an RFB-based QRS-complex detection algorithm is reported here for the first time. The proposed algorithm can be adapted for the detection of other ECG waves, and also for the processing of other biomedical signals that exhibit a periodic or quasi-periodic nature.
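The last two stages of the pipeline above, peak detection on a smoothed trace followed by period thresholding to flag abnormal beats, can be sketched with the standard library alone. The synthetic trace and thresholds are illustrative assumptions; this is not the Ramanujan-filter-bank output itself.

```python
def detect_peaks(x, thresh):
    """Indices that exceed thresh and are local maxima."""
    return [i for i in range(1, len(x) - 1)
            if x[i] > thresh and x[i] > x[i - 1] and x[i] >= x[i + 1]]

def flag_premature(peaks, ratio=0.8):
    """Flag beats whose preceding interval is short versus the median
    inter-beat interval (a simple period-thresholding rule)."""
    rr = [b - a for a, b in zip(peaks, peaks[1:])]
    med = sorted(rr)[len(rr) // 2]
    return [b for iv, b in zip(rr, peaks[1:]) if iv < ratio * med]

# synthetic trace: regular beats every 50 samples, one premature beat at 230
trace = [0.0] * 400
for p in (50, 100, 150, 200, 230, 280, 330):
    trace[p] = 1.0
```

On this trace the detector recovers all seven beats, and the interval rule singles out the beat at sample 230, whose preceding interval (30 samples) falls below 80% of the median interval.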
Citations: 1
A Very Fast Copy-Move Forgery Detection Method for 4K Ultra HD Images
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-06-24 DOI: 10.3389/frsip.2022.906304
Laura Bertojo, C. Néraud, W. Puech
Copy-move forgery detection is a challenging task in digital image forensics. Keypoint-based detection methods have proven to be very efficient at detecting copied-moved forged areas in images. Although these methods are effective, the keypoint matching phase has a high complexity and takes a long time to detect forgeries, especially for very large images such as 4K Ultra HD images. In this paper, we propose a new keypoint-based method with a new fast feature matching algorithm, based on the generalized two nearest-neighbor (g2NN) algorithm, which allows us to greatly reduce the complexity and thus the computation time. First, we extract keypoints from the input image. After ordering them, we perform a match search restricted to a window around the current keypoint. To detect the keypoints, we propose not to use a threshold, which allows low-intensity keypoint matching and very efficient detection of copy-move forgery, even in very uniform or weakly textured areas. We then apply a new matching algorithm, and finally we compute the clusters with the DBSCAN algorithm. Our experimental results show that the proposed method can detect copied-moved areas in forged images very accurately and with a very short computation time, which allows for the fast detection of forgeries in 4K images.
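The 2NN ratio test that g2NN generalizes can be sketched in numpy: a keypoint matches another when its nearest descriptor is much closer than its second nearest, which is exactly what happens when a patch has been copied elsewhere in the image. The descriptors below are synthetic, and the real method additionally orders keypoints and restricts the search to a window to cut complexity.

```python
import numpy as np

def two_nn_matches(desc, ratio=0.5):
    """Return (i, j) index pairs passing the nearest/second-nearest
    distance-ratio test over a descriptor matrix of shape (n, d)."""
    d = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a keypoint cannot match itself
    pairs = []
    for i in range(len(desc)):
        order = np.argsort(d[i])
        if d[i, order[0]] < ratio * d[i, order[1]]:
            pairs.append((i, int(order[0])))
    return pairs

rng = np.random.default_rng(2)
desc = rng.standard_normal((8, 16))      # 8 hypothetical 16-D descriptors
desc[5] = desc[1] + 0.01 * rng.standard_normal(16)  # a copied-moved patch
matches = two_nn_matches(desc)
```

A clustering step (DBSCAN in the paper) would then group the matched keypoint coordinates into the duplicated regions.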
Citations: 1
Prediction of Treatment Response in Triple Negative Breast Cancer From Whole Slide Images
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-06-22 DOI: 10.3389/frsip.2022.851809
Peter Naylor, Tristan Lazard, G. Bataillon, M. Laé, A. Vincent-Salomon, A. Hamy, F. Reyal, Thomas Walter
The automatic analysis of stained histological sections is becoming increasingly popular. Deep Learning is today the method of choice for the computational analysis of such data and has shown spectacular results on large datasets for a wide variety of cancer types and prediction tasks. On the other hand, many scientific questions relate to small, highly specific cohorts. Such cohorts pose serious challenges for Deep Learning, which is typically trained on large datasets. In this article, we propose a modification of the standard nested cross-validation procedure for hyperparameter tuning and model selection, dedicated to the analysis of small cohorts. We also propose a new architecture for the particularly challenging task of treatment prediction, and we apply this workflow to the prediction of response to neoadjuvant chemotherapy for Triple Negative Breast Cancer.
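The standard nested cross-validation that the authors modify pairs an inner loop, which picks a hyperparameter, with an outer loop, which scores that choice on held-out folds. The following stdlib-only sketch uses a hypothetical one-dimensional ridge model and made-up data; it illustrates the standard procedure, not the paper's small-cohort variant.

```python
def ridge_fit(pts, lam):
    """Closed-form 1-D ridge slope: sum(xy) / (sum(xx) + lam)."""
    sxy = sum(x * y for x, y in pts)
    sxx = sum(x * x for x, _ in pts)
    return sxy / (sxx + lam)

def mse(w, pts):
    return sum((y - w * x) ** 2 for x, y in pts) / len(pts)

def nested_cv(data, lams, k=4):
    folds = [data[i::k] for i in range(k)]
    outer_scores, picks = [], []
    for i, test in enumerate(folds):
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        inner = [train[j::(k - 1)] for j in range(k - 1)]  # inner folds

        def inner_score(lam):
            # leave-one-inner-fold-out validation error for this lambda
            total = 0.0
            for j in range(k - 1):
                fit_pts = [p for j2, f in enumerate(inner) if j2 != j for p in f]
                total += mse(ridge_fit(fit_pts, lam), inner[j])
            return total

        lam = min(lams, key=inner_score)          # inner loop picks lambda
        picks.append(lam)
        outer_scores.append(mse(ridge_fit(train, lam), test))  # outer score
    return picks, outer_scores

# made-up near-linear data with small deterministic noise
data = [(x, 2.0 * x + 0.1 * ((x * 7) % 3 - 1)) for x in range(1, 21)]
picks, scores = nested_cv(data, lams=[0.0, 0.1, 1.0, 10.0])
```

The outer scores are what gets reported, so the hyperparameter is never selected on the data used to evaluate it, which matters most when cohorts are small.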
Citations: 3
Automated Discrimination of Cough in Audio Recordings: A Scoping Review
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-06-03 DOI: 10.3389/frsip.2022.759684
P. Sharan
The COVID-19 virus has irrevocably changed the world since 2020, and its incredible infectivity and severity have sent a majority of countries into lockdown. The virus's incubation period can reach up to 14 days, enabling asymptomatic hosts to transmit the virus to many others during that period without realizing it, thus making containment difficult. Without actively getting tested each day, which is logistically impractical, it would be very difficult for a person to know whether they had the virus during the incubation period. The objective of this scoping review is to compile the different tools used to identify coughs and to ascertain how artificial intelligence may be used to discriminate one type of cough from another. A systematic search was performed on Google Scholar, PubMed, and MIT library search engines to identify papers relevant to cough detection, discrimination, and epidemiology. A total of 204 papers have been compiled and reviewed, and two datasets have been discussed. Cough recording datasets such as ESC-50 and the FSDKaggle 2018 and 2019 datasets can be used to train neural networks for identifying coughs. For cough discrimination, classifiers such as k-NN, feed-forward neural networks, and random forests are used, as well as support vector machines and naive Bayesian classifiers. Some methods propose hybrids. While there are many proposed ideas for cough discrimination, the method best suited for detecting COVID-19 coughs within this urgent time frame is not yet known. The main contribution of this review is to compile information on what has been researched on machine learning algorithms and their effectiveness in diagnosing COVID-19, as well as to highlight the areas of debate and future areas for research. This review will aid future researchers in taking the best course of action for building a machine learning algorithm to discriminate COVID-19-related coughs with great accuracy and accessibility.
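Of the classifiers surveyed, k-NN is the simplest to sketch. The toy below votes among the nearest neighbors of a query point in a made-up two-dimensional feature space (a stand-in for summarized acoustic features such as MFCCs); the features and labels are illustrative assumptions, not data from any reviewed study.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); majority vote of k nearest."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# hypothetical 2-D acoustic features for two sound classes
train = [((0.20, 0.10), "cough"), ((0.30, 0.20), "cough"), ((0.25, 0.15), "cough"),
         ((0.80, 0.90), "speech"), ((0.90, 0.80), "speech"), ((0.85, 0.95), "speech")]
```

Real systems replace the 2-D points with high-dimensional spectral features extracted from recordings, but the voting rule is unchanged.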
{"title":"Automated Discrimination of Cough in Audio Recordings: A Scoping Review","authors":"P. Sharan","doi":"10.3389/frsip.2022.759684","DOIUrl":"https://doi.org/10.3389/frsip.2022.759684","url":null,"abstract":"The COVID-19 virus has irrevocably changed the world since 2020, and its incredible infectivity and severity have sent a majority of countries into lockdown. The virus’s incubation period can reach up to 14 days, enabling asymptomatic hosts to transmit the virus to many others in that period without realizing it, thus making containment difficult. Without actively getting tested each day, which is logistically improbable, it would be very difficult for one to know if they had the virus during the incubation period. The objective of this paper’s systematic review is to compile the different tools used to identify coughs and ascertain how artificial intelligence may be used to discriminate a cough from another type of cough. A systematic search was performed on Google Scholar, PubMed, and MIT library search engines to identify papers relevant to cough detection, discrimination, and epidemiology. A total of 204 papers have been compiled and reviewed and two datasets have been discussed. Cough recording datasets such as the ESC-50 and the FSDKaggle 2018 and 2019 datasets can be used for neural networking and identifying coughs. For cough discrimination techniques, neural networks such as k-NN, Feed Forward Neural Network, and Random Forests are used, as well as Support Vector Machine and naive Bayesian classifiers. Some methods propose hybrids. While there are many proposed ideas for cough discrimination, the method best suited for detecting COVID-19 coughs within this urgent time frame is not known. The main contribution of this review is to compile information on what has been researched on machine learning algorithms and its effectiveness in diagnosing COVID-19, as well as highlight the areas of debate and future areas for research. This review will aid future researchers in taking the best course of action for building a machine learning algorithm to discriminate COVID-19 related coughs with great accuracy and accessibility.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82193573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Emerging Immersive Communication Systems: Overview, Taxonomy, and Good Practices for QoE Assessment
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-05-12 DOI: 10.3389/frsip.2022.917684
P. Pérez, E. González-Sosa, Jes'us Guti'errez, Narciso García
Several technological and scientific advances have been achieved recently in the fields of immersive systems (e.g., 360-degree/multiview video systems, augmented/mixed/virtual reality systems, immersive audio-haptic systems, etc.), which offer new possibilities for applications and services in different communication domains, such as entertainment, virtual conferencing, working meetings, social relations, healthcare, and industry. Users of these immersive technologies can explore and experience the stimuli in a more interactive and personalized way than with previous technologies (e.g., 2D video). Thus, considering the new technological challenges related to these systems and the new perceptual dimensions and interaction behaviors involved, a deep understanding of the users’ Quality of Experience (QoE) is required to satisfy their demands and expectations. In this sense, it is essential to foster research on evaluating the QoE of immersive communication systems, since this will provide useful outcomes to optimize them and to identify the factors that can degrade the user experience. With this aim, subjective tests are usually performed following standard methodologies (e.g., ITU recommendations), which are designed for specific technologies and services. Although numerous user studies have already been published, there are no recommendations or standards that define common testing methodologies to be applied to evaluate immersive communication systems, such as those developed for images and video. Taking this into account, a revision of the QoE evaluation methods designed for previous technologies is required to develop robust and reliable methodologies for immersive communication systems. Thus, the objective of this paper is to provide an overview of existing immersive communication systems and related user studies, which can help in the definition of basic guidelines and testing methodologies to be used when performing user tests of immersive communication systems, such as 360-degree video-based telepresence, avatar-based social VR, cooperative AR, etc.
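As a minimal illustration of the subjective-testing practice discussed above, the sketch below computes a Mean Opinion Score (MOS) and an approximate 95% confidence interval from per-subject ratings, in the style commonly used in ITU-recommendation-based quality assessments. The ratings are hypothetical, and the normal-approximation interval is one of several options (small panels often use a t-distribution instead).

```python
import math
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score and approximate 95% confidence-interval
    half-width for one test condition, from ratings on a 1-5 scale."""
    n = len(ratings)
    mos = statistics.mean(ratings)
    sd = statistics.stdev(ratings)       # sample standard deviation
    half_width = z * sd / math.sqrt(n)   # normal approximation
    return mos, half_width

# Hypothetical ratings from 8 subjects for one immersive-video condition.
scores = [4, 5, 4, 3, 4, 5, 4, 4]
mos, ci = mos_with_ci(scores)
print(f"MOS = {mos:.2f} ± {ci:.2f}")
```

Reporting the interval alongside the MOS makes per-condition comparisons meaningful, which is one reason standardized methodologies matter for the immersive systems surveyed here.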
{"title":"Emerging Immersive Communication Systems: Overview, Taxonomy, and Good Practices for QoE Assessment","authors":"P. Pérez, E. González-Sosa, Jes'us Guti'errez, Narciso García","doi":"10.3389/frsip.2022.917684","DOIUrl":"https://doi.org/10.3389/frsip.2022.917684","url":null,"abstract":"Several technological and scientific advances have been achieved recently in the fields of immersive systems (e.g., 360-degree/multiview video systems, augmented/mixed/virtual reality systems, immersive audio-haptic systems, etc.), which are offering new possibilities to applications and services in different communication domains, such as entertainment, virtual conferencing, working meetings, social relations, healthcare, and industry. Users of these immersive technologies can explore and experience the stimuli in a more interactive and personalized way than previous technologies (e.g., 2D video). Thus, considering the new technological challenges related to these systems and the new perceptual dimensions and interaction behaviors involved, a deep understanding of the users’ Quality of Experience (QoE) is required to satisfy their demands and expectations. In this sense, it is essential to foster the research on evaluating the QoE of immersive communication systems, since this will provide useful outcomes to optimize them and to identify the factors that can deteriorate the user experience. With this aim, subjective tests are usually performed following standard methodologies (e.g., ITU recommendations), which are designed for specific technologies and services. Although numerous user studies have been already published, there are no recommendations or standards that define common testing methodologies to be applied to evaluate immersive communication systems, such as those developed for images and video. Taking this into account, a revision of the QoE evaluation methods designed for previous technologies is required to develop robust and reliable methodologies for immersive communication systems. Thus, the objective of this paper is to provide an overview of existing immersive communication systems and related user studies, which can help on the definition of basic guidelines and testing methodologies to be used when performing user tests of immersive communication systems, such as 360-degree video-based telepresence, avatar-based social VR, cooperative AR, etc.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79727027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Att-TasNet: Attending to Encodings in Time-Domain Audio Speech Separation of Noisy, Reverberant Speech Mixtures
Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2022-05-11 DOI: 10.3389/frsip.2022.856968
W. Ravenscroft, Stefan Goetze, Thomas Hain
Separation of speech mixtures in noisy and reverberant environments remains a challenging task for state-of-the-art speech separation systems. Time-domain audio speech separation networks (TasNets) are among the most commonly used network architectures for this task. TasNet models have demonstrated strong performance on typical speech separation baselines where speech is not contaminated with noise. When additive or convolutive noise is present, the performance of speech separation degrades significantly. TasNets are typically constructed from an encoder network, a mask estimation network, and a decoder network. The design of these networks places the majority of the onus for enhancing the signal on the mask estimation network when the model is used without any pre-processing of the input data or post-processing of the separation network’s output. Use of multihead attention (MHA) is proposed in this work as an additional layer in the encoder and decoder to help the separation network attend to encoded features that are relevant to the target speakers and, conversely, suppress noisy disturbances in the encoded features. As shown in this work, incorporating MHA mechanisms into the encoder network in particular leads to a consistent performance improvement across numerous quality and intelligibility metrics on a variety of acoustic conditions using the WHAMR corpus, a dataset of noisy reverberant speech mixtures. The use of MHA is also investigated in the decoder network, where it is demonstrated that smaller performance improvements are consistently gained within specific model configurations. The best performing MHA models yield a mean 0.6 dB scale-invariant signal-to-distortion ratio (SISDR) improvement on noisy reverberant mixtures over a baseline 1D convolution encoder. A mean 1 dB SISDR improvement is observed on clean speech mixtures.
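The SISDR metric reported above can be sketched in a few lines: project the estimate onto the reference to obtain the scaled target component, then compare its energy to that of the residual. This is a minimal pure-Python illustration of the standard formula, not the authors' implementation, and the signals are toy values (real use assumes zero-mean waveforms).

```python
import math

def si_sdr(estimate, target):
    """Scale-invariant signal-to-distortion ratio, in dB, between an
    estimated source waveform and the reference target waveform."""
    dot = sum(e * t for e, t in zip(estimate, target))
    target_energy = sum(t * t for t in target)
    scale = dot / target_energy
    s_target = [scale * t for t in target]             # projection onto target
    e_noise = [e - st for e, st in zip(estimate, s_target)]
    return 10 * math.log10(
        sum(st * st for st in s_target) / sum(en * en for en in e_noise)
    )

# Toy waveforms: a lightly distorted copy of the target.
target = [0.0, 1.0, -1.0, 0.5]
noisy = [0.1, 0.9, -1.1, 0.6]
print(f"SI-SDR = {si_sdr(noisy, target):.2f} dB")
```

Because both the projection and the residual scale linearly with the estimate, rescaling the estimate leaves the metric unchanged, which is exactly the scale invariance the name refers to.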
{"title":"Att-TasNet: Attending to Encodings in Time-Domain Audio Speech Separation of Noisy, Reverberant Speech Mixtures","authors":"W. Ravenscroft, Stefan Goetze, Thomas Hain","doi":"10.3389/frsip.2022.856968","DOIUrl":"https://doi.org/10.3389/frsip.2022.856968","url":null,"abstract":"Separation of speech mixtures in noisy and reverberant environments remains a challenging task for state-of-the-art speech separation systems. Time-domain audio speech separation networks (TasNets) are among the most commonly used network architectures for this task. TasNet models have demonstrated strong performance on typical speech separation baselines where speech is not contaminated with noise. When additive or convolutive noise is present, performance of speech separation degrades significantly. TasNets are typically constructed of an encoder network, a mask estimation network and a decoder network. The design of these networks puts the majority of the onus for enhancing the signal on the mask estimation network when used without any pre-processing of the input data or post processing of the separation network output data. Use of multihead attention (MHA) is proposed in this work as an additional layer in the encoder and decoder to help the separation network attend to encoded features that are relevant to the target speakers and conversely suppress noisy disturbances in the encoded features. As shown in this work, incorporating MHA mechanisms into the encoder network in particular leads to a consistent performance improvement across numerous quality and intelligibility metrics on a variety of acoustic conditions using the WHAMR corpus, a data-set of noisy reverberant speech mixtures. The use of MHA is also investigated in the decoder network where it is demonstrated that smaller performance improvements are consistently gained within specific model configurations. The best performing MHA models yield a mean 0.6 dB scale invariant signal-to-distortion (SISDR) improvement on noisy reverberant mixtures over a baseline 1D convolution encoder. A mean 1 dB SISDR improvement is observed on clean speech mixtures.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"60 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83744223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7