
Array: Latest Publications

Efficient perturbation techniques for preserving privacy of multivariate sensitive data
Q1 Computer Science Pub Date: 2023-10-06 DOI: 10.1016/j.array.2023.100324
Mahbubur Rahman, Mahit Kumar Paul, A.H.M. Sarowar Sattar

Cloud data has been growing rapidly with the advancement of technology, and such data can contain individuals’ sensitive information, such as medical diagnostic reports. While knowledge is derived from such sensitive data, third parties can get hold of this sensitive information. Therefore, privacy preservation of such sensitive data has become a vital issue. Data perturbation is one of the most frequently used data mining approaches for safeguarding privacy. A significant challenge in data perturbation is balancing the privacy and utility of data. Securing an individual’s privacy often entails sacrificing data utility, and vice versa. Although several approaches exist to deal with the trade-off between privacy and utility, researchers are always looking for new ones. To address this critical issue, this paper proposes two data perturbation approaches, namely NOS2R and NOS2R2. The proposed perturbation techniques are evaluated over ten benchmark UCI data sets in terms of privacy protection, information entropy, attack resistance, data utility, and classification error, and are compared with two existing approaches, 3DRT and NRoReM. The thorough experimental analysis shows that the best-performing approach, NOS2R2, offers 15.48% higher entropy and 15.53% more resistance against ICA attacks than the best existing approach, NRoReM. Furthermore, in terms of utility, the accuracy, F1-score, precision, and recall of NOS2R2-perturbed data are 42.32%, 31.22%, 30.77%, and 16.15% closer to the original data, respectively, than those of NRoReM-perturbed data.
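
As a rough illustration of the privacy-utility evaluation loop described above (the abstract does not specify the NOS2R/NOS2R2 algorithms themselves), the sketch below perturbs a numeric UCI-style data set with plain additive noise, measures the entropy of the perturbation residuals, and compares classification accuracy before and after perturbation. The wine data set, noise scale, and decision-tree classifier are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def perturb(X, scale=0.5, seed=0):
    """Generic additive-noise perturbation (a stand-in, not the paper's NOS2R/NOS2R2)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, scale * X.std(axis=0), size=X.shape)
    return X + noise

def residual_entropy(X, Xp, bins=20):
    """Mean Shannon entropy (bits) of the normalised perturbation residuals per attribute."""
    d = (Xp - X) / (X.std(axis=0) + 1e-12)
    ents = []
    for j in range(X.shape[1]):
        counts, _ = np.histogram(d[:, j], bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        ents.append(-(p * np.log2(p)).sum())
    return float(np.mean(ents))

X, y = load_wine(return_X_y=True)
Xp = perturb(X, scale=0.5)

clf = DecisionTreeClassifier(random_state=0)
acc_orig = cross_val_score(clf, X, y, cv=5).mean()
acc_pert = cross_val_score(clf, Xp, y, cv=5).mean()
print(f"residual entropy     : {residual_entropy(X, Xp):.3f} bits")
print(f"accuracy (original)  : {acc_orig:.3f}")
print(f"accuracy (perturbed) : {acc_pert:.3f}  (utility loss = {acc_orig - acc_pert:.3f})")
```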

Citations: 0
Advancements in spiking neural network communication and synchronization techniques for event-driven neuromorphic systems
Q1 Computer Science Pub Date: 2023-10-05 DOI: 10.1016/j.array.2023.100323
Mahyar Shahsavari, David Thomas, Marcel van Gerven, Andrew Brown, Wayne Luk

Neuromorphic event-driven systems emulate the computational mechanisms of the brain through the utilization of spiking neural networks (SNNs). Neuromorphic systems serve two primary application domains: simulating neural information processing in neuroscience and acting as accelerators for cognitive computing in engineering applications. A distinguishing characteristic of neuromorphic systems is their asynchronous or event-driven nature, but even event-driven systems require some synchronous time management of the neuron populations to guarantee sufficient time for the proper delivery of spiking messages. In this study, we assess three distinct algorithms proposed for adding a synchronization capability to asynchronous event-driven compute systems. We run these algorithms on POETS (Partially Ordered Event-Triggered Systems), a custom-built FPGA-based hardware platform, as a neuromorphic architecture. This study presents the simulation speed of SNNs of various sizes. We explore essential aspects of event-driven neuromorphic system design that contribute to efficient computation and communication. These aspects include varying degrees of connectivity, routing methods, mapping techniques onto hardware components, and firing rates. The hardware mapping and simulation of up to eight million neurons, where each neuron is connected to up to one thousand other neurons, are presented in this work using 3072 reconfigurable processing cores, each of which has 16 hardware threads. Using the best synchronization and communication methods, our architecture design demonstrates 20-fold and 16-fold speedups over the Brian simulator and one 48-chip SpiNNaker node, respectively. We conclude with a brief comparison between our platform and existing large-scale neuromorphic systems in terms of synchronization, routing, and communication methods, to guide the development of future event-driven neuromorphic systems.
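
To make the synchronization issue concrete, here is a minimal, purely illustrative sketch (not the POETS implementation): two leaky integrate-and-fire populations exchange spike counts, and the buffer swap at the end of every step plays the role of the barrier that guarantees spikes emitted at step t are delivered before step t+1. Population size, time constant, weights, and noise levels are arbitrary assumptions.

```python
import numpy as np

class LIFPopulation:
    """Tiny leaky integrate-and-fire population (illustrative only)."""
    def __init__(self, n, tau=20.0, v_th=1.0, seed=0):
        self.v = np.zeros(n)
        self.tau, self.v_th = tau, v_th
        self.rng = np.random.default_rng(seed)

    def step(self, incoming_spikes, dt=1.0, w=0.02, bias=0.04, noise=0.1):
        drive = bias + w * incoming_spikes + self.rng.normal(0.0, noise, self.v.shape)
        self.v += dt * (-self.v / self.tau) + drive
        spikes = self.v >= self.v_th
        self.v[spikes] = 0.0          # reset neurons that fired
        return spikes

a, b = LIFPopulation(100, seed=1), LIFPopulation(100, seed=2)
spikes_a = np.zeros(100, dtype=bool)
spikes_b = np.zeros(100, dtype=bool)
rates = []
for t in range(200):
    new_a = a.step(spikes_b.sum())     # consumes messages delivered at the last barrier
    new_b = b.step(spikes_a.sum())
    spikes_a, spikes_b = new_a, new_b  # "barrier": buffers are swapped once per step
    rates.append(spikes_a.mean())
print("mean firing probability of population A per step:", round(float(np.mean(rates)), 3))
```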

Citations: 0
Extractive social media text summarization based on MFMMR-BertSum
Q1 Computer Science Pub Date: 2023-10-04 DOI: 10.1016/j.array.2023.100322
Junqing Fan, Xiaorong Tian, Chengyao Lv, Simin Zhang, Yuewei Wang, Junfeng Zhang

The advancement of computer technology has led to an overwhelming amount of textual information, hindering the efficiency of knowledge intake. To address this issue, various text summarization techniques have been developed, based on statistics, graph ranking, machine learning, and deep learning. However, the rich semantic features of text often interfere with summary quality, and existing methods lack effective processing of redundant information. In this paper, we propose the Multi-Features Maximal Marginal Relevance BERT (MFMMR-BertSum) model for extractive summarization, which utilizes the pre-trained model BERT to tackle the text summarization task. The model incorporates a classification layer for extractive summarization. Additionally, the Maximal Marginal Relevance (MMR) component is utilized to remove information redundancy and optimize the summary results. The proposed method outperforms other sentence-level extractive summarization baselines on the CNN/DailyMail dataset, thus verifying its effectiveness.
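
The MMR component mentioned above follows the standard maximal marginal relevance rule, score(s) = lambda * rel(s) - (1 - lambda) * max over already-selected t of sim(s, t). A minimal sketch is shown below; it scores sentences against the document centroid and uses TF-IDF vectors in place of BERT sentence representations so that it stays self-contained, and the lambda value is an illustrative assumption rather than the paper's configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_select(sentences, k=3, lam=0.7):
    """Pick k sentences by Maximal Marginal Relevance over TF-IDF vectors."""
    vec = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(vec.mean(axis=0))          # document-level representation
    rel = cosine_similarity(vec, centroid).ravel()   # relevance of each sentence to the document
    sim = cosine_similarity(vec)                     # sentence-to-sentence similarity
    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < k:
        if not selected:
            best = max(candidates, key=lambda i: rel[i])
        else:
            best = max(candidates,
                       key=lambda i: lam * rel[i]
                       - (1 - lam) * max(sim[i][j] for j in selected))
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in sorted(selected)]

sents = ["The cat sat on the mat.",
         "A cat was sitting on a mat.",
         "Stock markets fell sharply on Monday.",
         "Investors reacted to the interest rate decision.",
         "The weather was sunny and warm."]
print(mmr_select(sents, k=3))   # redundant near-duplicate sentences are penalised
```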

Citations: 1
Learning remaining useful life with incomplete health information: A case study on battery deterioration assessment
Q1 Computer Science Pub Date: 2023-09-29 DOI: 10.1016/j.array.2023.100321
Luciano Sánchez, Nahuel Costa, José Otero, David Anseán, Inés Couso

This study proposes a method for developing equipment lifespan estimators that combine physical information and numerical data, both of which may be incomplete. Physical information may not have a uniform fit to all experimental data, and health information may only be available at the initial and final periods. To address these issues, a procedure is defined to adjust the model to different subsets of available data, constrained by feasible trajectories in the health status space. Additionally, a new health model for rechargeable lithium batteries is proposed, and a use case is presented to demonstrate its efficacy. The optimistic (max–max) strategy is found to be the most suitable for diagnosing battery lifetime, based on the results.

Citations: 0
Multiple robust approaches for EEG-based driving fatigue detection and classification
Q1 Computer Science Pub Date: 2023-09-01 DOI: 10.1016/j.array.2023.100320
Sunil Kumar Prabhakar, Dong-Ok Won

Electroencephalography (EEG) signals are used to evaluate the activities of the brain. Driver fatigue is one of the primary causes of road accidents, and it can be readily identified from the EEG. In this work, five efficient and robust approaches for EEG-based driving fatigue detection and classification are proposed. In the first proposed strategy, the concepts of Multi-Dimensional Scaling (MDS) and Singular Value Decomposition (SVD) are merged, and the Fuzzy C Means based Support Vector Regression (FCM-SVR) classification module is utilized to obtain the output. In the second proposed strategy, Marginal Fisher Analysis (MFA) is implemented, the concepts of conditional feature mapping and cross-domain transfer learning are applied, and classification is performed with machine learning classifiers. In the third proposed strategy, the Flexible Analytic Wavelet Transform (FAWT) and the Tunable Q Wavelet Transform (TQWT) are implemented and merged, and classification is performed with Extreme Learning Machine (ELM), Kernel ELM, and Adaptive Neuro Fuzzy Inference System (ANFIS) classifiers. In the fourth proposed strategy, correntropy spectral density and the Lyapunov exponent with the Rosenstein algorithm are implemented; the multi-distance signal level difference is then computed, followed by the calculation of the geodesic minimum distance to the Riemannian mean, and finally tangent space mapping is applied before feeding the features to classification. In the fifth and final proposed strategy, the Hilbert Huang Transform (HHT) is implemented and the Hilbert marginal spectrum is computed; features are then selected using the Blackhole optimization algorithm and classified with a Cascade Adaboost classifier. The proposed techniques are applied to publicly available EEG datasets, and the best result of 99.13% is obtained when the fourth strategy, which combines correntropy spectral density, the Lyapunov exponent with the Rosenstein algorithm, the multi-distance signal level difference, the geodesic minimum distance to the Riemannian mean, and tangent space mapping, is paired with a Support Vector Machine (SVM) classifier.
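
As one small piece of the fourth strategy, the sketch below shows how covariance matrices of multichannel epochs can be averaged with a Riemannian (geometric) mean and mapped to the tangent space before a classifier such as an SVM. It deliberately omits the correntropy, Lyapunov-exponent, and multi-distance steps, and the random stand-in "EEG" epochs, channel count, and iteration count are purely illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, inv, logm, sqrtm

def riemannian_mean(covs, iters=10):
    """Fixed-point iteration for the geometric (Karcher) mean of SPD matrices."""
    C = np.mean(covs, axis=0)
    for _ in range(iters):
        C_half = np.real(sqrtm(C))
        C_ihalf = inv(C_half)
        T = np.mean([np.real(logm(C_ihalf @ S @ C_ihalf)) for S in covs], axis=0)
        C = C_half @ np.real(expm(T)) @ C_half
    return C

def tangent_space(covs, C_ref):
    """Map SPD covariances to the tangent space at C_ref; keep the upper triangle."""
    C_ihalf = inv(np.real(sqrtm(C_ref)))
    iu = np.triu_indices(C_ref.shape[0])
    return np.array([np.real(logm(C_ihalf @ S @ C_ihalf))[iu] for S in covs])

rng = np.random.default_rng(0)
epochs = rng.normal(size=(30, 8, 256))                  # 30 epochs, 8 channels, 256 samples
covs = np.array([e @ e.T / e.shape[1] for e in epochs])
X = tangent_space(covs, riemannian_mean(covs))
print(X.shape)                                          # (30, 36): one feature vector per epoch
```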

Citations: 0
A real-time application-based convolutional neural network approach for tomato leaf disease classification
Q1 Computer Science Pub Date: 2023-09-01 DOI: 10.1016/j.array.2023.100313
Showmick Guha Paul, Al Amin Biswas, Arpa Saha, Md. Sabab Zulfiker, Nadia Afrin Ritu, Ifrat Zahan, Mushfiqur Rahman, Mohammad Ashraful Islam

Early diagnosis and treatment of tomato leaf diseases increase a plant's production volume, efficiency, and quality. Misdiagnosis of disease by farmers can lead to an inadequate treatment strategy that harms the tomato plants and the agroecosystem. Therefore, it is crucial to detect the disease precisely. A rapid, accurate approach that addresses misdiagnosis and enables early disease identification will be advantageous to farmers. This study proposed a lightweight custom convolutional neural network (CNN) model and utilized the transfer learning (TL)-based models VGG-16 and VGG-19 to classify tomato leaf diseases. In this study, eleven classes, one of which is healthy, are used to represent the various tomato leaf conditions. In addition, an ablation study has been performed in order to find the optimal parameters for the proposed model. Furthermore, evaluation metrics have been used to analyze and compare the performance of the proposed model with the TL-based models. The proposed model, by applying data augmentation techniques, has achieved the highest accuracy and recall of 95.00% among all the models. Finally, the best-performing model has been utilized to construct a Web-based and Android-based end-to-end (E2E) system for tomato cultivators to classify tomato leaf disease.
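
A hedged sketch of the transfer-learning setup described above, written with tf.keras: a frozen ImageNet-pretrained VGG-16 backbone topped by a small classification head for eleven classes. The head layers, augmentation choices, and training hyper-parameters are assumptions for illustration, not the configuration reported in the paper.

```python
import tensorflow as tf

NUM_CLASSES = 11            # ten disease classes plus "healthy", as in the study
IMG_SIZE = (224, 224)

# ImageNet-pretrained VGG-16 backbone with its classifier head removed.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False      # freeze convolutional features (transfer learning)

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))            # raw RGB pixels in [0, 255]
x = tf.keras.layers.RandomFlip("horizontal")(inputs)      # light data augmentation
x = tf.keras.layers.RandomRotation(0.1)(x)
x = tf.keras.applications.vgg16.preprocess_input(x)       # VGG-16's own preprocessing
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would read a labelled leaf-image folder (hypothetical path), e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "tomato_leaves/train", image_size=IMG_SIZE)
# model.fit(train_ds, epochs=10)
```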

Citations: 1
The study of the hyper-parameter modelling the decision rule of the cautious classifiers based on the Fβ measure
Q1 Computer Science Pub Date: 2023-09-01 DOI: 10.1016/j.array.2023.100310
Abdelhak Imoussaten

In some sensitive domains where data imperfections are present, standard classification techniques reach their limits. To avoid misclassifications that have serious consequences, recent works propose cautious classification algorithms to handle this problem. Despite the presence of uncertainty and/or imprecision, a point-prediction classifier is forced to bet on a single class, whereas a cautious classifier proposes the appropriate subset of candidate classes that can be assigned to the sample in the presence of imperfect information. On the other hand, cautiousness should not come at the expense of precision, and a trade-off has to be made between these two criteria. Among the existing cautious classifiers, two propose to manage this trade-off in the decision step by means of a parametrized objective function. The first is the non-deterministic classifier (ndc), proposed within the framework of probability theory, and the second is the “evidential classifier based on imprecise relabelling” (eclair), proposed within the framework of belief functions. The theoretical aim of the mentioned hyper-parameters is to control the size of the predictions of both classifiers. This paper studies this hyper-parameter in order to select the “best” value for a classification task. First, the utility of each candidate subset is studied in relation to the values of the hyper-parameter, and thresholds are proposed to control the size of the predictions. Then two illustrations are presented in which a method for choosing this hyper-parameter based on calibration data is proposed. The first illustration concerns randomly generated data and the second concerns the Fashion-MNIST image data. These illustrations show how to control the size of the predictions and compare the performances of the two classifiers when tuned with our proposition versus a grid-search method.
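
For a concrete picture of how such a hyper-parameter controls prediction size, the sketch below uses an F-beta-style set utility, u(A, y) = (1 + beta^2) * 1[y in A] / (beta^2 + |A|), and returns the candidate subset with maximal expected utility under a class-probability vector. Because the expected utility for a fixed set size is maximised by the most probable classes, only the sizes need to be scanned. The probabilities are made up, and this is a generic decision rule rather than the exact ndc or eclair procedure.

```python
import numpy as np

def cautious_predict(probs, beta=1.0, labels=None):
    """Set-valued prediction maximising the expected F-beta utility
    u(A, y) = (1 + beta**2) * 1[y in A] / (beta**2 + |A|)."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]            # classes sorted by decreasing probability
    best_k, best_eu = 1, -np.inf
    for k in range(1, len(probs) + 1):
        eu = (1 + beta ** 2) * probs[order[:k]].sum() / (beta ** 2 + k)
        if eu > best_eu:
            best_k, best_eu = k, eu
    chosen = order[:best_k].tolist()
    if labels is not None:
        chosen = [labels[i] for i in chosen]
    return chosen, float(best_eu)

p = [0.60, 0.25, 0.10, 0.05]
print(cautious_predict(p, beta=0.5))   # small beta favours precise, single-class predictions
print(cautious_predict(p, beta=2.0))   # larger beta tolerates bigger, more cautious sets
```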

Citations: 0
Differential privacy in edge computing-based smart city Applications: Security issues, solutions and future directions
Q1 Computer Science Pub Date: 2023-09-01 DOI: 10.1016/j.array.2023.100293
Aiting Yao, Gang Li, Xuejun Li, Frank Jiang, Jia Xu, Xiao Liu

Fast-growing smart city applications, such as smart delivery, smart community, and smart health, are generating big data that are widely distributed on the internet. IoT (Internet of Things) systems are at the centre of smart city applications, as traditional cloud computing is insufficient for satisfying the critical requirements of smart IoT systems. Due to the nature of smart city applications, massive IoT data may contain sensitive information; hence, various privacy-preserving methods, such as anonymity, federated learning, and homomorphic encryption, have been utilised over the years. Furthermore, limited concern has been given to the resource consumption for data privacy-preserving in edge computing environments, which are resource-constrained when compared with cloud data centres. In particular, differential privacy (DP) has been an effective privacy-preserving method in the edge computing environment. However, there is no dedicated study on DP technology with a focus on smart city applications in the edge computing environment.

To fill this gap, this paper provides a comprehensive study on DP in edge computing-based smart city applications, covering various aspects, such as privacy models, research methods, mechanisms, and applications. Our study focuses on five areas of data privacy, including data transmitting privacy, data processing privacy, data model training privacy, data publishing privacy, and location privacy. In addition, we investigate many potential applications of DP in smart city application scenarios. Finally, future directions of DP in edge computing are envisaged. We hope this study can be a useful roadmap for researchers and practitioners in edge computing enable smart city applications.
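
As a reminder of the basic building block behind many of the mechanisms the survey covers, the sketch below applies the epsilon-differentially-private Laplace mechanism to a count query of the kind an edge node might release. The readings, the sensitivity of 1, and the epsilon values are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """epsilon-DP Laplace mechanism: add Laplace(0, sensitivity / epsilon) noise."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(42)
readings = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # event flags collected at an edge node
true_count = int(readings.sum())                      # a counting query has sensitivity 1
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps, rng=rng)
    print(f"epsilon={eps:<4}  true={true_count}  released={noisy:.2f}")
```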

Citations: 2
A hybrid recommendation scheme for delay-tolerant networks: The case of digital marketplaces
Q1 Computer Science Pub Date: 2023-09-01 DOI: 10.1016/j.array.2023.100299
Victor M. Romero II, Bea D. Santiago, Jay Martin Z. Nuevo

Recommender systems are widely adopted by numerous popular e-commerce sites, such as Amazon and eBay, to help users find products that they might like. Although much has been achieved in the area, most recommender systems are designed to work on top of centralized platforms that are traditionally supported by fixed infrastructure like the Internet. Hence, additional work is warranted to examine the applicability and performance of recommender systems in challenging environments that are characterized by dynamic network topology and variable transmission delays. This study deals with the design of a recommender system that is compatible with a delay-tolerant network, where communication is supported by opportunistic encounters between participating nodes. The proposed approach combines collaborative filtering and content-based filtering techniques to generate rating predictions for users. To make the system more tolerant against interruptions, each node maintains a local recommender that generates predictions using user profiles that are obtained through opportunistic exchanges over a clustered topology. Simulation results indicate that the proposed approach is able to improve coverage while alleviating the cold-start problem.
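
A minimal sketch of the collaborative-plus-content-based combination described above (without any of the delay-tolerant networking logic, and not the paper's exact scheme): a user-based collaborative score and a content-profile score are blended with a weight alpha. The toy ratings matrix, item tags, and alpha value are assumptions for illustration.

```python
import numpy as np

# Toy ratings matrix (users x items); 0 = unrated.
R = np.array([[5, 4, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 5, 4],
              [0, 1, 4, 5]], dtype=float)
# Toy item content features (items x tags).
F = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 1],
              [0, 1, 0]], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def hybrid_score(u, i, alpha=0.5):
    """alpha weights the collaborative part; (1 - alpha) weights the content part."""
    # Collaborative: ratings of item i by other users, weighted by user similarity.
    sims = np.array([cosine(R[u], R[v]) if v != u else 0.0 for v in range(R.shape[0])])
    rated = R[:, i] > 0
    collab = (sims[rated] @ R[rated, i]) / (sims[rated].sum() + 1e-12) if rated.any() else 0.0
    # Content-based: similarity of item i to the profile of items the user liked.
    liked = R[u] > 3
    profile = F[liked].mean(axis=0) if liked.any() else np.zeros(F.shape[1])
    content = cosine(profile, F[i]) * R.max()        # rescale to the rating range
    return alpha * collab + (1 - alpha) * content

print(round(hybrid_score(0, 2), 2))   # predicted rating of user 0 for the unrated item 2
```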

Citations: 0
Hybrid weakly supervised learning with deep learning technique for detection of fake news from cyber propaganda
Q1 Computer Science Pub Date: 2023-09-01 DOI: 10.1016/j.array.2023.100309
Liyakathunisa Syed, Abdullah Alsaeedi, Lina A. Alhuri, Hutaf R. Aljohani

Due to the emergence of social networking sites and social media platforms, information is disseminated to the public faster than ever. Unverified information is widely disseminated across social media platforms without any apprehension about its accuracy. The propagation of false news has imposed significant challenges on governments and society and has several adverse effects on many aspects of human life. Fake news is inaccurate information deliberately created and spread to the public. Accurate detection of fake news from cyber propaganda is thus a significant and challenging issue that can be addressed through deep learning techniques. It is impossible to manually annotate large volumes of social media-generated data. In this research, a hybrid approach is proposed to detect fake news: novel weakly supervised learning is applied to provide labels to the unlabeled data, and detection of fake news is performed using Bi-GRU and Bi-LSTM deep learning techniques. Feature extraction was performed by utilizing TF-IDF and Count Vectorizer techniques. Bi-LSTM and Bi-GRU deep learning techniques combined with weakly supervised SVM techniques provided an accuracy of 90% in detecting fake news. This approach of labeling large amounts of unlabeled data with weakly supervised learning and then applying deep learning techniques for the detection of fake and real news is highly effective and efficient when no labels exist for the data.
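
The labelling-then-training loop can be sketched as follows: TF-IDF features, an SVM trained on a small labelled seed set that pseudo-labels the unlabeled pool (the weak supervision step), and a simple logistic-regression detector standing in for the paper's Bi-LSTM/Bi-GRU networks. The toy texts and seed labels are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Tiny illustrative corpus: 1 = fake, 0 = real (labels exist only for the seed set).
seed_texts  = ["shocking cure doctors hide", "official report confirms results",
               "celebrity secret exposed click", "government publishes annual data"]
seed_labels = [1, 0, 1, 0]
unlabeled   = ["miracle pill melts fat overnight", "ministry releases budget figures",
               "you won a free prize claim now", "court ruling published today"]

vec = TfidfVectorizer()
X_seed = vec.fit_transform(seed_texts)
X_unlab = vec.transform(unlabeled)

# Weak supervision: an SVM fitted to the small seed set assigns pseudo-labels
# to the large unlabeled pool.
weak_labeler = LinearSVC().fit(X_seed, seed_labels)
pseudo = weak_labeler.predict(X_unlab)

# Final detector trained on seed + pseudo-labelled data (a logistic regression
# stands in here for the Bi-LSTM / Bi-GRU networks used in the paper).
X_all = vec.transform(seed_texts + unlabeled)
y_all = np.concatenate([seed_labels, pseudo])
detector = LogisticRegression().fit(X_all, y_all)
print(detector.predict(vec.transform(["secret trick banks hate"])))
```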

Citations: 2