
Latest publications in PeerJ Computer Science

An enhanced algorithm for semantic-based feature reduction in spam filtering
IF 3.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-31 | DOI: 10.7717/peerj-cs.2206
María Novo-Lourés, Reyes Pavón, Rosalía Laza, José R. Méndez, David Ruano-Ordás
With the advent and improvement of ontological dictionaries (WordNet, BabelNet), synset-based text representations are gaining popularity in classification tasks. More recently, ontological dictionaries have been used to reduce dimensionality in this kind of representation (e.g., the Semantic Dimensionality Reduction System (SDRS) (Vélez de Mendizabal et al., 2020)). These approaches combine semantically related columns by taking advantage of semantic information extracted from ontological dictionaries. Their main advantage is that they not only eliminate features but can also combine them, minimizing (low-loss) or avoiding (lossless) the loss of information. The most recent (and most accurate) techniques in this group use evolutionary algorithms to find which features can be grouped to reduce the false positive (FP) and false negative (FN) errors obtained. The main limitation of these evolutionary schemes is the computational cost of the optimization algorithms. The contribution of this study is a new lossless feature reduction scheme exploiting information from ontological dictionaries, which achieves slightly better accuracy (especially in FP errors) than optimization-based approaches while using far fewer computational resources. Instead of running computationally expensive evolutionary algorithms, our proposal determines whether two columns (synsets) can be combined by checking whether the instances of a dataset (e.g., the training dataset) containing these synsets are mostly of the same class. The study includes experiments on three datasets and a detailed comparison with two previous optimization-based approaches.
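The merging criterion described in the abstract can be sketched as follows. This is a minimal illustration assuming a bag-of-synsets term matrix; the function names and the purity parameter are illustrative, not taken from the authors' code.

```python
from collections import Counter

def can_merge(col_a, col_b, labels, purity=1.0):
    """Decide whether two synset columns can be combined.

    Two columns are mergeable when the instances containing either
    synset are (mostly) of the same class.  purity=1.0 demands a
    lossless merge: every covering instance shares one label.
    """
    covered = [y for a, b, y in zip(col_a, col_b, labels) if a or b]
    if not covered:
        return True  # no evidence against merging
    _, top = Counter(covered).most_common(1)[0]
    return top / len(covered) >= purity

def merge(col_a, col_b):
    """Combine two columns by summing their synset frequencies."""
    return [a + b for a, b in zip(col_a, col_b)]

# Toy dataset: rows are documents, labels are spam (1) / ham (0).
col_a = [1, 0, 2, 0]
col_b = [0, 1, 0, 0]
labels = [1, 1, 1, 0]
print(can_merge(col_a, col_b, labels))  # all covering docs are spam -> True
print(merge(col_a, col_b))              # -> [1, 1, 2, 0]
```

Because the test is a single pass over the covering instances, no evolutionary search is needed, which is the source of the claimed resource savings.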
Citations: 0
A knowledge graph algorithm enabled deep recommendation system
IF 3.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-30 | DOI: 10.7717/peerj-cs.2010
Yan Wang, Xiao Feng Ma, Miao Zhu
Personalized learning resource recommendations may help resolve the difficulties of online education, which include learning mazes and information overload. However, existing personalized learning resource recommendation algorithms have shortcomings such as low accuracy and low efficiency. This study proposes a deep recommendation system algorithm based on a knowledge graph (D-KGR) that includes four data processing units: the recommendation unit (RS unit), the knowledge graph feature representation unit (KGE unit), the cross compression unit (CC unit), and the feature extraction unit (FE unit). The model integrates technologies including knowledge graphs, deep learning, neural networks, and data mining. It introduces cross compression into the feature learning process of the knowledge graph and predicts user attributes. Multimodal technology is used to optimize the processing of project attributes; text-type attributes, multivalued attributes, and other attribute types are processed separately to reconstruct the knowledge graph. A convolutional neural network algorithm is introduced in the reconstruction process to improve the quality of data features. Experimental analysis covered both algorithm efficiency and accuracy, comparing against particle swarm optimization, neural network, and knowledge graph algorithms. Several tests showed that the deep recommendation system algorithm had obvious advantages when the number of learning resources and users exceeded 1,000. It can integrate approaches such as particle swarm optimization iterative classification and neural network intelligent simulation while keeping resource consumption low. It quickly processes massive amounts of data, reduces algorithm complexity, requires less time, and incurs lower costs. Our algorithm also achieves better efficiency and accuracy.
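The cross compression unit is only described at a high level in the abstract; the following is a schematic NumPy sketch of one plausible cross-and-compress step between item features and knowledge-graph entity features, letting the two sides exchange information through their pairwise interaction matrix. All shapes and weight names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # feature dimension (illustrative)

def cross_compress(v, e, w_vv, w_ev, w_ve, w_ee, bv, be):
    """One cross compression step: build the interaction matrix
    C = v e^T, then compress it back into two d-vectors so item
    features and entity features share information while learning."""
    C = np.outer(v, e)                    # (d, d) cross-feature matrix
    v_next = C @ w_vv + C.T @ w_ev + bv   # compressed item features
    e_next = C @ w_ve + C.T @ w_ee + be   # compressed entity features
    return v_next, e_next

v = rng.normal(size=d)  # item (learning resource) features
e = rng.normal(size=d)  # linked knowledge-graph entity features
weights = [rng.normal(size=d) for _ in range(4)]
v2, e2 = cross_compress(v, e, *weights, np.zeros(d), np.zeros(d))
print(v2.shape, e2.shape)  # both stay (d,)
```

In a trained system the weight vectors would be learned jointly with the recommendation loss; here they are random placeholders to show the data flow only.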
Citations: 0
Real-time infectious disease endurance indicator system for scientific decisions using machine learning and rapid data processing
IF 3.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-30 | DOI: 10.7717/peerj-cs.2062
Shivendra Dubey, Dinesh Kumar Verma, Mahesh Kumar
The SARS-CoV-2 virus, which induces an acute respiratory illness commonly referred to as COVID-19, was designated a pandemic by the World Health Organization due to its highly infectious nature and the public health risks it poses globally. Identifying the critical factors for predicting mortality is essential for improving patient therapy. Unlike other data types, such as computed tomography scans, X-rays, and ultrasounds, basic blood test results are widely accessible and can aid in predicting mortality. The present research advocates the use of machine learning (ML) methodologies for predicting the likelihood of mortality from infectious diseases like COVID-19 by leveraging blood test data. Age, LDH (lactate dehydrogenase), lymphocytes, neutrophils, and hs-CRP (high-sensitivity C-reactive protein) are five extremely potent characteristics that, when combined, can accurately predict mortality in 96% of cases. By combining XGBoost feature importance with neural network classification, the optimal approach can predict infectious disease mortality with exceptional accuracy, achieving a precision rate of 90% up to 16 days before the event. The suggested model's excellent predictive performance and practicality were confirmed through testing with three instances that depended on the days to the outcome. By carefully analyzing and identifying patterns in these significant biomarkers, insightful information has been obtained for simple application. This study offers potential remedies that could accelerate decision-making for targeted medical treatments within healthcare systems, utilizing a timely, accurate, and reliable method.
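The feature screening half of such a pipeline can be sketched as ranking candidate biomarkers by an importance score and keeping the top five before training the downstream classifier. The scores below are made-up placeholders (the paper derives them from XGBoost feature importance), so only the selection logic is meaningful here.

```python
# Illustrative importance scores; in the paper these would come from a
# fitted XGBoost model's feature_importances_.
importances = {
    "age": 0.31, "LDH": 0.27, "lymphocytes": 0.14, "neutrophils": 0.11,
    "hs-CRP": 0.09, "platelets": 0.04, "creatinine": 0.04,
}

def top_k_features(scores, k=5):
    """Keep the k highest-scoring features for the downstream model."""
    return [name for name, _ in
            sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]]

selected = top_k_features(importances)
print(selected)  # the five predictors highlighted in the abstract
```

The reduced feature set would then be fed to a neural network classifier; restricting the input to a handful of routine blood values is what makes the approach deployable where imaging is unavailable.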
Citations: 0
Adaptive adjacent context negotiation network for object detection in remote sensing imagery
IF 3.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-29 | DOI: 10.7717/peerj-cs.2199
Yan Dong, Yundong Liu, Yuhua Cheng, Guangshuai Gao, Kai Chen, Chunlei Li
Accurate localization of objects of interest in remote sensing images (RSIs) is of great significance for object identification, resource management, decision-making, and disaster relief response. However, many difficulties, such as complex backgrounds, dense targets, large scale variations, and small objects, make detection accuracy unsatisfactory. To improve detection accuracy, we propose an Adaptive Adjacent Context Negotiation Network (A2CN-Net). First, a composite fast Fourier convolution (CFFC) module is introduced to reduce the information loss of small objects; it is inserted into the backbone network to obtain spectral global context information. Then, a Global Context Information Enhancement (GCIE) module is introduced to capture and aggregate global spatial features, which helps locate objects of different scales. Furthermore, to alleviate the aliasing effect caused by fusing adjacent feature layers, a novel Adaptive Adjacent Context Negotiation network (A2CN) is proposed for the adaptive integration of multi-level features. It consists of local and adjacent branches: the local branch adaptively highlights feature information, while the adjacent branch introduces global information at the adjacent level to enhance feature representation. Meanwhile, considering that feature layers in different dimensions focus on different content, learnable weights are applied to the local and adjacent branches for adaptive feature fusion. Finally, extensive experiments were performed on several publicly available datasets, including DIOR and DOTA-v1.0. Experimental studies show that A2CN-Net can significantly boost detection performance, with mAP increasing to 74.2% and 79.2%, respectively.
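The adaptively weighted fusion of the two branches can be illustrated in miniature: normalize a pair of learnable scalars with a softmax and blend the two feature maps. The real A2CN operates on CNN feature tensors with weights learned by backpropagation; the fixed scalars and small NumPy arrays below are stand-ins for illustration only.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax over a weight vector."""
    e = np.exp(w - w.max())
    return e / e.sum()

def adaptive_fusion(local_feat, adjacent_feat, weights):
    """Blend a local-branch feature map with the adjacent-level branch.
    The softmax keeps the fusion coefficients positive and normalized,
    which mitigates the aliasing a naive addition would introduce."""
    a = softmax(np.asarray(weights, dtype=float))
    return a[0] * local_feat + a[1] * adjacent_feat

local = np.ones((2, 2))        # toy local-branch feature map
adjacent = np.zeros((2, 2))    # toy adjacent-level feature map
fused = adaptive_fusion(local, adjacent, [0.0, 0.0])  # equal weights -> mean
print(fused)
```

During training the weight vector would shift toward whichever branch is more informative for the current feature level, which is the "negotiation" the network name refers to.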
Citations: 0
An efficient hybrid differential evolution-golden jackal optimization algorithm for multilevel thresholding image segmentation
IF 3.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-29 | DOI: 10.7717/peerj-cs.2121
Xianmeng Meng, Linglong Tan, Yueqin Wang
Image segmentation is a crucial process in the field of image processing. Multilevel threshold segmentation is an effective image segmentation method, where an image is segmented into different regions based on multilevel thresholds for information analysis. However, the complexity of multilevel thresholding increases dramatically as the number of thresholds increases. To address this challenge, this article proposes a novel hybrid algorithm, termed differential evolution-golden jackal optimizer (DEGJO), for multilevel thresholding image segmentation using the minimum cross-entropy (MCE) as a fitness function. The DE algorithm is combined with the GJO algorithm for iterative updating of position, which enhances the search capacity of the GJO algorithm. The performance of the DEGJO algorithm is assessed on the CEC2021 benchmark function and compared with state-of-the-art optimization algorithms. Additionally, the efficacy of the proposed algorithm is evaluated by performing multilevel segmentation experiments on benchmark images. The experimental results demonstrate that the DEGJO algorithm achieves superior performance in terms of fitness values compared to other metaheuristic algorithms. Moreover, it also yields good results in quantitative performance metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and feature similarity index (FSIM) measurements.
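The minimum cross-entropy fitness the optimizer minimizes can be written down directly from a gray-level histogram. The sketch below follows the standard MCE criterion (dropping the threshold-independent constant term); the toy histogram and the brute-force search over single thresholds are illustrative, since the paper's DEGJO would search the threshold vector instead.

```python
import math

def mce_objective(hist, thresholds):
    """Minimum cross-entropy (MCE) criterion for a gray-level histogram.

    Lower is better.  The constant term sum(i * h[i] * log(i)) does not
    depend on the thresholds and is dropped, so the objective reduces to
    -sum over regions of (region mass) * log(region mean).
    """
    bounds = [1] + sorted(thresholds) + [len(hist)]  # skip level 0 (log(0))
    total = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        mass = sum(hist[i] * i for i in range(lo, hi))
        count = sum(hist[i] for i in range(lo, hi))
        if count:
            mu = mass / count  # mean gray level of the region
            total -= mass * math.log(mu)
    return total

# Bimodal toy histogram over 8 gray levels; the valley sits at level 4.
hist = [0, 10, 20, 10, 0, 10, 20, 10]
best = min(range(1, 8), key=lambda t: mce_objective(hist, [t]))
print(best)  # -> 4, the threshold separating the two modes
```

With k thresholds the search space grows combinatorially, which is why a metaheuristic such as the proposed DE-GJO hybrid is used instead of exhaustive enumeration.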
Citations: 0
Enhancing infectious disease prediction model selection with multi-objective optimization: an empirical study
IF 3.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-29 | DOI: 10.7717/peerj-cs.2217
Deren Xu, Weng Howe Chan, Habibollah Haron
As the pandemic continues to pose challenges to global public health, developing effective predictive models has become an urgent research topic. This study aims to explore the application of multi-objective optimization methods in selecting infectious disease prediction models and evaluate their impact on improving prediction accuracy, generalizability, and computational efficiency. In this study, the NSGA-II algorithm was used to compare models selected by multi-objective optimization with those selected by traditional single-objective optimization. The results indicate that decision tree (DT) and extreme gradient boosting regressor (XGBoost) models selected through multi-objective optimization methods outperform those selected by other methods in terms of accuracy, generalizability, and computational efficiency. Compared to the ridge regression model selected through single-objective optimization methods, the decision tree (DT) and XGBoost models demonstrate significantly lower root mean square error (RMSE) on real datasets. This finding highlights the potential advantages of multi-objective optimization in balancing multiple evaluation metrics. However, this study’s limitations suggest future research directions, including algorithm improvements, expanded evaluation metrics, and the use of more diverse datasets. The conclusions of this study emphasize the theoretical and practical significance of multi-objective optimization methods in public health decision support systems, indicating their wide-ranging potential applications in selecting predictive models.
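NSGA-II's selection rests on Pareto dominance: a model configuration survives if no other configuration is at least as good on every objective and strictly better on one. A minimal sketch of that filter follows; the candidate models and their (error, generalization gap, training cost) scores are invented for illustration.

```python
def dominates(a, b):
    """a dominates b when a is no worse on every objective and strictly
    better on at least one (all objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Return the non-dominated configurations: the first front NSGA-II
    would keep when trading off accuracy, generalizability, and cost."""
    return [c for c in candidates
            if not any(dominates(o["scores"], c["scores"])
                       for o in candidates if o is not c)]

# Illustrative candidates: scores are (RMSE, generalization gap, seconds).
models = [
    {"name": "ridge",   "scores": (4.2, 0.9, 1.0)},
    {"name": "dt",      "scores": (3.1, 1.1, 0.5)},
    {"name": "xgboost", "scores": (2.8, 0.8, 6.0)},
    {"name": "knn",     "scores": (4.5, 1.2, 1.5)},
]
print([m["name"] for m in pareto_front(models)])  # knn is dominated by ridge
```

A single-objective selector would collapse the three scores into one number and could discard a model that is best on an objective the weighting undervalues; keeping the whole front is the advantage the study measures.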
Citations: 0
CropGCNN: color space-based crop disease classification using group convolutional neural network
IF 3.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-29 | DOI: 10.7717/peerj-cs.2136
Naeem Ahmad, Shubham Singh, Mohamed Fahad AlAjmi, Afzal Hussain, Khalid Raza
Classifying images is one of the most important tasks in computer vision. Recently, the best performance on image classification tasks has been achieved by networks that are both deep and well-connected. Most datasets today consist of a fixed number of color images, taken in red-green-blue (RGB) format and classified without any changes to the original. It is observed that color spaces (essentially, transformations of the original RGB images) have a major impact on classification accuracy, and we delve into their significance. Moreover, on datasets with a highly variable number of classes, such as the PlantVillage dataset, a model that incorporates numerous color spaces within the same architecture achieves high accuracy, and different classes of images are better represented in different color spaces. Furthermore, we demonstrate that this type of model, in which the input is preprocessed into many color spaces simultaneously, requires significantly fewer parameters to achieve high classification accuracy. The proposed model takes an RGB image as input, converts it into seven separate color spaces at once, and then feeds each color space into its own convolutional neural network (CNN) model. To reduce the computational load and the number of hyperparameters needed, we employ group convolutional layers in the proposed CNN model. We achieve substantial gains over the present state-of-the-art methods for the classification of crop disease.
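The per-pixel expansion into parallel color-space branches can be sketched with the standard library's `colorsys` module. Note this covers only a few of the seven color spaces the model reportedly uses, and the function name is an illustrative assumption.

```python
import colorsys

def expand_color_spaces(pixel_rgb):
    """Turn one RGB pixel into several parallel representations; the
    described model builds seven such branches, each feeding its own
    CNN.  Only stdlib-supported conversions are shown here."""
    r, g, b = (c / 255.0 for c in pixel_rgb)
    return {
        "rgb": (r, g, b),
        "hsv": colorsys.rgb_to_hsv(r, g, b),
        "hls": colorsys.rgb_to_hls(r, g, b),
        "yiq": colorsys.rgb_to_yiq(r, g, b),
    }

branches = expand_color_spaces((128, 200, 64))
for name, values in branches.items():
    print(name, tuple(round(v, 3) for v in values))
```

Each branch then becomes one group in the group convolutional layers, so the color spaces are processed in parallel without the parameter cost of a full-width convolution over the concatenated input.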
Citations: 0
Pashto script and graphics detection in camera captured Pashto document images using deep learning model
IF 3.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-26 DOI: 10.7717/peerj-cs.2089
Khan Bahadar, Riaz Ahmad, Khursheed Aurangzeb, Siraj Muhammad, Khalil Ullah, Ibrar Hussain, Ikram Syed, Muhammad Shahid Anwar
Layout analysis is the main component of a typical Document Image Analysis (DIA) system and plays an important role in pre-processing. However, for the Pashto language, document images have not been explored so far. This research, for the first time, examines Pashto text along with graphics and proposes a deep learning-based classifier that can detect Pashto text and graphics in each document. Another notable contribution of this research is the creation of a real dataset containing more than 1,000 images of Pashto documents captured by a camera. On this dataset, we applied a convolutional neural network (CNN) using a deep learning technique. Our method builds on the Single-Shot Detector (SSD), a one-stage counterpart to the classical Faster R-CNN. The evaluation was performed on the 300 images of the test set, and in this way we achieved a mean average precision (mAP) of 84.90%.
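At the heart of an SSD-style detector (and of the mAP evaluation mentioned above) is intersection-over-union (IoU) between predicted or default boxes and ground-truth boxes. The sketch below, in NumPy, shows pairwise IoU and the standard SSD matching step in which each default box is assigned the ground truth with the highest overlap or background otherwise; it is a generic illustration, not the paper's implementation.

```python
import numpy as np

def iou(boxes_a, boxes_b):
    """Pairwise IoU between two sets of [x1, y1, x2, y2] boxes."""
    a = np.asarray(boxes_a, dtype=float)[:, None, :]   # (A, 1, 4)
    b = np.asarray(boxes_b, dtype=float)[None, :, :]   # (1, B, 4)
    lt = np.maximum(a[..., :2], b[..., :2])            # intersection top-left
    rb = np.minimum(a[..., 2:], b[..., 2:])            # intersection bottom-right
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area_a + area_b - inter)

def match_defaults(defaults, gt_boxes, threshold=0.5):
    """SSD-style matching: each default box gets the index of the
    ground-truth box with the highest IoU, or -1 (background)."""
    overlaps = iou(defaults, gt_boxes)                 # (D, G)
    best_gt = overlaps.argmax(axis=1)
    best_iou = overlaps.max(axis=1)
    return np.where(best_iou >= threshold, best_gt, -1)
```

During training, default boxes matched to a ground truth contribute localisation and classification losses; the `-1` (background) boxes contribute only to the negative class.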
Citations: 0
DeepCorr: a novel error correction method for 3GS long reads based on deep learning
IF 3.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-26 DOI: 10.7717/peerj-cs.2160
Rongshu Wang, Jianhua Chen
Long reads generated by third-generation sequencing (3GS) technologies are involved in many biological analyses and play a vital role due to their ultra-long read length. However, their high error rate affects downstream processing. We propose DeepCorr, a novel deep learning-based error correction algorithm for data from both the PacBio and ONT platforms. The core algorithm adopts a recurrent neural network to capture the long-term dependencies in long reads, converting long-read error correction into a multi-classification task. It first aligns high-precision short reads to the long reads to generate the corresponding feature vectors and labels, then feeds these vectors to the neural network, and finally trains the model for prediction and error correction. DeepCorr produces untrimmed corrected long reads and improves alignment identity while maintaining the length advantage. It can capture and fully exploit the learned dependencies to polish even those bases that are not covered by any aligned short read. DeepCorr achieves better performance than state-of-the-art error correction methods on real-world PacBio and ONT benchmark data sets while consuming fewer computing resources. It is a comprehensive deep learning-based tool that enables one to correct long reads accurately.
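The first step the abstract describes — aligning short reads onto a long read and deriving a per-position label — can be illustrated with a much simpler stand-in: a pileup majority vote. The sketch below assumes pre-computed `(offset, sequence)` alignments and only overwrites a base when the short-read evidence is a strict majority; DeepCorr replaces this vote with an RNN classifier over the same kind of per-position features.

```python
from collections import Counter

def pileup_consensus(long_read, aligned_short_reads):
    """Correct a long read by per-position majority vote over short reads.

    aligned_short_reads: list of (offset, sequence) pairs giving where each
    short read starts on the long read (a simplified, hypothetical stand-in
    for the feature vectors DeepCorr feeds to its recurrent network).
    """
    corrected = list(long_read)
    for pos in range(len(long_read)):
        votes = Counter()
        for offset, seq in aligned_short_reads:
            if offset <= pos < offset + len(seq):
                votes[seq[pos - offset]] += 1
        if votes:
            base, count = votes.most_common(1)[0]
            # only overwrite when the evidence is an unambiguous majority;
            # uncovered positions keep the original base
            if count > sum(votes.values()) / 2:
                corrected[pos] = base
    return "".join(corrected)
```

In the multi-classification framing, each position becomes one training example whose label is the true base, so the corrected read is simply the argmax sequence of the classifier.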
Citations: 0
A hybrid approach based on k-means and SVM algorithms in selection of appropriate risk assessment methods for sectors
IF 3.8 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-26 DOI: 10.7717/peerj-cs.2198
Fatih Topaloglu
Every work environment contains different types of risks and interactions between risks, so the method used when making a risk assessment is very important. When determining which risk assessment method (RAM) to use, many factors matter, such as the types of risks in the work environment, their interactions with each other, and their distance from employees. Although many RAMs are available, no single RAM suits all workplaces, and which method to choose is the biggest question; there is no internationally accepted scale or trend on this subject. In this study, 26 sectors, 10 different RAMs and 10 criteria were determined. A hybrid approach was designed to determine the most suitable RAMs for sectors by combining two machine learning (ML) algorithms: k-means clustering and support vector machine (SVM) classification. First, the data set was divided into subsets with the k-means algorithm. Then, the SVM algorithm was run on each subset with its own characteristics. Finally, the results of all subsets were combined to obtain the result for the entire dataset. Thus, instead of a single threshold value determined for one large cluster and made mandatory for all of it, a flexible structure was created by determining a separate threshold value for each sub-cluster according to its characteristics. In this way, machine support was provided for selecting the most suitable RAMs for the sectors, removing the administrative and software burdens of the selection phase from human decision-makers. In the first comparison, the proposed hybrid method achieved 96.63%, versus 90.63% for k-means and 94.68% for SVM alone. In a second comparison with five different ML algorithms, the results were: artificial neural networks (ANN) 87.44%, naive Bayes (NB) 91.29%, decision trees (DT) 89.25%, random forest (RF) 81.23% and k-nearest neighbours (KNN) 85.43%.
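The cluster-then-classify pipeline described above — partition with k-means, then train one SVM per sub-cluster and route test points through the same partition — can be sketched with scikit-learn. This is a minimal generic version on synthetic data, not the paper's model or its sector dataset; cluster counts and SVM settings are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class KMeansSVM:
    """Split the data into k-means clusters, then fit one SVM per cluster."""

    def __init__(self, n_clusters=3, **svm_kwargs):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.svm_kwargs = svm_kwargs
        self.models = {}

    def fit(self, X, y):
        clusters = self.km.fit_predict(X)
        for c in np.unique(clusters):
            idx = clusters == c
            if len(np.unique(y[idx])) == 1:
                # single-class sub-cluster: no SVM needed, store the label
                self.models[c] = int(y[idx][0])
            else:
                self.models[c] = SVC(**self.svm_kwargs).fit(X[idx], y[idx])
        return self

    def predict(self, X):
        clusters = self.km.predict(X)
        pred = np.empty(len(X), dtype=int)
        for c in np.unique(clusters):
            idx = clusters == c
            m = self.models[c]
            pred[idx] = m.predict(X[idx]) if isinstance(m, SVC) else m
        return pred
```

Each sub-cluster gets its own decision boundary (the "separate threshold values" of the abstract), which is what distinguishes the hybrid from a single SVM trained on the whole dataset.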
Citations: 0