Latest Articles in PeerJ Computer Science

A big data analysis algorithm for massive sensor medical images.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2464
Sarah A Alzakari, Nuha Alruwais, Shaymaa Sorour, Shouki A Ebad, Asma Abbas Hassan Elnour, Ahmed Sayed

Big data analytics for clinical decision-making has been proposed for various clinical sectors because it makes clinical decisions more evidence-based and promising. Healthcare data is so vast and readily available that big data analytics has completely transformed the sector and opened up many new prospects. Smart sensor-based big data recommendation systems raise significant privacy and security concerns when sensor medical images are used for suggestions and monitoring. The danger of security breaches and unauthorized access, which might lead to identity theft and privacy violations, increases when sensitive medical data is sent to and stored on the cloud. Our work aims to improve patient care and well-being by creating a machine learning-based anomaly detection system specifically for medical images that provides timely treatments and notifications. Current anomaly detection methods in healthcare systems, such as artificial intelligence and big data analytics for intracerebral hemorrhage (AIBDA-ICH) and the parallel conformer neural network (PCNN), face several challenges, including high resource consumption, inefficient feature selection, and an inability to handle temporal data effectively for real-time monitoring. Techniques like support vector machines (SVM) and the hidden Markov model (HMM) struggle with computational overhead and scalability on large datasets, limiting their performance in critical healthcare applications. Additionally, existing methods often fail to provide accurate anomaly detection with low latency, making them unsuitable for time-sensitive environments. We describe the data collection, pre-processing, feature selection, extraction, and attack detection procedures used to anticipate anomalies in patient data. The data is transferred, missing values are handled, and the records are sanitized in the pre-processing step. We employed the recursive feature elimination (RFE) and dynamic principal component analysis (DPCA) algorithms for feature selection and extraction. In addition, we applied the auto-encoded genetic recurrent neural network (AGRNN) approach to identify abnormalities. Data arrival rate, resource consumption, propagation delay, transaction epoch, true positive rate, false alarm rate, and root mean square error (RMSE) are among the metrics used to evaluate the proposed approach.
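The RFE feature-selection step mentioned above can be sketched with scikit-learn's off-the-shelf implementation; the synthetic dataset, base estimator, and number of retained features below are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch of recursive feature elimination (RFE) for feature selection.
# Dataset, estimator, and n_features_to_select are illustrative choices only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for patient sensor data: 200 samples, 20 features.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Recursively drop the weakest features until only 5 remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)  # (200, 5)
```

The reduced feature matrix would then feed the downstream anomaly detector in place of the raw inputs.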

General retrieval network model for multi-class plant leaf diseases based on hashing.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2545
Zhanpeng Yang, Jun Wu, Xianju Yuan, Yaxiong Chen, Yanxin Guo

Traditional disease retrieval and localization for plant leaves typically demand substantial human resources and time. In this study, an intelligent approach utilizing deep hash convolutional neural networks (DHCNN) is presented to address these challenges and enhance retrieval performance. By integrating a collision-resistant hashing technique, this method demonstrates an improved ability to distinguish highly similar disease features, achieving over 98.4% in both precision and true positive rate (TPR) for single-plant disease retrieval on crops such as apple, corn, and tomato. For multi-plant disease retrieval, the approach further achieves an impressive precision of 99.5%, a TPR of 99.6%, and an F-score of 99.58% on the augmented PlantVillage dataset, confirming its robustness in handling diverse plant diseases. This method ensures precise disease retrieval in demanding conditions, whether for single-plant or multi-plant scenarios.
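The retrieval side of a hashing approach like this can be illustrated independently of the network: once a deep hashing model has mapped images to binary codes, lookup reduces to ranking the database by Hamming distance. In this hypothetical sketch, random codes stand in for learned ones.

```python
# Sketch of hash-based retrieval. Codes are assumed to come from a trained deep
# hashing network; random 48-bit codes are used here purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(1000, 48), dtype=np.uint8)  # database codes

query = db_codes[42].copy()
query[:3] ^= 1  # simulate a near-duplicate image: flip 3 of 48 bits

# Hamming distance from the query to every database code.
hamming = np.count_nonzero(db_codes != query, axis=1)
top5 = np.argsort(hamming, kind="stable")[:5]  # five nearest codes
print(top5[0])  # index 42: the near-duplicate ranks first
```

Because Hamming distance over short binary codes is cheap to compute, this lookup scales to large image databases far better than comparing raw feature vectors.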

Classifying software security requirements into confidentiality, integrity, and availability using machine learning approaches.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2554
Taghreed Bagies

Security requirements are considered among the most important non-functional requirements of software. The CIA (confidentiality, integrity, and availability) triad forms the basis for the development of security systems. Each dimension encompasses many security requirements that should be designed, implemented, and tested. However, requirements are written in natural language and may suffer from ambiguity and inconsistency, which makes it harder to distinguish between the security dimensions. Recognizing the security dimensions in a requirements document facilitates tracing the requirements and ensuring that each dimension has been implemented in the software system. This process should be automated to reduce the time and effort required of software engineers. In this paper, we propose classifying security requirements into the CIA triad using term frequency-inverse document frequency (TF-IDF) and sentence-transformer embeddings as two different feature-extraction techniques. For both techniques, we developed five models using five well-known machine learning algorithms: (1) support vector machine (SVM), (2) K-nearest neighbors (KNN), (3) random forest (RF), (4) gradient boosting (GB), and (5) Bernoulli naive Bayes (BNB). We also developed a web interface that facilitates real-time analysis and classifies security requirements into the CIA triad. Our results revealed that SVM with the sentence-transformer technique outperformed all other classifiers, achieving 87% accuracy in predicting the security dimension.
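One of the two pipelines described, TF-IDF features feeding an SVM, can be sketched with scikit-learn. The toy requirement sentences and labels below are invented for illustration and are not from the paper's dataset.

```python
# Hypothetical sketch: TF-IDF + linear SVM mapping a requirement sentence to a
# CIA label. The six training sentences are invented examples.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

texts = [
    "Data shall be encrypted so unauthorized users cannot read it",
    "Records must not be modified without an audit trail",
    "The service shall remain reachable during peak load",
    "Access to patient files is restricted to authorized staff",
    "Checksums shall detect tampering with stored files",
    "Failover nodes shall keep the system online during outages",
]
labels = ["confidentiality", "integrity", "availability",
          "confidentiality", "integrity", "availability"]

clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
clf.fit(texts, labels)

# Classify an unseen requirement; term overlap drives the decision.
print(clf.predict(["The system shall remain online during outages"])[0])
```

Swapping `TfidfVectorizer` for sentence-transformer embeddings, as the paper's stronger pipeline does, changes only the feature-extraction stage; the classifier interface stays the same.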

Application of collaborative filtering algorithm based on time decay function in music teaching recommendation model.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2533
Yina Zhao, Xiang Hua

To address the issues of data sparsity, scalability, and cold start in the traditional teaching-resource recommendation process, this paper presents an enhanced collaborative filtering (CF) recommendation algorithm incorporating a time decay (TD) function. Modeled on the human memory forgetting curve, the TD function is employed as a weighting factor, so that similarity and user preferences are computed under the TD constraint; this amplifies the weight of recent user interest and integrates short-term and long-term interests. The results indicate that the RMSE of the proposed combined recommendation algorithm (TD-CF) is only 8.95 when the number of recommendations reaches 100, significantly lower than that of the comparison models. TD-CF exhibits higher accuracy across different recommended items, effectively utilizing music-teaching resources and user characteristics to deliver more precise recommendations.
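A forgetting-curve-style decay weight of the kind described can be sketched as a simple exponential in the age of an interaction; the exponential form and the half-life constant are illustrative assumptions, not the paper's exact function.

```python
# Sketch of a time-decay weight for collaborative filtering: recent ratings
# count more than old ones. The 30-day half-life is an illustrative assumption.
import math

def time_decay(t_now, t_rated, half_life_days=30.0):
    """Weight a rating by its age in days; halves every half_life_days."""
    age_days = (t_now - t_rated) / 86400.0  # timestamps in seconds
    return 0.5 ** (age_days / half_life_days)

print(time_decay(0, 0))            # 1.0  - a rating made right now keeps full weight
print(time_decay(30 * 86400, 0))   # 0.5  - a 30-day-old rating counts half
```

In a TD-weighted CF scheme, each term of the user-user (or item-item) similarity sum would be multiplied by such a factor, so stale interactions contribute progressively less to the recommendation.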

A novel deep learning model for predicting marine pollution for sustainable ocean management.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-25 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2482
Michael Onyema Edeh, Surjeet Dalal, Musaed Alhussein, Khursheed Aurangzeb, Bijeta Seth, Kuldeep Kumar

Climate change has become a major source of concern to the global community. The steady pollution of the environment, including our waters, is gradually amplifying the effects of climate change. The disposal of plastics in the seas alters aquatic life. Marine plastic pollution poses a grave danger to the marine environment and the long-term health of the ocean. Though technology is also seen as one of the contributors to climate change, many aspects of it are being applied to combat climate-related disasters and to raise awareness about the need to protect the planet. This study investigated pollution in marine and undersea environments, leveraging the power of artificial intelligence to identify and categorise marine and undersea plastic waste. The classification was done using two types of machine learning algorithms: two-step clustering and a fully convolutional network (FCN). The models were trained using Kaggle's plastic location data, which was acquired in situ. An experimental test was conducted to validate the accuracy and performance of the trained models, and the results were promising compared to other conventional approaches and models. The model was used to create and test an automated floating-plastic detection system within the required timeframe. In both cases, the trained model was able to correctly identify floating plastic, achieving an accuracy of 98.38%. The technique presented in this study can be a crucial instrument for the automatic detection of plastic garbage in the ocean, thereby strengthening the fight against marine pollution.

Introducing ProsperNN-a Python package for forecasting with neural networks.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-25 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2481
Nico Beck, Julia Schemm, Claudia Ehrig, Benedikt Sonnleitner, Ursula Neumann, Hans Georg Zimmermann

We present the package prosper_nn, which provides four neural network architectures dedicated to time series forecasting, implemented in PyTorch. In addition, prosper_nn contains the first sensitivity analysis suitable for recurrent neural networks (RNNs) and a heatmap to visualize forecasting uncertainty, both previously available only in Java. These models and methods have been used successfully in industry for two decades and have been applied and referenced in several scientific publications. Only now, however, are we making them publicly available on GitHub, allowing researchers and practitioners to benchmark and further develop them. The package is designed to make the models easily accessible, thereby enabling research and application in fields such as demand and macroeconomic forecasting.

IMU-aided adaptive mesh-grid based video motion deblurring.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-25 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2540
Ahmet Arslan, Gokhan Koray Gultekin, Afsar Saranli

Motion blur degrades the visual quality of images for human perception and also hampers computer vision tasks. Existing studies mostly focus on deblurring algorithms that remove uniform blur because of their computational efficiency, but such approaches fail when faced with non-uniform blur. In this study, we propose a novel motion-deblurring algorithm that uses an adaptive mesh-grid approach to handle non-uniform motion blur while keeping the computational cost low. The proposed method divides the image into a mesh-grid and estimates the blur point spread function (PSF) using an inertial sensor. For each video frame, the size of the grid cells is determined adaptively according to the in-frame spatial variance of the blur magnitude, a metric proposed here for quantifying blur non-uniformity in the frame. The adaptive mesh size takes smaller values for higher variances, increasing the spatial accuracy of the PSF estimation. Two versions of the adaptive mesh-size algorithm are studied, optimized for either best quality or a balance of performance and computational cost. A trade-off parameter is also defined to change the mesh size according to application requirements. Experiments using real-life motion data combined with simulated motion blur demonstrate that, compared to a constant mesh size, the proposed adaptive mesh-size algorithm achieves a 5% increase in PSNR quality gain together with a 19% decrease in computation time on average.
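The adaptive rule described, higher in-frame variance of blur magnitude leading to smaller grid cells, can be sketched as follows; the linear mapping and the cell-count bounds are illustrative assumptions, not the paper's formula.

```python
# Toy sketch of variance-driven grid sizing: uniform blur gets a coarse grid,
# strongly non-uniform blur gets a fine one. Mapping and bounds are assumptions.
import numpy as np

def adaptive_cell_count(blur_mag, min_cells=4, max_cells=16, var_scale=1.0):
    """Pick the number of grid cells per side from the blur-magnitude variance."""
    var = float(np.var(blur_mag))
    frac = min(var / var_scale, 1.0)  # clamp: very large variance -> finest grid
    return int(round(min_cells + frac * (max_cells - min_cells)))

uniform_blur = np.full((64, 64), 2.0)          # same blur magnitude everywhere
varying_blur = np.tile([0.0, 4.0], (64, 32))   # strongly non-uniform blur

print(adaptive_cell_count(uniform_blur))   # 4  - coarse grid suffices
print(adaptive_cell_count(varying_blur))   # 16 - fine grid needed
```

A per-cell PSF would then be estimated from the IMU data for each of the resulting cells, so the grid resolution directly trades spatial PSF accuracy against computation.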

Enhanced related-key differential neural distinguishers for SIMON and SIMECK block ciphers.
IF 3.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-25 | eCollection Date: 2024-01-01 | DOI: 10.7717/peerj-cs.2566
Gao Wang, Gaoli Wang

At CRYPTO 2019, Gohr pioneered the application of deep learning to differential cryptanalysis and successfully attacked the 11-round NSA block cipher Speck32/64 with a 7-round and an 8-round single-key differential neural distinguisher. Subsequently, Lu et al. (DOI 10.1093/comjnl/bxac195) presented improved related-key differential neural distinguishers against SIMON and SIMECK. Following this work, we provide a framework to construct enhanced related-key differential neural distinguishers for SIMON and SIMECK. To select input differences efficiently, we introduce a method that uses weighted bias scores to approximate the suitability of candidate input differences. Building on the principles of the basic related-key differential neural distinguisher, we further propose an improved scheme that constructs the enhanced related-key differential neural distinguisher from two input differences, obtaining higher accuracy than Lu et al. for both SIMON and SIMECK. Specifically, our careful selection of input differences yields accuracy improvements of 3% and 1.9% for the 12-round and 13-round basic related-key differential neural distinguishers of SIMON32/64. Moreover, our enhanced related-key differential neural distinguishers surpass the basic ones. For 13-round SIMON32/64, 13-round SIMON48/96, and 14-round SIMON64/128, the accuracy of the related-key differential neural distinguishers increases from 0.545, 0.650, and 0.580 to 0.567, 0.696, and 0.618, respectively. For 15-round SIMECK32/64, 19-round SIMECK48/96, and 22-round SIMECK64/128, the accuracy of the neural distinguishers improves from 0.547, 0.516, and 0.519 to 0.568, 0.523, and 0.526, respectively.
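The idea of scoring candidate input differences by how biased they leave the output can be illustrated on a toy round function. The "cipher" below is a stand-in, not SIMON or SIMECK, and the score is a simplified, unweighted version of the weighted bias score described above: for each output bit, measure how far its difference distribution deviates from the uniform 0.5, and sum the deviations.

```python
# Toy bias scoring for input-difference selection. toy_round is an invented
# 16-bit ARX-like mixing step, used only to make the measurement concrete.
import numpy as np

def toy_round(x, k):
    """An illustrative 16-bit round: rotate, add key, xor with a shift."""
    x = ((x << 3) | (x >> 13)) & 0xFFFF  # rotate left by 3
    x = (x + k) & 0xFFFF                 # modular key addition
    return x ^ ((x << 1) & 0xFFFF)       # linear diffusion

def bias_score(in_diff, n=4096, seed=0):
    """Sum over output bits of |P(bit differs) - 0.5| for a given input diff."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 1 << 16, n)
    k = rng.integers(0, 1 << 16, n)
    out_diff = toy_round(x, k) ^ toy_round(x ^ in_diff, k)
    bits = (out_diff[:, None] >> np.arange(16)) & 1   # per-bit differences
    return float(np.abs(bits.mean(axis=0) - 0.5).sum())

# Higher scores flag differences whose output deviates more from random,
# suggesting better candidates for training a neural distinguisher.
print(bias_score(0x0001), bias_score(0x8000))
```

For a uniformly random function the score would hover near zero; a strongly biased difference can score up to 8.0 on 16 bits, so ranking candidates by this number approximates their exploitability.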

An enhanced integrated fuzzy logic-based deep learning techniques (EIFL-DL) for the recommendation system on industrial applications.
IF 3.5 CAS Region 4 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-22 eCollection Date: 2024-01-01 DOI: 10.7717/peerj-cs.2529
Yasir Rafique, Jue Wu, Abdul Wahab Muzaffar, Bilal Rafique

Industrial organizations are turning to recommender systems (RSs) to provide more personalized experiences to customers. This technology offers an efficient solution to the over-choice problem by quickly combing through large amounts of information and supplying recommendations that fit each user's individual preferences, and it is fast becoming an integral part of operations. This research presents an enhanced integrated fuzzy logic-based deep learning technique (EIFL-DL) for recent industrial challenges. Extracting useful insights and making appropriate suggestions in industrial settings is difficult due to the rapid growth of data, and traditional RSs often struggle to handle the complexity and uncertainty inherent in industrial data. To address these limitations, we propose an EIFL-DL framework that combines fuzzy logic and deep learning techniques to enhance recommendation accuracy and interpretability. The framework leverages fuzzy logic to handle uncertainty and vagueness in industrial data: fuzzy logic models imprecise and uncertain information, allowing the system to capture nuanced relationships and make more accurate recommendations. Deep learning techniques, in turn, excel at extracting complex patterns and features from large-scale data. By integrating the two, the EIFL-DL framework harnesses the strengths of both approaches to overcome the limitations of traditional RSs. The proposed framework consists of three main stages: data preprocessing, feature extraction, and recommendation generation. In the data preprocessing stage, industrial data is cleaned, normalized, and transformed into fuzzy sets to handle uncertainty. The feature extraction stage employs deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to extract meaningful features from the preprocessed data. Finally, the recommendation generation stage uses fuzzy logic-based rules and a hybrid recommendation algorithm to generate accurate and interpretable recommendations for industrial applications.
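The preprocessing step just described, transforming crisp numeric features into fuzzy sets, can be sketched with triangular membership functions. This is a minimal illustration under assumed conventions (features scaled to [0, 100]; three sets low/medium/high with arbitrarily chosen breakpoints), not the paper's actual fuzzification scheme.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzify(values, low=(0, 25, 50), mid=(25, 50, 75), high=(50, 75, 100)):
    """Map crisp values in [0, 100] to membership degrees in three fuzzy sets.

    Each crisp value becomes a vector of degrees in [0, 1]; downstream layers
    consume these degrees instead of the raw value, which is how uncertainty
    is represented in the preprocessing stage.
    """
    v = np.asarray(values, dtype=float)
    return np.stack([tri(v, *low), tri(v, *mid), tri(v, *high)], axis=-1)

# e.g. a sensor reading of 50 belongs fully to "medium" and not at all
# to "low" or "high" under these breakpoints
m = fuzzify([10, 50, 90])
print(m)
```

A value near a breakpoint receives partial membership in two adjacent sets, which is precisely the graded, non-crisp representation the abstract attributes to fuzzy preprocessing.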

Cloud-based configurable data stream processing architecture in rural economic development.
IF 3.5 CAS Region 4 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-11-22 eCollection Date: 2024-01-01 DOI: 10.7717/peerj-cs.2547
Haohao Chen, Fadi Al-Turjman

Purpose: This study aims to address the limitations of traditional data processing methods in predicting agricultural product prices, which is essential for advancing rural informatization to enhance agricultural efficiency and support rural economic growth.

Methodology: The RL-CNN-GRU framework combines reinforcement learning (RL), convolutional neural network (CNN), and gated recurrent unit (GRU) to improve agricultural price predictions using multidimensional time series data, including historical prices, weather, soil conditions, and other influencing factors. Initially, the model employs a 1D-CNN for feature extraction, followed by GRUs to capture temporal patterns in the data. Reinforcement learning further optimizes the model, enhancing the analysis and accuracy of multidimensional data inputs for more reliable price predictions.

Results: Testing on public and proprietary datasets shows that the RL-CNN-GRU framework significantly outperforms traditional models in predicting prices, with lower mean squared error (MSE) and mean absolute error (MAE) metrics.

Conclusion: The RL-CNN-GRU framework contributes to rural informatization by offering a more accurate prediction tool, thereby supporting improved decision-making in agricultural processes and fostering rural economic development.
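The 1D-CNN feature extraction followed by a GRU over the resulting sequence can be sketched in plain NumPy. The toy 4-feature series, the layer sizes, and the random untrained weights are all illustrative assumptions; the paper's actual architecture, its reinforcement-learning optimization, and training are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

def conv1d(seq, kernels):
    """Valid 1D convolution: (T, d_in) sequence -> (T-k+1, n_filters)."""
    k, _, _ = kernels.shape
    T = seq.shape[0]
    return np.stack([
        np.tensordot(seq[t:t + k], kernels, axes=([0, 1], [0, 1]))
        for t in range(T - k + 1)
    ])

class GRUCell:
    """Standard GRU gate equations with small random (untrained) weights."""
    def __init__(self, d_in, d_h):
        s = 0.1
        self.Wz, self.Wr, self.Wh = (rng.normal(0, s, (d_h, d_in)) for _ in range(3))
        self.Uz, self.Ur, self.Uh = (rng.normal(0, s, (d_h, d_h)) for _ in range(3))

    def step(self, x, h):
        z = sig(self.Wz @ x + self.Uz @ h)          # update gate
        r = sig(self.Wr @ x + self.Ur @ h)          # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_tilde

# toy multidimensional series: 30 days x 4 features
# (e.g. historical price, weather, soil condition, demand proxy)
series = rng.normal(size=(30, 4))
kernels = rng.normal(0, 0.1, (3, 4, 8))             # width-3 conv, 8 filters
feats = conv1d(series, kernels)                      # local patterns per window
cell, h = GRUCell(8, 16), np.zeros(16)
for x in feats:                                      # GRU summarizes the sequence
    h = cell.step(x, h)
price_head = rng.normal(0, 0.1, 16)
prediction = price_head @ h                          # scalar next-price estimate
print(feats.shape, h.shape)
```

The CNN supplies short-range pattern detection while the GRU carries information across the whole window, matching the division of labor the methodology describes; the RL component would then adjust this pipeline's parameters against the MSE/MAE objectives.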
