
Latest publications in Computing

Deep learning-based classification and application test of multiple crop leaf diseases using transfer learning and the attention mechanism
IF 3.7 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-08 · DOI: 10.1007/s00607-024-01308-8
Yifu Zhang, Qian Sun, Ji Chen, Huini Zhou

Crop diseases are among the major natural disasters in agricultural production; they seriously restrict the growth and development of crops and threaten food security. Timely classification, accurate identification, and the application of methods suited to the situation can effectively prevent and control crop diseases, improving the quality of agricultural products. Given the huge variety of crops and diseases and the differences in disease characteristics at each stage, current deep learning-based convolutional neural network models struggle to meet the higher requirement of classifying crop diseases accurately, and a new architecture is needed to improve recognition. Therefore, in this study, we optimized a deep learning-based classification model for multiple crop leaf diseases by combining transfer learning and the attention mechanism, and the modified model was deployed on a smartphone for testing. A dataset containing 10 crop types and 61 disease types at different severity levels was established, and an algorithm structure based on ResNet50 was designed using transfer learning and the SE attention mechanism. The classification performance of different improvement methods was compared through model training. Results indicate that the average accuracy of the proposed TL-SE-ResNet50 model is increased by 7.7%, reaching 96.32%. The model was also integrated into a smartphone application; the application's test accuracy reaches 94.8%, with an average response time of 882 ms. The proposed improved model is effective at identifying the diseases and disease conditions of multiple crops, and the application meets farmers' needs for portable use. This study can serve as a reference for further crop disease management research in agricultural production.
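The SE attention mechanism grafted onto ResNet50 here is the standard Squeeze-and-Excitation block; the abstract gives no implementation details, so the following NumPy sketch shows only the generic SE recalibration step (the reduction ratio r=16 and the random weights are illustrative assumptions, not the paper's settings).

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation recalibration for a (C, H, W) feature map."""
    z = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)          # excitation: FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # FC + sigmoid -> (C,) channel gates
    return x * s[:, None, None]          # rescale each channel by its gate

rng = np.random.default_rng(0)
c, r = 64, 16                            # channels and reduction ratio (assumed)
x = rng.standard_normal((c, 8, 8))
w1 = rng.standard_normal((c // r, c)) * 0.1
w2 = rng.standard_normal((c, c // r)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (64, 8, 8)
```

In TL-SE-ResNet50-style models such a block is typically inserted after each residual stage, so the channel gates are learned end to end together with the transferred backbone weights.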

Citations: 0
A clarity and fairness aware framework for selecting workers in competitive crowdsourcing tasks
IF 3.7 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-06 · DOI: 10.1007/s00607-024-01316-8
Seyyed Javad Bozorg Zadeh Razavi, Haleh Amintoosi, Mohammad Allahbakhsh

Crowdsourcing is a powerful technique for accomplishing tasks that are difficult for machines but easy for humans. However, ensuring the quality of the workers who participate in a task is a major challenge. Most existing studies have focused on selecting suitable workers based on their attributes and the task requirements, while neglecting the requesters' characteristics as a key factor in the crowdsourcing process. In this paper, we address this gap by considering requesters' preferences and behavior in competitive crowdsourcing systems, where the requester chooses only one worker's contribution as the final answer. We propose a model in which the requesters' characteristics are taken into consideration when finding suitable workers. We also propose new definitions of requester clarity and fairness, together with models and formulations that employ them, alongside task and worker attributes, to find more suitable workers. We evaluated the efficacy of the proposed model on a real-world dataset and compared it with two current state-of-the-art approaches. Our results demonstrate the superiority of our method in assigning the most suitable workers.
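The abstract does not give the paper's clarity and fairness formulations, so the following is only a hypothetical sketch of how requester-side signals could be folded into a worker-selection score alongside worker attributes; every name and weight here is invented for illustration.

```python
def worker_score(skill_match, reputation, clarity, fairness,
                 weights=(0.4, 0.3, 0.15, 0.15)):
    """Hypothetical linear combination of worker-side signals (skill match,
    reputation) and requester-side signals (clarity, fairness).
    All inputs are assumed normalized to [0, 1]; weights are illustrative."""
    w1, w2, w3, w4 = weights
    return w1 * skill_match + w2 * reputation + w3 * clarity + w4 * fairness

# Two candidate workers evaluated for the same requester (clarity/fairness fixed)
candidates = {
    "alice": worker_score(0.9, 0.8, 0.7, 0.6),
    "bob":   worker_score(0.6, 0.9, 0.7, 0.6),
}
best = max(candidates, key=candidates.get)
print(best)  # alice
```

The point of the requester-side terms is that the same worker pool can rank differently for a requester with a history of unclear task descriptions or unfair winner selection.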

Citations: 0
A new approach for service activation management in fog computing using Cat Swarm Optimization algorithm
IF 3.7 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-04 · DOI: 10.1007/s00607-024-01302-0
Sayed Mohsen Hashemi, Amir Sahafi, Amir Masoud Rahmani, Mahdi Bohlouli

Today, with the increasing expansion of IoT devices and the growing number of user requests, processing their demands in computational environments has become increasingly challenging. The large volume of user requests and the distribution of tasks among computational resources often result in disordered energy consumption and increased latency. Correctly allocating resources and reducing energy consumption in fog computing are still significant challenges in this field, and improving resource management methods can provide better services for users. In this article, the metaheuristic Cat Swarm Optimization (CSO) algorithm is used for more efficient resource allocation and service activation management. User requests are received by a request evaluator, prioritized, and efficiently executed on fog resources using the container live migration technique. Container live migration moves services for better placement on fog resources, avoiding unnecessary activation of physical resources. The proposed method uses a resource manager to identify and classify available resources, aiming to determine the initial capacity of physical fog resources. Its performance has been tested and evaluated in iFogSim against six metaheuristic algorithms: Particle Swarm Optimization (PSO), Ant Colony Optimization, the Grasshopper Optimization algorithm, the Genetic algorithm, the Cuckoo Optimization algorithm, and Gray Wolf Optimization. The proposed method shows superior efficiency in energy consumption, execution time, latency, and network lifetime compared to the other algorithms.
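As a rough illustration of the CSO family (not the paper's actual placement algorithm), the sketch below implements one "seeking mode" step of Cat Swarm Optimization on a toy load-balancing fitness function; SMP and SRD are the standard seeking-memory-pool and seeking-range-of-dimension parameters, with illustrative values.

```python
import random

def seeking_step(cat, fitness, smp=5, srd=0.2, rng=random.Random(1)):
    """One seeking-mode step of Cat Swarm Optimization (toy sketch):
    copy the cat SMP times, perturb one dimension of each copy by up to
    +/- SRD, and keep the best candidate (the current position also competes)."""
    candidates = [cat[:] for _ in range(smp)]
    for c in candidates:
        i = rng.randrange(len(c))
        c[i] += rng.uniform(-srd, srd) * c[i]
    candidates.append(cat[:])
    return min(candidates, key=fitness)

# Toy fitness: squared distance of per-node load from an "ideal" balanced vector
ideal = [0.5, 0.5, 0.5]
fitness = lambda v: sum((a - b) ** 2 for a, b in zip(v, ideal))
cat = [0.9, 0.2, 0.7]
better = seeking_step(cat, fitness)
assert fitness(better) <= fitness(cat)  # seeking never makes the cat worse
```

In a full CSO, seeking-mode cats like this alternate with tracing-mode cats that move with a velocity term, and the fitness would encode energy, latency, and activation cost rather than this toy load vector.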

Citations: 0
Enhancing computation reuse efficiency in ICN-based edge computing by modifying content store table structure
IF 3.7 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-03 · DOI: 10.1007/s00607-024-01312-y
Atiyeh Javaheri, Ali Bohlooli, Kamal Jamshidi

In edge computing, repetitive computations are a common occurrence. However, the traditional TCP/IP architecture used in edge computing fails to identify these repetitions, so redundant computations are recomputed by edge resources. To address this issue and enhance the efficiency of edge computing, Information-Centric Networking (ICN)-based edge computing is employed. The ICN architecture leverages its forwarding and naming-convention features to recognize repetitive computations and direct them to the appropriate edge resources, thereby promoting "computation reuse" and significantly improving the overall effectiveness of edge computing. Dynamically generated computations often experience prolonged response times, and naming conventions become crucial for establishing and tracking connections between input requests and the edge. Because unique IDs are embedded in these naming conventions, each computing request with identical input data is treated as distinct, rendering ICN's aggregation feature unusable. In this study, we propose a novel approach that modifies the Content Store (CS) table so that computing requests with the same input data but different unique IDs, which produce identical outcomes, are treated as equivalent. This reduces distance and completion time and increases the hit ratio, as duplicate computations are served from the cache rather than routed to edge resources. Through simulations, we demonstrate that our method significantly enhances cache reuse compared to the default method with no reuse, achieving an average improvement of over 57%; the speed-up ratio reaches 15%. Notably, our method surpasses previous approaches with the lowest average completion time, particularly at lower request frequencies. These findings highlight the efficacy and potential of the proposed method in optimizing edge computing performance.
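The key idea, treating requests that differ only in their unique ID as the same computation, can be sketched as a Content Store keyed by service name plus a hash of the input data. This is a hypothetical minimal model of the modified CS table, not the paper's implementation.

```python
import hashlib

def reuse_key(service, input_data):
    """Cache key built from the service name and a digest of the input payload
    only; the per-request unique ID is deliberately excluded, so identical
    computations map to the same Content Store entry."""
    digest = hashlib.sha256(input_data).hexdigest()
    return f"/{service}/{digest}"

content_store = {}

def handle(service, request_id, input_data, compute):
    key = reuse_key(service, input_data)   # request_id plays no role in lookup
    if key not in content_store:
        content_store[key] = compute(input_data)
    return content_store[key]

calls = []
def square_sum(data):
    calls.append(1)                        # count actual executions
    return sum(b * b for b in data)

r1 = handle("sq", "id-001", b"\x01\x02", square_sum)
r2 = handle("sq", "id-002", b"\x01\x02", square_sum)  # new ID, same input
assert r1 == r2 == 5 and len(calls) == 1  # second request hits the store
```

Under the default naming scheme the key would include `request_id`, so the second request would miss the cache and trigger a second execution; excluding the ID is exactly what restores reuse.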

Citations: 0
Smart contracts auditing and multi-classification using machine learning algorithms: an efficient vulnerability detection in ethereum blockchain
IF 3.7 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-03 · DOI: 10.1007/s00607-024-01314-w
Samia El Haddouti, Mohammed Khaldoune, Meryeme Ayache, Mohamed Dafir Ech-Cherif El Kettani

The adoption of Smart Contracts has revolutionized industries like DeFi and supply chain management, streamlining processes and enhancing transparency. However, ensuring their security is crucial: their immutable nature makes errors permanent and exploitable, and neglecting security can lead to severe consequences such as financial losses and reputation damage. Addressing this requires rigorous analytical processes for evaluating Smart Contract security, despite the cost and complexity of current tools. Following an empirical examination of current tools for identifying vulnerabilities in Smart Contracts, this paper presents a robust and promising solution based on Machine Learning algorithms. The objective is to improve the auditing and classification of Smart Contracts, building trust and confidence in Blockchain-based applications. By automating the security auditing process, the model not only reduces manual effort and execution time but also ensures a comprehensive analysis, uncovering even the complex security vulnerabilities that traditional tools may miss. Overall, the evaluation demonstrates that our proposed model surpasses conventional counterparts in vulnerability detection performance, achieving an accuracy exceeding 98% with optimized execution times.
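The abstract does not specify the paper's features or classifiers, so the toy sketch below only illustrates the general shape of bytecode-level multi-classification: opcode-frequency features plus a nearest-centroid decision. The class labels, opcode vocabulary, and training traces are all invented for illustration.

```python
from collections import Counter

def opcode_features(opcodes, vocab):
    """Normalized opcode-frequency vector, a common featurization for
    bytecode-level contract classification (illustrative, not the paper's)."""
    counts = Counter(opcodes)
    total = max(len(opcodes), 1)
    return [counts[op] / total for op in vocab]

vocab = ["CALL", "SSTORE", "ADD", "JUMPI"]
# Toy labeled traces: reentrancy-ish contracts are CALL-heavy,
# overflow-ish contracts are arithmetic-heavy.
train = [
    (["CALL", "CALL", "SSTORE", "JUMPI"], "reentrancy"),
    (["ADD", "ADD", "ADD", "JUMPI"], "integer_overflow"),
]
centroids = {label: opcode_features(ops, vocab) for ops, label in train}

def classify(opcodes):
    """Assign the class whose centroid is nearest in feature space."""
    x = opcode_features(opcodes, vocab)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lbl])))

label = classify(["CALL", "CALL", "CALL", "SSTORE"])
print(label)  # reentrancy
```

A real pipeline would train a proper multi-class model on thousands of labeled contracts and a far richer feature set, but the reuse of one feature extractor across all vulnerability classes is the structural point.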

Citations: 0
Modeling end-to-end delays in TSCH wireless sensor networks using queuing theory and combinatorics
IF 3.7 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-02 · DOI: 10.1007/s00607-024-01313-x
Yevhenii Shudrenko, Andreas Timm-Giel

Wireless communication offers significant advantages over wired solutions in flexibility, coverage, and maintenance, and is being actively deployed in industry. IEEE 802.15.4 standardizes the Physical and Medium Access Control (MAC) layers for Low-Power and Lossy Networks (LLNs) and features Timeslotted Channel Hopping (TSCH) for reliable, low-latency communication with scheduling capabilities. Multiple scheduling schemes have been proposed to address Quality of Service (QoS) in challenging scenarios. However, most of them are evaluated through simulations and experiments, which are often time-consuming and difficult to reproduce. Analytical modeling of TSCH performance is lacking: the state of the art considers only one-hop communication with simplified traffic patterns. This work proposes a new framework based on queuing theory and combinatorics to evaluate end-to-end delays in multihop TSCH networks with arbitrary topology, traffic, and link conditions. The framework is validated in OMNeT++ simulations and shows below 6% root-mean-square error (RMSE), providing a quick and reliable latency estimation tool to support decision-making and enable formalized comparison of existing scheduling solutions.
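The paper combines queuing theory with combinatorics over TSCH slotframes; as a much simpler illustration of the queuing-theoretic ingredient only, the sketch below sums M/M/1 mean sojourn times over a multihop path (a Jackson-network-style approximation, not the paper's model, and the rates are made-up example values).

```python
def mm1_sojourn(lam, mu):
    """Mean sojourn time (waiting + service) of an M/M/1 queue: 1 / (mu - lam)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

def chain_delay(lam, mus):
    """End-to-end delay across independent hops, each modeled as M/M/1."""
    return sum(mm1_sojourn(lam, mu) for mu in mus)

# 3-hop path, 10 packets/s offered load, per-hop service rates in packets/s
d = chain_delay(10.0, [50.0, 40.0, 25.0])
print(d)  # 0.125 seconds
```

In an actual TSCH network the per-hop service process is not exponential; it is governed by the slotframe schedule and retransmissions, which is precisely where the combinatorial part of the paper's framework replaces the `mm1_sojourn` term above.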

Citations: 0
Enhancing virtual machine placement efficiency in cloud data centers: a hybrid approach using multi-objective reinforcement learning and clustering strategies
IF 3.7 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-02 · DOI: 10.1007/s00607-024-01311-z
Arezoo Ghasemi, Abolfazl Toroghi Haghighat, Amin Keshavarzi

Deploying virtual machines poses a significant challenge for cloud data centers, requiring careful consideration of objectives such as minimizing energy consumption and resource wastage, ensuring load balancing, and meeting service level agreements. While researchers have explored multi-objective methods for virtual machine placement, evaluating candidate solutions remains complex in such scenarios. In this paper, we introduce two novel multi-objective algorithms tailored to this challenge. The VMPMFuzzyORL method employs reinforcement learning for virtual machine placement, with candidate solutions assessed by a fuzzy system. While practical, the fuzzy system introduces notable runtime overhead. To mitigate this, we propose MRRL, an alternative approach that first clusters virtual machines using the k-means algorithm and then optimizes placement with a customized reinforcement learning strategy using multiple reward signals. Extensive simulations highlight the significant advantages of these approaches over existing techniques, particularly in energy efficiency, resource utilization, load balancing, and overall execution time.
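The k-means pre-clustering stage of MRRL can be illustrated with plain k-means over VM resource-demand vectors. The sketch below is generic k-means on hypothetical (cpu, mem) demand pairs; the paper's actual feature set and distance metric are not given in the abstract.

```python
import math

def kmeans(points, k, iters=10):
    """Plain k-means; the first k points seed the centroids (illustrative)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # update step: centroid = mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return assign, centroids

# VM demands as (cpu, mem) fractions: two heavy VMs and two light ones
vms = [(0.9, 0.8), (0.85, 0.9), (0.1, 0.2), (0.15, 0.1)]
assign, _ = kmeans(vms, k=2)
assert assign[0] == assign[1] and assign[2] == assign[3] and assign[0] != assign[2]
```

Clustering similar VMs first shrinks the state space the reinforcement learner must explore: the placement policy can be learned per cluster instead of per individual VM.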

Citations: 0
Packet header-based reweight-long short term memory (Rew-LSTM) method for encrypted network traffic classification
IF 3.7 CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-07-02 DOI: 10.1007/s00607-024-01306-w
Jiangang Hou, Xin Li, Hongji Xu, Chun Wang, Lizhen Cui, Zhi Liu, Changzhen Hu

With the development of Internet technology, cyberspace security has become a research hotspot, and network traffic classification is closely related to it. This paper investigates classification based on raw traffic data. This involves granularity analysis of packets: separating packet headers from payloads, complementing (padding) and aligning the headers, and converting them into structured data in three representation types: bit, byte, and segmented protocol fields. On this basis, we propose the Rew-LSTM classification model and run experiments on publicly available encrypted-traffic datasets. The results show that excellent multi-class classification can be achieved using packet-header data alone, especially with the bit representation, which outperforms state-of-the-art methods. In addition, we propose a global normalization method, and experimental results show that it outperforms feature-specific normalization methods for both Tor traffic and regular encrypted traffic.
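The bit representation and the global-versus-feature-specific normalization contrast can be sketched in a few lines. The padding width and the min-max formulas are assumptions for illustration; the abstract does not specify the exact scheme used in the paper.

```python
def header_to_bits(header_bytes, width=40):
    """Pad or truncate a packet header to `width` bytes, then expand it into a
    flat 0/1 bit vector (a plausible reading of the paper's bit representation)."""
    padded = (list(header_bytes) + [0] * width)[:width]
    return [(b >> i) & 1 for b in padded for i in range(7, -1, -1)]

def global_normalize(rows):
    """Scale every value with ONE shared min/max across all features."""
    flat = [v for row in rows for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1            # avoid division by zero on constant data
    return [[(v - lo) / span for v in row] for row in rows]

def per_feature_normalize(rows):
    """Scale each column with its own min/max (the feature-specific baseline)."""
    cols = list(zip(*rows))
    return [[(v - min(c)) / ((max(c) - min(c)) or 1) for v, c in zip(row, cols)]
            for row in rows]
```

Global normalization preserves the relative magnitude of different header fields against one shared scale, whereas per-feature scaling erases those cross-field relationships, which is one way to read the reported advantage.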

{"title":"Packet header-based reweight-long short term memory (Rew-LSTM) method for encrypted network traffic classification","authors":"Jiangang Hou, Xin Li, Hongji Xu, Chun Wang, Lizhen Cui, Zhi Liu, Changzhen Hu","doi":"10.1007/s00607-024-01306-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01306-w","url":null,"abstract":"<p>With the development of Internet technology, cyberspace security has become a research hotspot. Network traffic classification is closely related to cyberspace security. In this paper, the problem of classification based on raw traffic data is investigated. This involves the granularity analysis of packets, separating packet headers from payloads, complementing and aligning packet headers, and converting them into structured data, including three representation types: bit, byte, and segmented protocol fields. Based on this, we propose the Rew-LSTM classification model for experiments on publicly available datasets of encrypted traffic, and the results show that excellent results can be obtained when using only the data in packet headers for multiple classification, especially when the data is represented using bit, which outperforms state-of-the-art methods. In addition, we propose a global normalization method, and experimental results show that it outperforms feature-specific normalization methods for both Tor traffic and regular encrypted traffic.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"15 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141516457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Back-and-Forth (BaF): a new greedy algorithm for geometric path planning of unmanned aerial vehicles
IF 3.7 CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-07-01 DOI: 10.1007/s00607-024-01309-7
Selcuk Aslan

The autonomous task success of an unmanned aerial vehicle (UAV), or of its military specialization, the unmanned combat aerial vehicle (UCAV), has a direct relationship with the planned path. However, planning a path for a UAV or UCAV system requires optimally solving a challenging problem that considers different objectives: the enemy threats protecting the battlefield, fuel consumption or battery usage, and kinematic constraints on turning maneuvers. Because of the increasing demands on UAV systems and the game-changing roles they play, developing new and versatile path planning algorithms becomes more critical and urgent. In this study, a greedy algorithm named Back-and-Forth (BaF) was designed and introduced for solving the path planning problem. The BaF algorithm gets its name from its main strategy: a heuristic approach generates two predecessor paths, one calculated from the start point to the target point and the other calculated in the reverse direction, and combines them, utilizing their advantageous line segments to obtain safer, shorter, and more maneuverable path candidates. The performance of the BaF was investigated over three battlefield scenarios and the twelve test cases belonging to them. Moreover, the BaF was integrated into the workflow of a well-known meta-heuristic, the artificial bee colony (ABC) algorithm, and detailed experiments were carried out to evaluate the possible contribution of the BaF to the path planning capabilities of another technique. The results showed that the BaF algorithm plans paths that are at least as promising as, and generally better than, those of the other tested meta-heuristic techniques, does so with exact consistency, and, as validated through the comparison between the BaF and ABC algorithms, runs nine or more times faster. The results further proved that integrating the BaF boosts the performance of the ABC and helps it outperform all fifteen competitors on nine of the twelve test cases.
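The two-pass idea described in the abstract, planning one path from start to target and one in the reverse direction and then combining them, can be sketched on a toy grid. Everything here is an illustrative assumption: the real BaF combines advantageous line segments of the two predecessor paths in continuous space with threat and maneuver constraints, while this sketch models threats as blocked cells and simply keeps the shorter of the two greedy paths.

```python
def greedy_path(start, goal, blocked, max_steps=200):
    """Step greedily toward `goal` on a 4-connected grid, avoiding `blocked`
    cells and previously visited cells; returns None on a greedy dead end."""
    path, cur = [start], start
    for _ in range(max_steps):
        if cur == goal:
            return path
        x, y = cur
        moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        moves = [m for m in moves if m not in blocked and m not in path]
        if not moves:
            return None
        # pick the neighbor with the smallest Manhattan distance to the goal
        cur = min(moves, key=lambda m: abs(m[0] - goal[0]) + abs(m[1] - goal[1]))
        path.append(cur)
    return None

def back_and_forth(start, goal, blocked):
    """Plan forward and backward, then keep the shorter result (a toy stand-in
    for BaF's segment-level combination of the two predecessor paths)."""
    fwd = greedy_path(start, goal, blocked)
    bwd = greedy_path(goal, start, blocked)
    cands = [p for p in (fwd, bwd[::-1] if bwd else None) if p]
    return min(cands, key=len) if cands else None
```

Running the two greedy passes from opposite ends is what lets the combiner exploit whichever direction happened to skirt an obstacle more cleanly.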

{"title":"Back-and-Forth (BaF): a new greedy algorithm for geometric path planning of unmanned aerial vehicles","authors":"Selcuk Aslan","doi":"10.1007/s00607-024-01309-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01309-7","url":null,"abstract":"<p>The autonomous task success of an unmanned aerial vehiclel (UAV) or its military specialization called the unmanned combat aerial vehicle (UCAV) has a direct relationship with the planned path. However, planning a path for a UAV or UCAV system requires solving a challenging problem optimally by considering the different objectives about the enemy threats protecting the battlefield, fuel consumption or battery usage and kinematic constraints on the turning maneuvers. Because of the increasing demands to the UAV systems and game-changing roles played by them, developing new and versatile path planning algorithms become more critical and urgent. In this study, a greedy algorithm named as the Back-and-Forth (BaF) was designed and introduced for solving the path planning problem. The BaF algorithm gets its name from the main strategy where a heuristic approach is responsible to generate two predecessor paths, one of which is calculated from the start point to the target point, while the other is calculated in the reverse direction, and combines the generated paths for utilizing their advantageous line segments when obtaining more safe, short and maneuverable path candidates. The performance of the BaF was investigated over three battlefield scenarios and twelve test cases belonging to them. Moreover, the BaF was integrated into the workflow of a well-known meta-heuristic, artificial bee colony (ABC) algorithm, and detailed experiments were also carried out for evaluating the possible contribution of the BaF on the path planning capabilities of another technique. 
The results of the experiments showed that the BaF algorithm is able to plan at least promising or generally better paths with the exact consistency than other tested meta-heuristic techniques and runs nine or more times faster as validated through the comparison between the BaF and ABC algorithms. The results of the experiments further proved that the integration of the BaF boosts the performance of the ABC and helps it to outperform all of fifteen competitors for nine of twelve test cases.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"80 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI enabled: a novel IoT-based fake currency detection using millimeter wave (mmWave) sensor
IF 3.7 CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-06-27 DOI: 10.1007/s00607-024-01300-2
Fahim Niaz, Jian Zhang, Muhammad Khalid, Kashif Naseer Qureshi, Yang Zheng, Muhammad Younas, Naveed Imran

In recent years, millimeter wave sensors have taken on a paramount role, especially in the non-invasive and ubiquitous analysis of various materials and objects. This paper introduces a novel IoT-based fake currency detection system using a millimeter wave (mmWave) sensor that leverages machine learning and deep learning algorithms to distinguish fake from genuine currency based on their distinct sensor reflections. To gather these reflections, or signatures, from different currency notes, we utilize multiple receiving (RX) antennae of the radar sensor module. Our proposed framework encompasses three approaches for genuine and fake currency detection: a Convolutional Neural Network (CNN), k-Nearest Neighbors (k-NN), and a Transfer Learning Technique (TLT). After extensive experiments, the proposed framework exhibits impressive accuracy, obtaining classification accuracies of 96%, 94%, and 98% for CNN, k-NN, and TLT, respectively, in distinguishing 10 different currency notes using radar signals.
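The k-NN branch of the framework can be sketched with synthetic reflection vectors. The two-dimensional "signatures" and the plain Euclidean majority vote are illustrative assumptions; the paper's actual feature extraction from the multiple RX antennae is not described in the abstract.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training
    signatures, using squared Euclidean distance on the raw vector."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

With per-note radar signatures in place of these toy vectors, the same voting rule extends directly from fake/genuine to the 10-way note classification the paper reports.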

{"title":"AI enabled: a novel IoT-based fake currency detection using millimeter wave (mmWave) sensor","authors":"Fahim Niaz, Jian Zhang, Muhammad Khalid, Kashif Naseer Qureshi, Yang Zheng, Muhammad Younas, Naveed Imran","doi":"10.1007/s00607-024-01300-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01300-2","url":null,"abstract":"<p>In recent years, the significance of millimeter wave sensors has achieved a paramount role, especially in the non-invasive and ubiquitous analysis of various materials and objects. This paper introduces a novel IoT-based fake currency detection using millimeter wave (mmWave) that leverages machine and deep learning algorithms for the detection of fake and genuine currency based on their distinct sensor reflections. To gather these reflections or signatures from different currency notes, we utilize multiple receiving (<i>RX</i>) antennae of the radar sensor module. Our proposed framework encompasses three different approaches for genuine and fake currency detection, Convolutional Neural Network (CNN), k-nearest Neighbor (k-NN), and Transfer Learning Technique (TLT). After extensive experiments, the proposed framework exhibits impressive accuracy and obtained classification accuracy of 96%, 94%, and 98% for CNN, k-NN, and TLT in distinguishing 10 different currency notes using radar signals.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"1 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0