
Latest articles in Computing

Exploiting recurrent graph neural networks for suffix prediction in predictive monitoring
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-09 | DOI: 10.1007/s00607-024-01315-9
Efrén Rama-Maneiro, Juan C. Vidal, Manuel Lama, Pablo Monteagudo-Lago

Predictive monitoring is a subfield of process mining that aims to predict how a running case will unfold in the future. One of its main challenges is forecasting the sequence of activities that will occur from a given point in time onwards (suffix prediction). Most approaches to the suffix prediction problem learn to predict the suffix only by learning how to predict the next activity, disregarding the structural information present in the process model. This paper proposes a novel architecture based on an encoder-decoder model with an attention mechanism that decouples the representation learning of the prefixes from the inference phase, predicting only the activities of the suffix. During the inference phase, this architecture is extended with a heuristic search algorithm that selects the most probable suffix according to both the structural information extracted from the process model and the information extracted from the log. Our approach has been tested on 12 public event logs against 6 different state-of-the-art proposals, showing that it significantly outperforms them.
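As a rough illustration of the inference step described above, the sketch below runs a beam-style heuristic search over next-activity distributions and adds a structural penalty for transitions that a process model does not allow. It is a minimal sketch under stated assumptions, not the authors' implementation: `next_probs` stands in for the trained encoder-decoder, and the transition graph, penalty weight, and end marker are hypothetical.

```python
import heapq
from math import log

END = "<end>"   # hypothetical end-of-case marker

def suffix_beam_search(prefix, next_probs, allowed_transitions,
                       beam_width=3, max_len=20, structure_weight=0.5):
    """Pick the most probable suffix by combining log-probabilities from a
    next-activity model with a penalty derived from the process-model structure."""
    beams = [(0.0, [])]          # (accumulated negative log-score, suffix so far)
    completed = []
    for _ in range(max_len):
        candidates = []
        for neg_score, suffix in beams:
            last = (prefix + suffix)[-1]
            for act, p in next_probs(prefix + suffix).items():
                score = neg_score - log(max(p, 1e-12))
                # Structural heuristic: penalise moves the process model forbids.
                if act != END and act not in allowed_transitions.get(last, set()):
                    score += structure_weight
                if act == END:
                    completed.append((score, suffix))
                else:
                    candidates.append((score, suffix + [act]))
        beams = heapq.nsmallest(beam_width, candidates)
        if not beams:
            break
    pool = completed or list(beams)
    return min(pool)[1]

# Toy usage with a hand-written next-activity distribution and process graph.
graph = {"A": {"B"}, "B": {"C"}, "C": set()}
def toy_next(seq):
    if seq[-1] == "A":
        return {"B": 0.6, "C": 0.3, END: 0.1}
    if seq[-1] == "B":
        return {"C": 0.7, END: 0.3}
    return {END: 1.0}
print(suffix_beam_search(["A"], toy_next, graph))   # -> ['B', 'C']
```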

Citations: 0
Deep learning-based classification and application test of multiple crop leaf diseases using transfer learning and the attention mechanism
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-08 | DOI: 10.1007/s00607-024-01308-8
Yifu Zhang, Qian Sun, Ji Chen, Huini Zhou

Crop diseases are among the major natural disasters in agricultural production: they seriously restrict the growth and development of crops and threaten food security. Timely classification, accurate identification, and the application of methods suited to the situation can effectively prevent and control crop diseases, improving the quality of agricultural products. Given the huge variety of crops and diseases and the differences in disease characteristics at each stage, current deep learning-based convolutional neural network models must meet higher requirements to classify crop diseases accurately, and a new architecture is needed to improve recognition. Therefore, in this study we optimized a deep learning-based classification model for multiple crop leaf diseases by combining transfer learning and the attention mechanism, and deployed the modified model on a smartphone for testing. A dataset containing 10 crop types and 61 disease types at different severity levels was established, and an algorithm structure based on ResNet50 was designed using transfer learning and the SE attention mechanism. The classification performance of the different improvement methods was compared through model training. Results indicate that the average accuracy of the proposed TL-SE-ResNet50 model is increased by 7.7%, reaching 96.32%. The model was also integrated into a smartphone application, where the test accuracy reaches 94.8% and the average response time is 882 ms. The proposed improved model identifies the diseases and conditions of multiple crops effectively, and the application meets farmers' needs for portable use. This study can serve as a reference for further crop disease management research in agricultural production.
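The abstract does not give the exact architecture, but the general recipe (a pretrained ResNet50 backbone, an SE attention block, and a new head for 61 disease classes) can be sketched as below. This is a minimal sketch assuming PyTorch and torchvision (0.13 or later for the weights enum); layer placement, the freezing policy, and hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight feature channels with a learned gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                         # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze: global average pool
        return x * w.unsqueeze(-1).unsqueeze(-1)  # excite: channel-wise rescale

def build_tl_se_resnet50(num_classes=61):
    """Pretrained ResNet50 features (transfer learning) + SE block + new head."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in backbone.parameters():               # freeze the transferred features
        p.requires_grad = False
    feat_dim = backbone.fc.in_features            # 2048 for ResNet50
    return nn.Sequential(
        backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
        backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4,
        SEBlock(feat_dim),                        # attention on the last feature map
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(feat_dim, num_classes))         # trainable head for 61 classes

model = build_tl_se_resnet50()
logits = model(torch.randn(2, 3, 224, 224))       # -> shape (2, 61)
```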

Citations: 0
Optimizing pre-copy live virtual machine migration in cloud computing using machine learning-based prediction model
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-08 | DOI: 10.1007/s00607-024-01318-6
Raseena M. Haris, Mahmoud Barhamgi, Armstrong Nhlabatsi, Khaled M. Khan

One of the preconditions for efficient cloud computing services is the continuous availability of services to clients. However, services may become temporarily unavailable for various reasons, including routine maintenance, load balancing, cyber-attacks, power management, fault tolerance, emergency incident response, and resource usage. Live Virtual Machine Migration (LVM) is an option to address service unavailability by moving virtual machines between hosts without disrupting running services. Pre-copy memory migration is a common LVM approach used in cloud systems, but it faces challenges due to the high rate of frequently updated memory pages known as dirty pages. Transferring these dirty pages during pre-copy migration prolongs the overall migration time. If a large number of memory pages remain after a predefined number of page-transfer iterations, the stop-and-copy phase is initiated, which significantly increases downtime and negatively impacts service availability. To mitigate this issue, we introduce a prediction-based approach that optimizes the migration process by dynamically halting the iteration phase when the predicted downtime falls below a predefined threshold. Our proposed machine learning method was rigorously evaluated through experiments conducted on a dedicated testbed using KVM/QEMU technology, involving different VM sizes and memory-intensive workloads. A comparative analysis against previously proposed pre-copy methods and the default migration approach reveals a remarkable improvement, with an average 64.91% reduction in downtime across different RAM configurations under high-write-intensive workloads, along with an average reduction in total migration time of approximately 85.81%. These findings underscore the practical advantages of our method in reducing service disruptions during live virtual machine migration in cloud systems.
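To make the halting rule concrete, the toy loop below iterates pre-copy rounds and stops as soon as an estimated stop-and-copy downtime drops below a threshold. It is a sketch of the idea only: `predicted_downtime` is a simple transfer-time estimate standing in for the trained regressor, and the dirty-page model, page size, and bandwidth figures are assumptions rather than values from the paper.

```python
def predicted_downtime(dirty_pages, page_size_kb=4, bandwidth_mbps=1000):
    """Stand-in for the learned model: time to copy the remaining dirty pages.
    A real predictor would be trained on workload, RAM size and dirty-rate features."""
    bits = dirty_pages * page_size_kb * 1024 * 8
    return bits / (bandwidth_mbps * 1e6)          # seconds

def precopy_migration(total_pages, dirty_rate, pages_per_round,
                      downtime_threshold=0.3, max_iters=30):
    """Iterative pre-copy: keep copying dirty pages while the predicted
    stop-and-copy downtime is above the threshold, then halt and finish."""
    remaining = total_pages                       # the first round copies all RAM
    for it in range(1, max_iters + 1):
        copied = min(remaining, pages_per_round)
        # Pages dirtied by the guest while this round was being transferred:
        remaining = remaining - copied + int(dirty_rate * copied)
        est = predicted_downtime(remaining)
        print(f"iter {it}: {remaining} dirty pages left, predicted downtime {est:.3f}s")
        if est <= downtime_threshold:
            break                                 # dynamic halt -> stop-and-copy phase
    return remaining                              # pages copied during the downtime

precopy_migration(total_pages=2_000_000, dirty_rate=0.2, pages_per_round=500_000)
```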

Citations: 0
A clarity and fairness aware framework for selecting workers in competitive crowdsourcing tasks
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-06 | DOI: 10.1007/s00607-024-01316-8
Seyyed Javad Bozorg Zadeh Razavi, Haleh Amintoosi, Mohammad Allahbakhsh

Crowdsourcing is a powerful technique for accomplishing tasks that are difficult for machines but easy for humans. However, ensuring the quality of the workers who participate in a task is a major challenge. Most existing studies focus on selecting suitable workers based on their attributes and the task requirements, while neglecting the requester's characteristics as a key factor in the crowdsourcing process. In this paper, we address this gap by considering the requester's preferences and behavior in crowdsourcing systems with competition, where the requester chooses only one worker's contribution as the final answer. A model is proposed in which the requester's characteristics are taken into account when finding suitable workers. We also propose new definitions of requester clarity and fairness, together with models and formulations that employ them, alongside task and worker attributes, to find more suitable workers. We evaluated the efficacy of the proposed model by analyzing a real-world dataset and compared it with two state-of-the-art approaches. Our results demonstrate the superiority of the proposed method in assigning the most suitable workers.
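One plausible way to read the abstract is as a worker-ranking function that mixes worker-task fit with the requester's clarity and fairness scores. The snippet below is a hedged sketch of that idea: the attributes, weights, and scoring formula are illustrative assumptions, not the paper's actual models or formulations.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    skills: set
    reputation: float          # 0..1, e.g. from past accepted contributions

@dataclass
class Requester:
    clarity: float             # 0..1, e.g. how well past tasks were specified
    fairness: float            # 0..1, e.g. share of contributions judged fairly

def suitability(worker, task_skills, requester, weights=(0.5, 0.2, 0.15, 0.15)):
    """Weighted score mixing worker-task fit with requester characteristics."""
    skill_match = len(worker.skills & task_skills) / max(len(task_skills), 1)
    w_skill, w_rep, w_clarity, w_fair = weights
    return (w_skill * skill_match + w_rep * worker.reputation
            + w_clarity * requester.clarity + w_fair * requester.fairness)

def select_workers(workers, task_skills, requester, k=3):
    """Rank the candidate pool and keep the top-k workers for the task."""
    return sorted(workers, key=lambda w: suitability(w, task_skills, requester),
                  reverse=True)[:k]

# Toy usage.
pool = [Worker("ann", {"nlp", "python"}, 0.9), Worker("bob", {"java"}, 0.7),
        Worker("eve", {"python"}, 0.4)]
req = Requester(clarity=0.8, fairness=0.6)
print([w.name for w in select_workers(pool, {"python", "nlp"}, req, k=2)])
```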

Citations: 0
A new approach for service activation management in fog computing using Cat Swarm Optimization algorithm
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-04 | DOI: 10.1007/s00607-024-01302-0
Sayed Mohsen Hashemi, Amir Sahafi, Amir Masoud Rahmani, Mahdi Bohlouli

Today, with the increasing expansion of IoT devices and the growing number of user requests, processing their demands in computational environments has become increasingly challenging. The large volume of user requests and the distribution of tasks among computational resources often result in disordered energy consumption and increased latency. The correct allocation of resources and the reduction of energy consumption in fog computing are still significant challenges in this field, and improving resource management methods can provide better services for users. In this article, the metaheuristic Cat Swarm Optimization (CSO) algorithm is used with the aim of more efficient resource allocation and service activation management. User requests are received by a request evaluator, prioritized, and efficiently executed on fog resources using the container live migration technique. Container live migration moves services and places them better on fog resources, avoiding unnecessary activation of physical resources. The proposed method uses a resource manager to identify and classify available resources, aiming to determine the initial capacity of physical fog resources. The performance of the proposed method has been tested and evaluated in iFogSim against six metaheuristic algorithms, namely Particle Swarm Optimization (PSO), Ant Colony Optimization, the Grasshopper Optimization algorithm, the Genetic algorithm, the Cuckoo Optimization algorithm, and Gray Wolf Optimization. The proposed method shows superior efficiency in energy consumption, execution time, latency, and network lifetime compared to the other algorithms.
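For readers unfamiliar with Cat Swarm Optimization, the sketch below shows a discrete CSO loop applied to a toy service-to-node placement problem, with cats split between seeking mode (local mutation) and tracing mode (moving toward the best solution). The fitness function, penalty weight, and parameters are illustrative assumptions; the paper's actual objective and encoding may differ.

```python
import random

def fitness(assign, demands, capacities):
    """Lower is better: active nodes (an energy proxy) plus an overload penalty."""
    load = [0.0] * len(capacities)
    for svc, node in enumerate(assign):
        load[node] += demands[svc]
    active = sum(1 for l in load if l > 0)
    overload = sum(max(0.0, l - c) for l, c in zip(load, capacities))
    return active + 10.0 * overload

def cat_swarm_placement(demands, capacities, n_cats=20, iters=100,
                        mixture_ratio=0.3, smp=5, seed=1):
    """Discrete Cat Swarm Optimization sketch for placing services on fog nodes.
    mixture_ratio is the fraction of cats put in tracing mode each iteration."""
    rng = random.Random(seed)
    n_svc, n_nodes = len(demands), len(capacities)
    cats = [[rng.randrange(n_nodes) for _ in range(n_svc)] for _ in range(n_cats)]
    best = min(cats, key=lambda c: fitness(c, demands, capacities))
    for _ in range(iters):
        for i, cat in enumerate(cats):
            if rng.random() < mixture_ratio:
                # Tracing mode: drift toward the global best by copying positions.
                cats[i] = [b if rng.random() < 0.5 else c for b, c in zip(best, cat)]
            else:
                # Seeking mode: spawn smp mutated copies and keep the fittest one.
                copies = []
                for _ in range(smp):
                    copy = cat[:]
                    copy[rng.randrange(n_svc)] = rng.randrange(n_nodes)
                    copies.append(copy)
                cats[i] = min(copies, key=lambda c: fitness(c, demands, capacities))
        best = min(cats + [best], key=lambda c: fitness(c, demands, capacities))
    return best, fitness(best, demands, capacities)

# Toy usage: place 6 services on 4 fog nodes.
placement, score = cat_swarm_placement(demands=[2, 3, 1, 4, 2, 2],
                                       capacities=[6, 6, 6, 6])
print(placement, score)
```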

Citations: 0
Enhancing computation reuse efficiency in ICN-based edge computing by modifying content store table structure
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-03 | DOI: 10.1007/s00607-024-01312-y
Atiyeh Javaheri, Ali Bohlooli, Kamal Jamshidi

In edge computing, repetitive computations are a common occurrence. However, the traditional TCP/IP architecture used in edge computing fails to identify these repetitions, so redundant computations are recomputed by edge resources. To address this issue and enhance the efficiency of edge computing, Information-Centric Networking (ICN)-based edge computing is employed. The ICN architecture leverages its forwarding and naming-convention features to recognize repetitive computations and direct them to the appropriate edge resources, thereby promoting “computation reuse”. This approach significantly improves the overall effectiveness of edge computing. Dynamically generated computations often experience prolonged response times, and naming conventions become crucial to establish and track connections between input requests and the edge. Because unique IDs are embedded in these naming conventions, each computing request with identical input data is treated as distinct, rendering ICN’s aggregation feature unusable. In this study, we propose a novel approach that modifies the Content Store (CS) table so that computing requests with the same input data but different unique IDs, which produce identical outcomes, are treated as equivalent. The benefits of this approach include reduced distance and completion time and an increased hit ratio, as duplicate computations are no longer routed to edge resources but are instead served from the cache. Through simulations, we demonstrate that our method significantly enhances cache reuse compared to the default method with no reuse, achieving an average improvement of over 57%; the speed-up ratio of the enhancement amounts to 15%. Notably, our method surpasses previous approaches by exhibiting the lowest average completion time, particularly when dealing with lower request frequencies. These findings highlight the efficacy and potential of the proposed method in optimizing edge computing performance.
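The core trick, dropping the per-request unique ID from the Content Store key so that identical computations hit the same cached result, can be shown in a few lines. The naming scheme and data structures below are assumptions made for illustration, not the paper's actual CS table layout.

```python
import hashlib

class ContentStore:
    """CS table keyed by the computation name plus a digest of the input data,
    deliberately ignoring the per-request unique ID so that repeated computations
    are served from the cache instead of being forwarded to edge resources again."""
    def __init__(self):
        self.table = {}

    @staticmethod
    def reuse_key(interest_name, input_data):
        # e.g. interest_name = "/edge/resize/img/REQ-42": drop the trailing unique ID.
        function_path = "/".join(interest_name.split("/")[:-1])
        return (function_path, hashlib.sha256(input_data).hexdigest())

    def lookup(self, interest_name, input_data):
        return self.table.get(self.reuse_key(interest_name, input_data))

    def insert(self, interest_name, input_data, result):
        self.table[self.reuse_key(interest_name, input_data)] = result

# Two requests with different unique IDs but the same input share one cache entry.
cs = ContentStore()
cs.insert("/edge/resize/img/REQ-41", b"raw-image-bytes", "resized-1")
print(cs.lookup("/edge/resize/img/REQ-99", b"raw-image-bytes"))   # -> resized-1
```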

Citations: 0
Smart contracts auditing and multi-classification using machine learning algorithms: an efficient vulnerability detection in ethereum blockchain
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-03 | DOI: 10.1007/s00607-024-01314-w
Samia El Haddouti, Mohammed Khaldoune, Meryeme Ayache, Mohamed Dafir Ech-Cherif El Kettani

The adoption of Smart Contracts has revolutionized industries like DeFi and supply chain management, streamlining processes and enhancing transparency. However, ensuring their security is crucial due to their unchangeable nature, which makes them vulnerable to exploitation and errors. Neglecting security can lead to severe consequences such as financial losses and reputation damage. To address this, rigorous analytical processes are needed to evaluate Smart Contract security, despite challenges like cost and complexity associated with current tools. Following an empirical examination of current tools designed to identify vulnerabilities in Smart Contracts, this paper presents a robust and promising solution based on Machine Learning algorithms. The objective is to elevate the auditing and classification of Smart Contracts, building trust and confidence in Blockchain-based applications. By automating the security auditing process, the model not only reduces manual efforts and execution time but also ensures a comprehensive analysis, uncovering even the most complex security vulnerabilities that traditional tools may miss. Overall, the evaluation demonstrates that our proposed model surpasses conventional counterparts in terms of vulnerability detection performance, achieving an accuracy exceeding 98% with optimized execution times.
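A multi-class vulnerability classifier of this kind can be prototyped with a standard text-classification pipeline over opcode sequences. The sketch below assumes scikit-learn and uses a tiny hand-made dataset; the features, labels, and classifier choice are illustrative and do not reproduce the paper's model or its reported accuracy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Toy opcode traces; a real pipeline would disassemble contract bytecode and use
# audited labels (reentrancy, integer overflow, access control, ...).
opcode_traces = [
    "PUSH1 CALL SSTORE JUMP CALL SSTORE",        # reentrancy-like pattern
    "PUSH1 ADD MUL ADD PUSH1 SSTORE",            # overflow-like pattern
    "PUSH1 CALLER EQ JUMPI SSTORE",              # access-control-like pattern
    "CALL SSTORE CALL SSTORE JUMP",
    "ADD ADD MUL SSTORE PUSH1",
    "CALLER EQ JUMPI PUSH1 SSTORE",
]
labels = ["reentrancy", "overflow", "access_control",
          "reentrancy", "overflow", "access_control"]

# Opcode n-grams as features, an ensemble of trees as the multi-class classifier.
clf = make_pipeline(
    TfidfVectorizer(token_pattern=r"\S+", ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(opcode_traces, labels)
print(clf.predict(["CALL SSTORE JUMP CALL SSTORE"]))   # expected: ['reentrancy']
```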

Citations: 0
Modeling end-to-end delays in TSCH wireless sensor networks using queuing theory and combinatorics
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-02 | DOI: 10.1007/s00607-024-01313-x
Yevhenii Shudrenko, Andreas Timm-Giel

Wireless communication offers significant advantages over wired solutions in terms of flexibility, coverage, and maintenance, and is being actively deployed in industry. IEEE 802.15.4 standardizes the Physical and Medium Access Control (MAC) layers for Low Power and Lossy Networks (LLNs) and features Timeslotted Channel Hopping (TSCH) for reliable, low-latency communication with scheduling capabilities. Multiple scheduling schemes have been proposed to address Quality of Service (QoS) in challenging scenarios. However, most of them are evaluated through simulations and experiments, which are often time-consuming and may be difficult to reproduce. Analytical modeling of TSCH performance is lacking, as the state of the art considers only one-hop communication with simplified traffic patterns. This work proposes a new framework based on queuing theory and combinatorics to evaluate end-to-end delays in multihop TSCH networks with arbitrary topology, traffic, and link conditions. The framework is validated in simulations using OMNeT++ and shows below 6% root-mean-square error (RMSE), providing a quick and reliable latency estimation tool to support decision-making and enable formalized comparison of existing scheduling solutions.
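To give a flavour of the delay analysis, the sketch below builds a per-hop delay distribution (uniform arrival within the slotframe, a dedicated TX cell, geometric retransmissions) and convolves the hops to obtain an end-to-end distribution. These modelling assumptions and parameters are mine for illustration; the paper's framework is more general.

```python
import numpy as np

def hop_delay_pmf(slotframe_len, tx_slot, p_success, max_retries=5):
    """PMF (in timeslots) of one hop's delay: a packet arriving uniformly at random
    in the slotframe waits for the hop's dedicated TX cell, and each failed attempt
    costs one extra slotframe (geometric retransmissions, truncated)."""
    waits = [(tx_slot - arrival) % slotframe_len for arrival in range(slotframe_len)]
    pmf = np.zeros(slotframe_len * (max_retries + 2))
    for w in waits:
        for k in range(max_retries + 1):                 # k = number of failed attempts
            prob = (1.0 / slotframe_len) * ((1 - p_success) ** k) * p_success
            pmf[w + k * slotframe_len + 1] += prob       # +1 slot for the transmission
    return pmf / pmf.sum()                               # renormalise the truncation

def end_to_end_pmf(hops):
    """Convolve the per-hop delay PMFs to get the end-to-end delay distribution."""
    total = np.array([1.0])
    for slotframe_len, tx_slot, p in hops:
        total = np.convolve(total, hop_delay_pmf(slotframe_len, tx_slot, p))
    return total

# Toy 3-hop path: slotframe of 11 slots, different scheduled cells and link PDRs.
pmf = end_to_end_pmf([(11, 3, 0.9), (11, 7, 0.8), (11, 1, 0.95)])
slots = np.arange(len(pmf))
mean = float((slots * pmf).sum())
p95 = int(slots[np.searchsorted(np.cumsum(pmf), 0.95)])
print(f"mean delay ~ {mean:.1f} slots, 95th percentile ~ {p95} slots")
```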

Citations: 0
Enhancing virtual machine placement efficiency in cloud data centers: a hybrid approach using multi-objective reinforcement learning and clustering strategies
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-02 | DOI: 10.1007/s00607-024-01311-z
Arezoo Ghasemi, Abolfazl Toroghi Haghighat, Amin Keshavarzi

Deploying virtual machines poses a significant challenge for cloud data centers, requiring careful consideration of various objectives such as minimizing energy consumption and resource wastage, ensuring load balancing, and meeting service level agreements. While researchers have explored multi-objective methods to tackle virtual machine placement, evaluating potential solutions remains complex in such scenarios. In this paper, we introduce two novel multi-objective algorithms tailored to address this challenge. The VMPMFuzzyORL method employs reinforcement learning for virtual machine placement, with candidate solutions assessed using a fuzzy system. While practical, incorporating fuzzy systems introduces notable runtime overhead. To mitigate this, we propose MRRL, an alternative approach that first clusters virtual machines using the k-means algorithm and then optimizes placement with a customized reinforcement learning strategy driven by multiple reward signals. Extensive simulations highlight the significant advantages of these approaches over existing techniques, particularly in energy efficiency, resource utilization, load balancing, and overall execution time.
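A compressed sketch of the MRRL idea (cluster first, then learn placements per cluster with a multi-signal reward) is shown below, assuming numpy and scikit-learn. The reward terms, their weights, the capacity fallback, and the bandit-style update are all illustrative simplifications rather than the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy VMs (cpu, mem demand in [0, 1]) and hosts with unit capacity per resource.
vms = rng.uniform(0.05, 0.45, size=(24, 2))
n_hosts, n_clusters = 8, 4

# Step 1: group VMs with similar demand profiles (the clustering stage).
clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vms)

def reward(load):
    """Multiple reward signals folded into one scalar: fewer active hosts (energy),
    less imbalance between cpu and mem on each host (wastage), balanced cpu load."""
    active = load.any(axis=1)
    energy = -active.sum() / len(load)
    wastage = -np.abs(load[:, 0] - load[:, 1]).sum() / len(load)
    balance = -load[:, 0].std()
    return 2.0 * energy + 1.0 * wastage + 1.0 * balance

# Step 2: learn a (VM cluster -> host) preference table with epsilon-greedy updates.
Q = np.zeros((n_clusters, n_hosts))
alpha, eps = 0.1, 0.2
for episode in range(300):
    load = np.zeros((n_hosts, 2))
    for vm, c in zip(vms, clusters):
        a = rng.integers(n_hosts) if rng.random() < eps else int(Q[c].argmax())
        if np.any(load[a] + vm > 1.0):           # capacity check: fall back to freest host
            a = int((1.0 - load.max(axis=1)).argmax())
        load[a] += vm
        Q[c, a] += alpha * (reward(load) - Q[c, a])
print("preferred host per VM cluster:", Q.argmax(axis=1))
```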

Citations: 0
Packet header-based reweight-long short term memory (Rew-LSTM) method for encrypted network traffic classification
IF 3.7 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-02 | DOI: 10.1007/s00607-024-01306-w
Jiangang Hou, Xin Li, Hongji Xu, Chun Wang, Lizhen Cui, Zhi Liu, Changzhen Hu

With the development of Internet technology, cyberspace security has become a research hotspot, and network traffic classification is closely related to it. In this paper, the problem of classification based on raw traffic data is investigated. This involves granularity analysis of packets, separating packet headers from payloads, complementing and aligning packet headers, and converting them into structured data with three representation types: bit, byte, and segmented protocol fields. On this basis, we propose the Rew-LSTM classification model and run experiments on publicly available datasets of encrypted traffic. The results show that excellent performance can be obtained when using only the data in packet headers for multi-class classification, especially when the data is represented as bits, outperforming state-of-the-art methods. In addition, we propose a global normalization method, and experimental results show that it outperforms feature-specific normalization for both Tor traffic and regular encrypted traffic.
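A bare-bones version of the header-bits-into-an-LSTM pipeline looks like the sketch below (assuming PyTorch). It uses a plain LSTM as a stand-in; the paper's reweighting mechanism, header length, class set, and the global normalization step are not reproduced here.

```python
import torch
import torch.nn as nn

def header_to_bits(header: bytes, max_len: int = 40) -> torch.Tensor:
    """Pad or trim the packet header to max_len bytes and expand it into a 0/1
    bit sequence, the representation reported to work best in the paper."""
    padded = header[:max_len].ljust(max_len, b"\x00")
    bits = [(byte >> i) & 1 for byte in padded for i in range(7, -1, -1)]
    return torch.tensor(bits, dtype=torch.float32).unsqueeze(-1)   # (bits, 1)

class HeaderLSTM(nn.Module):
    """LSTM over the bit sequence of a packet header, then a classification head."""
    def __init__(self, hidden=64, num_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                         # x: (batch, bits, 1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                   # logits per traffic class

# Toy usage: two fake IP headers -> class logits.
batch = torch.stack([header_to_bits(b"\x45\x00\x00\x3c\x1c\x46\x40\x00\x40\x06"),
                     header_to_bits(b"\x45\x00\x05\xdc\x00\x00\x40\x00\x38\x11")])
print(HeaderLSTM()(batch).shape)                  # torch.Size([2, 8])
```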

Citations: 0