
Latest publications in Computing

Energy-aware and trust-based cluster head selection in healthcare WBANs with enhanced GWO optimization
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-20 · DOI: 10.1007/s00607-024-01339-1
C. Venkata Subbaiah, K. Govinda

This paper describes a comprehensive methodology for improving Wireless Body Area Networks (WBANs) in healthcare systems using Enhanced Gray Wolf Optimization (GWO). The methodology begins with WBAN initialization and the configuration of critical network parameters. To improve network performance and trustworthiness, it integrates direct, historical, and energy trust calculations, together with energy consumption models based on distance and transmission type. An Enhanced GWO approach then selects optimal cluster heads, guided by a customized fitness function that balances trust and energy efficiency. The simulations were carried out in MATLAB R2022b on a PC with 16 GB of RAM. The methodology outperforms existing methods in terms of throughput, computation time, and residual energy, providing improved data routing, energy efficiency, and trustworthiness that make it a valuable asset in WBAN-based healthcare systems.
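As a rough illustration of the kind of selection criterion described, the sketch below scores candidate cluster heads with a weighted trust-energy fitness and applies the standard grey-wolf position update; the equal-weight trust aggregation, the `w_trust`/`w_energy` split, and the field names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def fitness(node, w_trust=0.6, w_energy=0.4):
    """Score a candidate cluster head: higher trust and residual energy win.
    The trust aggregation and weight split are assumptions for illustration."""
    trust = (node["direct_trust"] + node["historical_trust"] + node["energy_trust"]) / 3.0
    energy = node["residual_energy"] / node["initial_energy"]
    return w_trust * trust + w_energy * energy

def gwo_step(wolves, alpha, beta, delta, a):
    """Standard grey-wolf position update toward the three best wolves;
    `a` decays from 2 to 0 over the iterations."""
    updated = []
    for X in wolves:
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(*X.shape), np.random.rand(*X.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            moves.append(leader - A * np.abs(C * leader - X))
        updated.append(sum(moves) / 3.0)
    return updated
```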

Citations: 0
A survey on the cold start latency approaches in serverless computing: an optimization-based perspective
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-17 · DOI: 10.1007/s00607-024-01335-5
Mohsen Ghorbian, Mostafa Ghobaei-Arani

Serverless computing is one of the latest technologies to receive wide attention from researchers and companies in recent years, since it provides dynamic scalability and a clear economic model: users pay only for the time they actually use resources. This approach has several benefits, including optimized costs and resource utilization; however, cold starts remain a concern and a challenge. Various studies in academia and industry have addressed this problem, which poses a significant research challenge. This paper comprehensively reviews recent cold start research in serverless computing and presents a detailed taxonomy of serverless strategies for dealing with cold start latency. The proposed classification considers two main approaches, Optimizing Loading Times (OLT) and Optimizing Resource Usage (ORU), each with several subsets. OLT is divided into container-based and checkpoint-based approaches, while ORU is divided into machine learning (ML)-based, optimization-based, and heuristic-based approaches. After analyzing current methods, we categorize and investigate them according to their characteristics and commonalities, and we examine open challenges and directions for future research.
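The survey's two-level taxonomy can be summarized as a small data structure; the sketch below lists only the categories the abstract itself names.

```python
# The survey's two-level taxonomy of cold-start mitigation approaches.
COLD_START_TAXONOMY = {
    "Optimizing Loading Times (OLT)": [
        "container-based",
        "checkpoint-based",
    ],
    "Optimizing Resource Usage (ORU)": [
        "machine learning (ML)-based",
        "optimization-based",
        "heuristic-based",
    ],
}
```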

Citations: 0
Multi-objective cuckoo optimizer for task scheduling to balance workload in cloud computing
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-17 · DOI: 10.1007/s00607-024-01332-8
Brototi Mondal, Avishek Choudhury

A cloud load balancer must be able to adapt its approach to handle various task types and a dynamic environment. To prevent computing resources from being over- or under-utilized, an efficient task scheduling system is essential for optimal use of resources in cloud computing. Task scheduling can be framed as an optimization problem; because task scheduling in the cloud is NP-complete, gradient-based methods cannot find an optimal solution in a reasonable amount of time, so the problem should be solved with evolutionary and meta-heuristic techniques. This study proposes a novel approach to task scheduling using the Cuckoo Optimization algorithm. With this approach, the load is effectively distributed among the available virtual machines while keeping the total response time and the average task processing time (PT) low. Comparative simulation results show that the proposed strategy outperforms state-of-the-art techniques such as Particle Swarm Optimization, Ant Colony Optimization, Genetic Algorithms, and Stochastic Hill Climbing.
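A minimal sketch of cuckoo-search-style task scheduling against a makespan objective; the discrete Lévy-flight stand-in, the nest count, and the abandonment fraction `pa` are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def makespan(assign, task_len, vm_speed):
    """Completion time of the busiest VM under a task -> VM assignment."""
    loads = np.zeros(len(vm_speed))
    np.add.at(loads, assign, task_len)            # total work placed on each VM
    return (loads / vm_speed).max()

def cuckoo_schedule(task_len, vm_speed, n_nests=25, pa=0.25, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n_tasks, n_vms = len(task_len), len(vm_speed)
    nests = [rng.integers(n_vms, size=n_tasks) for _ in range(n_nests)]
    cost = lambda s: makespan(s, task_len, vm_speed)
    best = min(nests, key=cost)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            cand = nest.copy()
            # Heavy-tailed step count as a discrete stand-in for a Levy flight.
            k = max(1, int(abs(rng.standard_cauchy())) % n_tasks)
            idx = rng.choice(n_tasks, size=k, replace=False)
            cand[idx] = rng.integers(n_vms, size=k)
            if cost(cand) < cost(nest):
                nests[i] = cand
        nests.sort(key=cost)                      # abandon the worst fraction pa
        for j in range(int(pa * n_nests)):
            nests[-(j + 1)] = rng.integers(n_vms, size=n_tasks)
        best = min(nests + [best], key=cost)
    return best, cost(best)
```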

Citations: 0
Data and resource aware incremental ML training in support of pervasive applications
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-16 · DOI: 10.1007/s00607-024-01338-2
Thanasis Moustakas, Athanasios Tziouvaras, Kostas Kolomvatsos

Nowadays, the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms increasingly shapes the performance of innovative systems. At the same time, the advent of the Internet of Things (IoT) and Edge Computing (EC) as means of placing computational resources close to users creates the need for new models in the training of ML schemes, because the devices/nodes placed there have limited computational capabilities. IoT devices and EC nodes exhibit fewer capabilities than the Cloud back end, which could be adopted for more complex training over vast volumes of data. Ideally, the IoT-EC ecosystem should offer at least basic training capabilities in order to reduce latency and meet the needs of near-real-time applications. Motivated by this need, this paper proposes a model that saves time in the training process by focusing on the training dataset and its statistical description. We do not dive into the architecture of any specific ML model, as we target a generic scheme that can be applied to any ML module. We monitor the statistics of the training dataset and the loss during training, and detect when training can be stopped because the data not yet consumed by the model is unlikely to contribute significantly. We argue that our approach should be applied only when the application can accept a negligible decrease in accuracy in exchange for the time and resources saved during training. We provide two algorithms implementing this approach and an extensive experimental evaluation over multiple supervised ML models that reveals the benefits of the proposed scheme and its constraints.
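A minimal sketch of the general idea, monitoring per-batch loss improvement and stopping once the marginal gain of consuming more data drops below a tolerance; the `partial_fit` API returning the current loss, the window size, and the threshold are assumptions for illustration, not the paper's two algorithms.

```python
def incremental_train(model, batches, tol=1e-3, window=5):
    """Consume training batches incrementally; stop early once recent batches
    no longer improve the loss enough to justify the remaining data."""
    history = []
    for i, batch in enumerate(batches):
        loss = model.partial_fit(batch)       # assumed API returning current loss
        history.append(loss)
        if len(history) > window:
            # Average per-batch improvement over the recent window.
            gain = (history[-window - 1] - history[-1]) / window
            if gain < tol:
                print(f"stopping after batch {i}: marginal gain {gain:.5f} < {tol}")
                break
    return model
```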

Citations: 0
Community detection in multiplex continuous weighted nodes networks using an extension of the stochastic block model
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-15 · DOI: 10.1007/s00607-024-01341-7
Abir El Haj

The stochastic block model (SBM) is a probabilistic model for clustering individuals within a simple network based on their social behavior. Such a network consists of individuals and of edges representing the presence or absence of a relationship between each pair of individuals. This paper extends the traditional stochastic block model to accommodate multiplex networks with continuous weighted nodes. These networks are characterized by multiple relationship types occurring simultaneously among the network's individuals, with each individual carrying a weight that represents its influence in the network. We introduce an inference method based on a variational expectation-maximization algorithm to estimate the model parameters and classify the individuals. Finally, we demonstrate the effectiveness of our approach through applications on simulated and real data, highlighting its main characteristics.
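A compact sketch of variational EM for an SBM with Gaussian edge weights on a single layer; the Gaussian weight model, the fixed variance, and the update order are simplifying assumptions, since the paper's model additionally handles multiplex layers and node weights.

```python
import numpy as np
from scipy.stats import norm

def vem_sbm(W, K, iters=50, sigma=1.0, seed=0):
    """Variational EM for an SBM whose edge weights W[i, j] are modelled as
    Gaussian with a block-dependent mean (self-edges kept for brevity)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    tau = rng.dirichlet(np.ones(K), size=n)          # responsibilities, shape (n, K)
    for _ in range(iters):
        # M-step: cluster proportions and per-block mean weights.
        pi = tau.mean(axis=0)
        denom = tau.T @ np.ones((n, n)) @ tau        # sum_ij tau_iq * tau_jl
        mu = (tau.T @ W @ tau) / np.maximum(denom, 1e-10)
        # E-step: fixed-point update,
        # log tau_iq ~ log pi_q + sum_{j,l} tau_jl * log N(W_ij; mu_ql, sigma).
        logN = norm.logpdf(W[:, :, None, None], mu[None, None, :, :], sigma)
        log_tau = np.log(pi)[None, :] + np.einsum("ijql,jl->iq", logN, tau)
        log_tau -= log_tau.max(axis=1, keepdims=True)
        tau = np.exp(log_tau)
        tau /= tau.sum(axis=1, keepdims=True)
    return tau.argmax(axis=1), mu                    # hard labels and block means
```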

Citations: 0
Automatic video captioning using tree hierarchical deep convolutional neural network and ASRNN-bi-directional LSTM
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-13 · DOI: 10.1007/s00607-024-01334-6
N. Kavitha, K. Ruba Soundar, R. Karthick, J. Kohila

The development of automatic video understanding technology is increasingly needed due to the rise of mass video data such as surveillance and personal videos. Several methods have been proposed for automatic video captioning, but existing approaches suffer from long processing times over large numbers of frames and from overfitting, which makes automating caption generation difficult and degrades the accuracy of the final captions. To overcome these issues, this paper proposes automatic video captioning using a Tree Hierarchical Deep Convolutional Neural Network and an attention segmental recurrent neural network with bi-directional Long Short-Term Memory (ASRNN-bi-directional LSTM). The captioning pipeline has two phases: a feature encoder and a decoder. In the encoder phase, the tree hierarchical Deep Convolutional Neural Network (Tree CNN) encodes a vector representation of the video and extracts three kinds of features. In the decoder phase, the attention segmental recurrent neural network (ASRNN) decodes the vector into a textual description. ASRNN-based methods struggle with long-term dependencies; to address this, the model attends over all words generated by the bi-directional LSTM caption generator, since the global context captured in the generator's hidden state is local and incomplete. Golden Eagle Optimization is exploited to tune the ASRNN weight parameters. The proposed method is implemented in Python and achieves 34.89%, 29.06%, and 20.78% higher accuracy and 23.65%, 22.10%, and 29.68% lower Mean Squared Error compared to existing methods.
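A skeletal PyTorch sketch of the encoder-decoder shape described above, a CNN-style encoder feeding attention and a bi-directional LSTM that emits word logits; the layer sizes are illustrative, and the Tree CNN, segmental attention, and Golden Eagle Optimization details are omitted.

```python
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    """Frame features -> encoder -> attention over frames -> bi-LSTM -> word logits."""
    def __init__(self, feat_dim=512, hidden=256, vocab=10000):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)            # stand-in for the Tree CNN
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab)               # 2x: forward + backward states

    def forward(self, frame_feats, max_len=20):
        # frame_feats: (batch, n_frames, feat_dim) pre-extracted visual features
        enc = torch.tanh(self.encoder(frame_feats))
        ctx, _ = self.attn(enc, enc, enc)                     # self-attention over frames
        dec, _ = self.decoder(ctx[:, :max_len, :])
        return self.out(dec)                                  # (batch, steps, vocab) logits

logits = CaptionNet()(torch.randn(2, 30, 512))                # toy forward pass
```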

Citations: 0
Improved snake optimization-based task scheduling in cloud computing
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-07 · DOI: 10.1007/s00607-024-01323-9
Vijay Kumar Damera, G. Vanitha, B. Indira, G. Sirisha, Ramesh Vatambeti

The recent focus on cloud computing stems from its evolving platform and features such as multiplexing users on shared infrastructure and on-demand resource computation. Efficient use of computing resources is crucial in cloud computing, and effective task-scheduling methods are essential to optimize cloud system performance. Scheduling virtual machines in dynamic cloud environments, marked by uncertainty and constant change, is challenging, and despite many efforts to improve cloud task scheduling, it remains an unresolved issue. Various scheduling approaches have been proposed, but researchers continue to refine performance by incorporating diverse quality-of-service characteristics that enhance overall cloud performance. This study introduces an innovative task-scheduling algorithm that improves upon existing methods, particularly in quality-of-service criteria such as makespan and energy efficiency. The proposed technique enhances the Snake Optimization Algorithm (SO) by incorporating sine chaos mapping, a spiral search strategy, and dynamic adaptive weights. These enhancements strengthen the algorithm's ability to escape local optima and improve its global search. Compared to other models, the proposed method improves cloud scheduling performance by 6%, 4.6%, and 3.27%, and it converges quickly to the optimal scheduling solution.
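A sketch of two of the enhancements named above, sine chaos mapping for population initialization and a logarithmic-spiral search step around the current best; the map parameter `mu` and the spiral constant `b` are illustrative assumptions.

```python
import numpy as np

def sine_chaos_population(n, dim, lo, hi, mu=0.99, seed=1):
    """Initialize the population with the sine chaotic map instead of uniform noise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, size=dim)
    pop = np.empty((n, dim))
    for i in range(n):
        x = mu * np.sin(np.pi * x)        # chaotic iterate stays inside (0, 1)
        pop[i] = lo + x * (hi - lo)       # map the iterate into the search bounds
    return pop

def spiral_step(x, best, b=1.0, rng=None):
    """Logarithmic-spiral move around the current best solution."""
    if rng is None:
        rng = np.random.default_rng()
    l = rng.uniform(-1.0, 1.0, size=x.shape)
    return np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
```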

Citations: 0
PP-PRNU: PRNU-based source camera attribution with privacy-preserving applications
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-06 · DOI: 10.1007/s00607-024-01330-w
Riyanka Jena, Priyanka Singh, Manoranjan Mohanty, Manik Lal Das

Tracing the origin of digital images is a crucial concern in digital image forensics, where accurately identifying the source of an image is essential because it provides important clues to investigative and law enforcement agencies. Photo Response Non-Uniformity (PRNU) based camera attribution is an effective forensic tool for identifying the source camera of a crime scene image. The PRNU approach helps investigators determine whether a specific camera captured a crime scene image using the Pearson correlation coefficient between the unique camera fingerprint and the PRNU noise. However, this approach raises privacy concerns: the camera fingerprint or the PRNU noise can be linked to non-crime images taken by the same camera, potentially disclosing the photographer's identity. To address this issue, we propose a novel PRNU-based source camera attribution scheme that enables forensic investigators to conduct criminal investigations while preserving privacy. In the proposed scheme, a camera fingerprint extracted from a set of known images and the PRNU noise extracted from the anonymous image are divided into multiple shares using Shamir's Secret Sharing (SSS). These shares are distributed to various cloud servers, where the correlation between the camera fingerprint and the PRNU noise is computed on a per-share basis. The partial correlation values are then combined into the final correlation value, which determines whether the camera took the image. The security analysis and experimental results demonstrate that the proposed scheme not only preserves privacy and ensures data confidentiality and integrity but is also computationally efficient compared to existing methods: it achieves similar source-attribution accuracy with a negligible decrease in performance relative to non-privacy-preserving methods, and it is computationally less expensive than state-of-the-art schemes. Our work advances image forensics research by addressing the combined need for accurate source identification and privacy protection; the privacy-preserving approach benefits scenarios where protecting the photographer's identity is crucial, such as whistleblower cases.
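A toy sketch of the share-wise computation: two quantized signals are Shamir-split over a prime field, each server computes a local dot product on its own shares, and Lagrange interpolation at zero over 2t+1 servers recovers the true dot product (the numerator of the correlation). The field size, polynomial degree, and quantization are illustrative assumptions.

```python
import random

P = 2_147_483_647       # prime field modulus (a Mersenne prime), illustrative
T = 1                   # sharing degree; products need 2*T + 1 servers

def share(secret, n_servers):
    """Shamir-split one field element via p(x) = secret + c1*x + ... + cT*x^T mod P."""
    coeffs = [secret] + [random.randrange(P) for _ in range(T)]
    return [sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in range(1, n_servers + 1)]

def lagrange_at_zero(points):
    """Interpolate p(0) from (x, y) samples over GF(P)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

n_servers = 2 * T + 1
u, v = [3, 1, 4], [2, 7, 1]                        # toy quantized signal vectors
u_sh = [share(a, n_servers) for a in u]            # u_sh[i][s]: share of u[i] at server s
v_sh = [share(a, n_servers) for a in v]
# Each server multiplies and sums its own shares -> a degree-2T sharing of <u, v>.
local = [sum(u_sh[i][s] * v_sh[i][s] for i in range(len(u))) % P
         for s in range(n_servers)]
dot = lagrange_at_zero([(s + 1, local[s]) for s in range(n_servers)])
assert dot == sum(a * b for a, b in zip(u, v))     # 3*2 + 1*7 + 4*1 = 17
```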

Citations: 0
Phasic parallel-network policy: a deep reinforcement learning framework based on action correlation
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-08-06 · DOI: 10.1007/s00607-024-01329-3
Jiahao Li, Tianhan Gao, Qingwei Mi

Reinforcement learning algorithms show significant variations in performance across different environments, and their instability and unpredictability have consistently hindered their generalization capabilities, making optimization of reinforcement learning itself a major research task. In this study, we address this issue by optimizing the algorithm itself rather than applying environment-specific optimizations. We start by tackling the uncertainty caused by mutual interference among actions, aiming to enhance overall performance. The Phasic Parallel-Network Policy (PPP) is a deep reinforcement learning framework that diverges from the traditional actor-critic policy method by grouping the action space based on action correlations. PPP incorporates parallel network structures and combines them with network optimization strategies. With the assistance of the value network, training is divided into specific stages, namely the Extra-group Policy Phase and the Inter-group Optimization Phase, breaking through the traditional unit learning structure. Experimental results indicate that PPP not only optimizes training effectiveness but also reduces the number of training steps, enhances sample efficiency, and significantly improves stability and generalization.
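A structural sketch of the core idea, parallel policy heads over groups of correlated actions sharing a common trunk, alongside the value network; the grouping and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ParallelPolicy(nn.Module):
    """Shared trunk with one policy head per group of correlated actions,
    plus the value head that coordinates the training phases."""
    def __init__(self, obs_dim, action_groups):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(128, g) for g in action_groups)
        self.value = nn.Linear(128, 1)

    def forward(self, obs):
        h = self.trunk(obs)
        logits = [head(h) for head in self.heads]   # per-group action logits
        return logits, self.value(h)

policy = ParallelPolicy(obs_dim=8, action_groups=[3, 2])   # two correlated groups
group_logits, value = policy(torch.randn(4, 8))
```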

Citations: 0
A cost, time, energy-aware workflow scheduling using adaptive PSO algorithm in a cloud–fog environment
IF 3.7 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-07-31 · DOI: 10.1007/s00607-024-01322-w
Gyan Singh, Amit K. Chaturvedi

Recent years have seen an exponential rise in data produced by Internet of Things (IoT) applications. Cloud servers were not designed for such extensive data, leading to challenges such as increased makespan, cost, bandwidth, energy consumption, and network latency. To address these, the cloud–fog environment has emerged as an extension of cloud servers, offering services closer to IoT devices. Scheduling workflow applications to optimize multiple conflicting objectives in cloud–fog is an NP-hard problem. Particle Swarm Optimization (PSO) is a good choice for multi-objective problems owing to its simplicity and rapid convergence, but it suffers from shortcomings such as premature convergence and stagnation. To address these challenges, we formalize a theoretical background for scheduling workflow applications with multiple conflicting objectives in the cloud–fog environment. We then propose an adaptive particle swarm optimization (APSO) algorithm with novel enhancements, including an S-shaped sigmoid function that dynamically decreases the inertia weight and a linear updating mechanism for the cognitive factor; their integration in cloud–fog environments has not been explored previously. This application addresses the distinctive challenges of workflow scheduling in cloud–fog systems, such as heterogeneous resource management, energy consumption, and increased cost. The effectiveness of APSO is evaluated on a real-world scientific workflow in a simulated cloud–fog environment and compared with four meta-heuristics; the proposed workflow scheduling significantly reduces makespan and energy consumption without compromising overall cost.
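A sketch of the two adaptive mechanisms named above inside a standard PSO velocity update: an S-shaped sigmoid decrease of the inertia weight and a linearly updated cognitive factor. The bounds, the slope `k`, and the social factor `c2` are illustrative assumptions.

```python
import numpy as np

def inertia(t, T, w_max=0.9, w_min=0.4, k=10.0):
    """S-shaped sigmoid decrease of the inertia weight over T iterations."""
    return w_min + (w_max - w_min) / (1.0 + np.exp(k * (t / T - 0.5)))

def cognitive(t, T, c_start=2.5, c_end=0.5):
    """Linear update of the cognitive factor c1."""
    return c_start + (c_end - c_start) * t / T

def velocity_update(v, x, pbest, gbest, t, T, c2=2.0):
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    return (inertia(t, T) * v
            + cognitive(t, T) * r1 * (pbest - x)   # pull toward the personal best
            + c2 * r2 * (gbest - x))               # pull toward the global best
```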

Citations: 0