Energy-aware and trust-based cluster head selection in healthcare WBANs with enhanced GWO optimization
Pub Date: 2024-08-20  DOI: 10.1007/s00607-024-01339-1
C. Venkata Subbaiah, K. Govinda
This paper describes a comprehensive methodology for improving Wireless Body Area Networks (WBANs) in healthcare systems using Enhanced Gray Wolf Optimization (GWO). The methodology begins with WBAN initialization and the configuration of critical network parameters. To improve network performance and trustworthiness, it integrates direct, historical, and energy trust calculations, as well as energy consumption models based on distance and transmission type. The Enhanced GWO approach then selects optimal cluster heads, guided by a customized fitness function that balances trust and energy efficiency. The work was simulated in MATLAB R2022b on a PC with 16 GB of RAM. The methodology outperforms existing methods in terms of throughput, computation time, and residual energy, and its improved data routing, energy efficiency, and trustworthiness make it a valuable asset in WBAN-based healthcare systems.
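As a rough illustration of how a fitness function can balance trust and energy when ranking cluster-head candidates (the weights, the averaging of the three trust components, and the energy normalization below are assumptions, not the authors' formulation):

```python
def fitness(node, w_trust=0.6, w_energy=0.4):
    """Hypothetical cluster-head fitness: higher combined trust and residual energy score higher."""
    trust = (node["direct_trust"] + node["historical_trust"] + node["energy_trust"]) / 3.0
    energy = node["residual_energy"] / node["initial_energy"]  # normalized residual energy
    return w_trust * trust + w_energy * energy

# An enhanced-GWO-style search would favor the node that maximizes this score as cluster head.
candidates = [
    {"direct_trust": 0.9, "historical_trust": 0.8, "energy_trust": 0.7,
     "residual_energy": 0.40, "initial_energy": 0.5},
    {"direct_trust": 0.6, "historical_trust": 0.7, "energy_trust": 0.9,
     "residual_energy": 0.45, "initial_energy": 0.5},
]
print(max(candidates, key=fitness))
```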
{"title":"Energy-aware and trust-based cluster head selection in healthcare WBANs with enhanced GWO optimization","authors":"C. Venkata Subbaiah, K. Govinda","doi":"10.1007/s00607-024-01339-1","DOIUrl":"https://doi.org/10.1007/s00607-024-01339-1","url":null,"abstract":"<p>This paper describes a comprehensive methodology for improving Wireless Body Area Networks (WBANs) in healthcare systems using Enhanced Gray Wolf Optimization (GWO). The methodology begins with WBAN initialization and the configuration of critical network parameters. To improve network performance and trustworthiness, direct trust calculations, historical trust , and energy trust, as well as energy consumption models based on distance and transmission type, are integrated. The use of an Enhanced GWO approach makes it easier to select optimal cluster heads, guided by a customized fitness function that balances trust and energy efficiency. This work has been carried on a PC with 16 GB RAM using MATLAB R2022b tool for simulation purpose. The methodology outperforms existing methods in terms of throughput, computation time, and residual energy. This promising methodology provides improved data routing, energy efficiency, and trustworthiness, making it a valuable asset in WBAN-based healthcare systems.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"27 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey on the cold start latency approaches in serverless computing: an optimization-based perspective
Pub Date: 2024-08-17  DOI: 10.1007/s00607-024-01335-5
Mohsen Ghorbian, Mostafa Ghobaei-Arani
Serverless computing is one of the latest technologies that has received much attention from researchers and companies in recent years, since it provides dynamic scalability and a clear economic model. Serverless computing enables users to pay only for the time they use resources. This approach has several benefits, including optimized costs and resource utilization; however, cold starts remain a significant research challenge, and various studies have been conducted in the academic and industrial sectors to deal with this problem. This paper comprehensively reviews recent cold start research in serverless computing and presents a detailed taxonomy of serverless computing strategies for dealing with cold start latency. The proposed classification considers two main approaches, Optimizing Loading Times (OLT) and Optimizing Resource Usage (ORU), each including several subsets: OLT is divided into container-based and checkpoint-based methods, while ORU is divided into machine learning (ML)-based, optimization-based, and heuristic-based approaches. After analyzing current methods, we categorize and investigate them according to their characteristics and commonalities. Additionally, we examine potential challenges and directions for future research.
{"title":"A survey on the cold start latency approaches in serverless computing: an optimization-based perspective","authors":"Mohsen Ghorbian, Mostafa Ghobaei-Arani","doi":"10.1007/s00607-024-01335-5","DOIUrl":"https://doi.org/10.1007/s00607-024-01335-5","url":null,"abstract":"<p>Serverless computing is one of the latest technologies that has received much attention from researchers and companies in recent years since it provides dynamic scalability and a clear economic model. Serverless computing enables users to pay only for the time they use resources. This approach has several benefits, including optimizing costs and resource utilization; however, cold starts are a concern and challenge. Various studies have been conducted in the academic and industrial sectors to deal with this problem, which poses a significant research challenge. This paper comprehensively reviews recent cold start research in serverless computing. Hence, this paper presents a detailed taxonomy of several serverless computing strategies for dealing with cold start latency. We have considered two main approaches in the proposed classification: Optimizing Loading Times (OLT) and Optimizing Resource Usage (ORU), each including several subsets. The subsets of the primary approach OLT are divided into container-based and checkpoint-based. Also, the primary approach ORU is divided into machine learning (ML)-based, optimization-based, and heuristic-based approaches. After analyzing current methods, we have categorized and investigated them according to their characteristics and commonalities. Additionally, we examine potential challenges and directions for future research.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"60 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-objective cuckoo optimizer for task scheduling to balance workload in cloud computing
Pub Date: 2024-08-17  DOI: 10.1007/s00607-024-01332-8
Brototi Mondal, Avishek Choudhury
A cloud load balancer should be able to adapt its approach to handle varied task types and a dynamic environment. To prevent computing resources from being over- or under-utilized, an efficient task-scheduling system is necessary for optimal resource utilization in cloud computing. Task scheduling can be viewed as an optimization problem. Because task scheduling in the cloud is NP-Complete, gradient-based methods cannot find the best solution in a reasonable amount of time, so the problem should be solved using evolutionary and meta-heuristic techniques. This study proposes a novel approach to task scheduling using the Cuckoo Optimization algorithm. With this approach, the load is effectively distributed among the available virtual machines while keeping the total response time and average task processing time (PT) low. Comparative simulation results show that the proposed strategy outperforms state-of-the-art techniques such as Particle Swarm Optimization, Ant Colony Optimization, Genetic Algorithm, and Stochastic Hill Climbing.
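As a minimal sketch of the kind of multi-objective scheduling cost a cuckoo-style search could minimize (the weighting of response time against processing time, and the MIPS-based timing model, are assumptions rather than the paper's exact objective):

```python
import random

def schedule_cost(assignment, task_len, vm_mips, w_resp=0.5, w_proc=0.5):
    """Hypothetical cost: balance the heaviest VM load (response-time proxy)
    against the average task processing time."""
    vm_load = [0.0] * len(vm_mips)
    proc_times = []
    for task, vm in enumerate(assignment):
        t = task_len[task] / vm_mips[vm]      # processing time of this task on its VM
        vm_load[vm] += t
        proc_times.append(t)
    makespan = max(vm_load)                   # heaviest-loaded VM finishes last
    avg_pt = sum(proc_times) / len(proc_times)
    return w_resp * makespan + w_proc * avg_pt

# A cuckoo-style search would perturb `assignment` (e.g. via Levy flights) and keep
# lower-cost solutions; here we just score one random assignment.
task_len = [4000, 12000, 8000, 6000]          # instructions per task (assumed)
vm_mips = [1000, 2000]                        # VM speeds in MIPS (assumed)
assignment = [random.randrange(len(vm_mips)) for _ in task_len]
print(schedule_cost(assignment, task_len, vm_mips))
```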
{"title":"Multi-objective cuckoo optimizer for task scheduling to balance workload in cloud computing","authors":"Brototi Mondal, Avishek Choudhury","doi":"10.1007/s00607-024-01332-8","DOIUrl":"https://doi.org/10.1007/s00607-024-01332-8","url":null,"abstract":"<p>A cloud load balancer should be proficient to modify it’s approach to handle the various task kinds and the dynamic environment. In order to prevent situations where computing resources are excess or underutilized, an efficient task scheduling system is always necessary for optimum or efficient utilization of resources in cloud computing. Task Scheduling can be thought of as an optimization problem. As task scheduling in the cloud is an NP-Complete problem, the best solution cannot be found using gradient-based methods that look for optimal solutions to NP-Complete problems in a reasonable amount of time. Therefore, the task scheduling problem should be solved using evolutionary and meta-heuristic techniques. This study proposes a novel approach to task scheduling using the Cuckoo Optimization algorithm. With this approach, the load is effectively distributed among the virtual machines that are available, all the while keeping the total response time and average task processing time(PT) low. The comparative simulation results show that the proposed strategy performs better than state-of-the-art techniques such as Particle Swarm optimization, Ant Colony optimization, Genetic Algorithm and Stochastic Hill Climbing.\u0000</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"5 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data and resource aware incremental ML training in support of pervasive applications
Pub Date: 2024-08-16  DOI: 10.1007/s00607-024-01338-2
Thanasis Moustakas, Athanasios Tziouvaras, Kostas Kolomvatsos
Nowadays, the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms increasingly affects the performance of innovative systems. At the same time, the advent of the Internet of Things (IoT) and Edge Computing (EC) as means to place computational resources close to users creates the need for new models in the training process of ML schemes, due to the limited computational capabilities of the devices/nodes placed there. IoT devices and EC nodes exhibit fewer capabilities than the Cloud back end, which could be adopted for more complex training upon vast volumes of data. The ideal case is to have at least basic training capabilities in the IoT-EC ecosystem in order to reduce latency and meet the needs of near-real-time applications. Motivated by this need, we propose a model that tries to save time in the training process by focusing on the training dataset and its statistical description. We do not dive into the architecture of any ML model, as we target a more generic scheme that can be applied to any ML module. We monitor the statistics of the training dataset and the loss during the process and identify whether training can be stopped when no significant contribution is foreseen from the data not yet incorporated into the model. Our approach is applicable only when a negligible decrease in accuracy is acceptable to the application, in exchange for the time and resources saved in the training process. We provide two algorithms for applying this approach and an extensive experimental evaluation upon multiple supervised ML models to reveal the benefits of the proposed scheme and its constraints.
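A minimal sketch of a loss-driven stopping rule in the spirit of this idea (the actual scheme also monitors dataset statistics; the window size and improvement threshold here are illustrative assumptions):

```python
def should_stop_training(loss_history, window=3, min_improvement=1e-3):
    """Hypothetical stopping rule: halt ingestion of further training batches once the
    recent loss trend suggests a negligible contribution from the remaining data."""
    if len(loss_history) < 2 * window:
        return False
    recent = sum(loss_history[-window:]) / window
    previous = sum(loss_history[-2 * window:-window]) / window
    return (previous - recent) < min_improvement  # loss no longer improving meaningfully

# Example: a trainer would call this after each incremental batch and stop early,
# trading a negligible accuracy loss for reduced training time on the edge node.
losses = [0.9, 0.6, 0.5, 0.46, 0.452, 0.4512, 0.4511, 0.45105, 0.45103, 0.45102]
print(should_stop_training(losses))  # True: the loss curve has flattened out
```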
{"title":"Data and resource aware incremental ML training in support of pervasive applications","authors":"Thanasis Moustakas, Athanasios Tziouvaras, Kostas Kolomvatsos","doi":"10.1007/s00607-024-01338-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01338-2","url":null,"abstract":"<p>Nowadays, the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms is increasingly affecting the performance of innovative systems. At the same time, the advent of the Internet of Things (IoT) and the Edge Computing (EC) as means to place computational resources close to users create the need for new models in the training process of ML schemes due to the limited computational capabilities of the devices/nodes placed there. In any case, we should not forget that IoT devices or EC nodes exhibit less capabilities than the Cloud back end that could be adopted for a more complex training upon vast volumes of data. The ideal case is to have, at least, basic training capabilities at the IoT-EC ecosystem in order to reduce the latency and face the needs of near real time applications. In this paper, we are motivated by this need and propose a model that tries to save time in the training process by focusing on the training dataset and its statistical description. We do not dive into the architecture of any ML model as we target to provide a more generic scheme that can be applied upon any ML module. We monitor the statistics of the training dataset and the loss during the process and identify if there is a potential to stop it when not significant contribution is foreseen for the data not yet adopted in the model. We argue that our approach can be applied only when a negligibly decreased accuracy is acceptable by the application gaining time and resources from the training process. We provide two algorithms for applying this approach and an extensive experimental evaluation upon multiple supervised ML models to reveal the benefits of the proposed scheme and its constraints.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"21 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Community detection in multiplex continous weighted nodes networks using an extension of the stochastic block model
Pub Date: 2024-08-15  DOI: 10.1007/s00607-024-01341-7
Abir El Haj
The stochastic block model (SBM) is a probabilistic model aimed at clustering individuals within a simple network based on their social behavior. This network consists of individuals and edges representing the presence or absence of relationships between each pair of individuals. This paper aims to extend the traditional stochastic block model to accommodate multiplex weighted nodes networks. These networks are characterized by multiple relationship types occurring simultaneously among network individuals, with each individual associated with a weight representing its influence in the network. We introduce an inference method utilizing a variational expectation-maximization algorithm to estimate model parameters and classify individuals. Finally, we demonstrate the effectiveness of our approach through applications using simulated and real data, highlighting its main characteristics.
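For orientation, a sketch of the standard SBM being extended, in assumed notation (the multiplex weighted-node parameterization used in the paper may differ):

```latex
% Standard SBM: each node i carries a latent block label Z_i \in \{1,\dots,K\},
% and edges are conditionally independent Bernoulli variables whose parameter
% depends only on the pair of blocks.
\[
  Z_i \sim \mathrm{Multinomial}(1;\alpha_1,\dots,\alpha_K),
  \qquad
  P\bigl(A_{ij}=1 \mid Z_i=k,\; Z_j=l\bigr) = \pi_{kl}.
\]
% A multiplex extension indexes each relation type by a layer m, e.g.
% P(A^{(m)}_{ij}=1 \mid Z_i=k, Z_j=l) = \pi^{(m)}_{kl}, and the variational EM
% alternates between updating an approximate posterior over the labels Z and
% point estimates of the parameters (\alpha, \pi).
```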
{"title":"Community detection in multiplex continous weighted nodes networks using an extension of the stochastic block model","authors":"Abir El Haj","doi":"10.1007/s00607-024-01341-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01341-7","url":null,"abstract":"<p>The stochastic block model (SBM) is a probabilistic model aimed at clustering individuals within a simple network based on their social behavior. This network consists of individuals and edges representing the presence or absence of relationships between each pair of individuals. This paper aims to extend the traditional stochastic block model to accommodate multiplex weighted nodes networks. These networks are characterized by multiple relationship types occurring simultaneously among network individuals, with each individual associated with a weight representing its influence in the network. We introduce an inference method utilizing a variational expectation-maximization algorithm to estimate model parameters and classify individuals. Finally, we demonstrate the effectiveness of our approach through applications using simulated and real data, highlighting its main characteristics.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"8 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic video captioning using tree hierarchical deep convolutional neural network and ASRNN-bi-directional LSTM
Pub Date: 2024-08-13  DOI: 10.1007/s00607-024-01334-6
N. Kavitha, K. Ruba Soundar, R. Karthick, J. Kohila
The development of automatic video understanding technology is highly needed due to the rise of mass video data, such as surveillance and personal videos. Several methods have been presented previously for automatic video captioning, but existing methods consume considerable time when processing huge numbers of frames and suffer from overfitting, which makes automating video captioning difficult and affects the accuracy of the final caption. To overcome these issues, this paper proposes automatic video captioning using a Tree Hierarchical Deep Convolutional Neural Network and an attention segmental recurrent neural network with bi-directional Long Short-Term Memory (ASRNN-bi-directional LSTM). The captioning pipeline contains two phases: a feature encoder and a decoder. In the encoder phase, the tree hierarchical Deep Convolutional Neural Network (Tree CNN) encodes the vector representation of the video and extracts three kinds of features. In the decoder phase, the attention segmental recurrent neural network (ASRNN) decodes the vector into a textual description. ASRNN-based methods struggle with long-term dependencies; to deal with this, the decoder attends to all words generated by the bi-directional LSTM and the caption generator, since the global context information represented by the caption generator's hidden state is local and incomplete. Golden Eagle Optimization is exploited to tune the ASRNN weight parameters. The proposed method is implemented in Python and achieves 34.89%, 29.06%, and 20.78% higher accuracy and 23.65%, 22.10%, and 29.68% lower Mean Squared Error compared to existing methods.
{"title":"Automatic video captioning using tree hierarchical deep convolutional neural network and ASRNN-bi-directional LSTM","authors":"N. Kavitha, K. Ruba Soundar, R. Karthick, J. Kohila","doi":"10.1007/s00607-024-01334-6","DOIUrl":"https://doi.org/10.1007/s00607-024-01334-6","url":null,"abstract":"<p>The development of automatic video understanding technology is highly needed due to the rise of mass video data, like surveillance videos, personal video data. Several methods have been presented previously for automatic video captioning. But, the existing methods have some problems, like more time consume during processing a huge number of frames, and also it contains over fitting problem. This is a difficult task to automate the process of video caption. So, it affects final result (Caption) accuracy. To overcome these issues, Automatic Video Captioning using Tree Hierarchical Deep Convolutional Neural Network and attention segmental recurrent neural network-bi-directional Long Short-Term Memory (ASRNN-bi-directional LSTM) is proposed in this paper. The captioning part contains two phases: Feature Encoder and Decoder. In feature encoder phase, the tree hierarchical Deep Convolutional Neural Network (Tree CNN) encodes the vector representation of video and extract three kinds of features. In decoder phase, the attention segmental recurrent neural network (ASRNN) decode vector into textual description. ASRNN-base methods struck with long-term dependency issue. To deal this issue, focuses on all generated words from the bi-directional LSTM and caption generator for extracting global context information presented by concealed state of caption generator is local and unfinished. Hence, Golden Eagle Optimization is exploited to enhance ASRNN weight parameters. The proposed method is executed in Python. The proposed technique achieves 34.89%, 29.06% and 20.78% higher accuracy, 23.65%, 22.10% and 29.68% lesser Mean Squared Error compared to the existing methods.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"61 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved snake optimization-based task scheduling in cloud computing
Pub Date: 2024-08-07  DOI: 10.1007/s00607-024-01323-9
Vijay Kumar Damera, G. Vanitha, B. Indira, G. Sirisha, Ramesh Vatambeti
The recent focus on cloud computing is due to its evolving platform and features like multiplexing users on shared infrastructure and on-demand resource computation. Efficient use of computer resources is crucial in cloud computing. Effective task-scheduling methods are essential to optimize cloud system performance. Scheduling virtual machines in dynamic cloud environments, marked by uncertainty and constant change, is challenging. Despite many efforts to improve cloud task scheduling, it remains an unresolved issue. Various scheduling approaches have been proposed, but researchers continue to refine performance by incorporating diverse quality-of-service characteristics, enhancing overall cloud performance. This study introduces an innovative task-scheduling algorithm that improves upon existing methods, particularly in quality-of-service criteria like makespan and energy efficiency. The proposed technique enhances the Snake Optimization Algorithm (SO) by incorporating sine chaos mapping, a spiral search strategy, and dynamic adaptive weights. These enhancements increase the algorithm’s ability to escape local optima and improve global search. Compared to other models, the proposed method shows improvements in cloud scheduling performance by 6%, 4.6%, and 3.27%. Additionally, the approach quickly converges to the optimal scheduling solution.
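As an illustrative sketch of one of the named enhancements, a sine-chaos-mapped population initialization (the map constant, iteration scheme, and bounds mapping are assumptions, not the paper's exact design):

```python
import math
import random

def sine_chaos_population(pop_size, dim, lower, upper, mu=4.0):
    """Hypothetical sine-chaos initialization: iterate x <- (mu/4) * sin(pi * x)
    to spread initial candidates more evenly over the search space than
    plain uniform sampling."""
    population = []
    x = random.random()
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = (mu / 4.0) * math.sin(math.pi * x)   # chaotic update, stays in (0, 1)
            individual.append(lower + x * (upper - lower))
        population.append(individual)
    return population

# Example: seed 30 candidate schedules over a 10-dimensional search space.
print(sine_chaos_population(pop_size=30, dim=10, lower=0.0, upper=1.0)[0])
```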
{"title":"Improved snake optimization-based task scheduling in cloud computing","authors":"Vijay Kumar Damera, G. Vanitha, B. Indira, G. Sirisha, Ramesh Vatambeti","doi":"10.1007/s00607-024-01323-9","DOIUrl":"https://doi.org/10.1007/s00607-024-01323-9","url":null,"abstract":"<p>The recent focus on cloud computing is due to its evolving platform and features like multiplexing users on shared infrastructure and on-demand resource computation. Efficient use of computer resources is crucial in cloud computing. Effective task-scheduling methods are essential to optimize cloud system performance. Scheduling virtual machines in dynamic cloud environments, marked by uncertainty and constant change, is challenging. Despite many efforts to improve cloud task scheduling, it remains an unresolved issue. Various scheduling approaches have been proposed, but researchers continue to refine performance by incorporating diverse quality-of-service characteristics, enhancing overall cloud performance. This study introduces an innovative task-scheduling algorithm that improves upon existing methods, particularly in quality-of-service criteria like makespan and energy efficiency. The proposed technique enhances the Snake Optimization Algorithm (SO) by incorporating sine chaos mapping, a spiral search strategy, and dynamic adaptive weights. These enhancements increase the algorithm’s ability to escape local optima and improve global search. Compared to other models, the proposed method shows improvements in cloud scheduling performance by 6%, 4.6%, and 3.27%. Additionally, the approach quickly converges to the optimal scheduling solution.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"22 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PP-PRNU: PRNU-based source camera attribution with privacy-preserving applications
Pub Date: 2024-08-06  DOI: 10.1007/s00607-024-01330-w
Riyanka Jena, Priyanka Singh, Manoranjan Mohanty, Manik Lal Das
Tracing the origin of digital images is a crucial concern in digital image forensics, where accurately identifying the source of an image is essential and provides important clues to investigating and law enforcement agencies. Photo Response Non-Uniformity (PRNU) based camera attribution is an effective forensic tool for identifying the source camera of a crime scene image. The PRNU pattern approach helps investigators determine whether a specific camera captured a crime scene image using the Pearson correlation coefficient between the unique camera fingerprint and the PRNU noise. However, this approach raises privacy concerns, as the camera fingerprint or the PRNU noise can be linked to non-crime images taken by the camera, potentially disclosing the photographer’s identity. To address this issue, we propose a novel PRNU-based source camera attribution scheme that enables forensic investigators to conduct criminal investigations while preserving privacy. In the proposed scheme, a camera fingerprint extracted from a set of known images and PRNU noise extracted from the anonymous image are divided into multiple shares using Shamir’s Secret Sharing (SSS). These shares are distributed to various cloud servers, where correlation is computed on a share basis between the camera fingerprint and the PRNU noise. The partial correlation values are combined to obtain the final correlation value, determining whether the camera took the image. The security analysis and the experimental results demonstrate that the proposed scheme not only preserves privacy and ensures data confidentiality and integrity, but is also computationally efficient compared to existing methods. Specifically, the results show that our scheme achieves similar accuracy in source camera attribution with a negligible decrease in performance compared to non-privacy-preserving methods and is computationally less expensive than state-of-the-art schemes. Our work advances research in image forensics by addressing the need for accurate source identification and privacy protection. The privacy-preserving approach is beneficial in scenarios where protecting the identity of the photographer is crucial, such as whistleblower cases.
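The following plaintext sketch shows only how chunk-wise partial statistics combine into the global Pearson coefficient; in the actual scheme the chunks are Shamir secret shares, so no single server sees the fingerprint or the noise in the clear (the variable names and the two-chunk split are illustrative assumptions):

```python
import math

def partial_sums(f_chunk, n_chunk):
    """Per-server partial statistics over one chunk of the fingerprint f and PRNU noise n."""
    return (sum(f_chunk), sum(n_chunk),
            sum(x * x for x in f_chunk), sum(y * y for y in n_chunk),
            sum(x * y for x, y in zip(f_chunk, n_chunk)), len(f_chunk))

def combine(parts):
    """Aggregate the partial sums into the global Pearson correlation coefficient."""
    sf = sum(p[0] for p in parts); sn = sum(p[1] for p in parts)
    sff = sum(p[2] for p in parts); snn = sum(p[3] for p in parts)
    sfn = sum(p[4] for p in parts); n = sum(p[5] for p in parts)
    num = n * sfn - sf * sn
    den = math.sqrt(n * sff - sf * sf) * math.sqrt(n * snn - sn * sn)
    return num / den

# Plaintext analogue of the distributed computation: each "server" holds one chunk.
fingerprint = [0.10, -0.20, 0.05, 0.30, -0.10, 0.20]
prnu_noise  = [0.08, -0.25, 0.02, 0.28, -0.05, 0.18]
chunks = [(fingerprint[:3], prnu_noise[:3]), (fingerprint[3:], prnu_noise[3:])]
rho = combine([partial_sums(f, n) for f, n in chunks])
print(rho)  # compared against a threshold to decide source-camera attribution
```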
{"title":"PP-PRNU: PRNU-based source camera attribution with privacy-preserving applications","authors":"Riyanka Jena, Priyanka Singh, Manoranjan Mohanty, Manik Lal Das","doi":"10.1007/s00607-024-01330-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01330-w","url":null,"abstract":"<p>Tracing the origin of digital images is a crucial concern in digital image forensics, where accurately identifying the source of an image is essential that leads important clues to investing and law enforcement agencies. Photo Response Non-Uniformity (PRNU) based camera attribution is an effective forensic tool for identifying the source camera of a crime scene image. The PRNU pattern approach helps investigators determine whether a specific camera captured a crime scene image using the Pearson correlation coefficient between the unique camera fingerprint and the PRNU noise. However, this approach raises privacy concerns as the camera fingerprint or the PRNU noise can be linked to non-crime images taken by the camera, potentially disclosing the photographer’s identity. To address this issue, we propose a novel PRNU-based source camera attribution scheme that enables forensic investigators to conduct criminal investigations while preserving privacy. In the proposed scheme, a camera fingerprint extracted from a set of known images and PRNU noise extracted from the anonymous image are divided into multiple shares using Shamir’s Secret Sharing (SSS). These shares are distributed to various cloud servers where correlation is computed on a share basis between the camera fingerprint and the PRNU noise. The partial correlation values are combined to obtain the final correlation value, determining whether the camera took the image. The security analysis and the experimental results demonstrate that the proposed scheme not only preserves privacy and ensures data confidentiality and integrity, but also is computationally efficient compared to existing methods. Specifically, the results showed that our scheme achieves similar accuracy in source camera attribution with a negligible decrease in performance compared to non-privacy-preserving methods and is computationally less expensive than state-of-the-art schemes. Our work advances research in image forensics by addressing the need for accurate source identification and privacy protection. The privacy-preserving approach is beneficial for scenarios where protecting the identity of the photographer is crucial, such as in whistleblower cases.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"127 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phasic parallel-network policy: a deep reinforcement learning framework based on action correlation
Pub Date: 2024-08-06  DOI: 10.1007/s00607-024-01329-3
Jiahao Li, Tianhan Gao, Qingwei Mi
Reinforcement learning algorithms show significant variations in performance across different environments. Optimizing reinforcement learning itself has therefore become a major research task, since the instability and unpredictability of these algorithms have consistently hindered their generalization capabilities. In this study, we address this issue by optimizing the algorithm itself rather than applying environment-specific optimizations. We start by tackling the uncertainty caused by the mutual interference of original actions, aiming to enhance overall performance. We propose the Phasic Parallel-Network Policy (PPP), a deep reinforcement learning framework that diverges from the traditional actor-critic policy method by grouping the action space based on action correlations. PPP incorporates parallel network structures and combines network optimization strategies. With the assistance of the value network, the training process is divided into specific stages, namely the Extra-group Policy Phase and the Inter-group Optimization Phase. PPP breaks through the traditional unit learning structure. The experimental results indicate that it not only optimizes training effectiveness but also reduces training steps, enhances sample efficiency, and significantly improves stability and generalization.
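A hypothetical sketch of the action-grouping idea, clustering action dimensions whose recorded traces are strongly correlated (the correlation source, threshold, and greedy grouping below are assumptions, not the paper's procedure):

```python
import numpy as np

def group_actions_by_correlation(action_traces, threshold=0.7):
    """Place action dimensions with strongly correlated trajectories in the same group,
    so each group can be handled by its own parallel sub-network."""
    corr = np.corrcoef(action_traces)          # pairwise correlation between action dims
    n = corr.shape[0]
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, n)
                       if j not in assigned and abs(corr[i, j]) >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups

# Example: 4 action dimensions observed over 100 timesteps; dimension 1 tracks dimension 0.
traces = np.random.randn(4, 100)
traces[1] = traces[0] * 0.9 + 0.1 * np.random.randn(100)
print(group_actions_by_correlation(traces))
```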
{"title":"Phasic parallel-network policy: a deep reinforcement learning framework based on action correlation","authors":"Jiahao Li, Tianhan Gao, Qingwei Mi","doi":"10.1007/s00607-024-01329-3","DOIUrl":"https://doi.org/10.1007/s00607-024-01329-3","url":null,"abstract":"<p>Reinforcement learning algorithms show significant variations in performance across different environments. Optimization for reinforcement learning thus becomes the major research task since the instability and unpredictability of the reinforcement learning algorithms have consistently hindered their generalization capabilities. In this study, we address this issue by optimizing the algorithm itself rather than environment-specific optimizations. We start by tackling the uncertainty caused by the mutual influence of original action interferences, aiming to enhance the overall performance. The <i>Phasic Parallel-Network Policy</i> (PPP), which is a deep reinforcement learning framework. It diverges from the traditional policy actor-critic method by grouping the action space based on action correlations. The PPP incorporates parallel network structures and combines network optimization strategies. With the assistance of the value network, the training process is divided into different specific stages, namely the Extra-group Policy Phase and the Inter-group Optimization Phase. PPP breaks through the traditional unit learning structure. The experimental results indicate that it not only optimizes training effectiveness but also reduces training steps, enhances sample efficiency, and significantly improves stability and generalization.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"34 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A cost, time, energy-aware workflow scheduling using adaptive PSO algorithm in a cloud–fog environment
Pub Date: 2024-07-31  DOI: 10.1007/s00607-024-01322-w
Gyan Singh, Amit K. Chaturvedi
Recent years have seen an exponential rise in data produced by Internet of Things (IoT) applications. Cloud servers were not designed for such extensive data, leading to challenges like increased makespan, cost, bandwidth, energy consumption, and network latency. To address these, the cloud–fog environment has emerged as an extension to cloud servers, offering services closer to IoT devices. Scheduling workflow applications to optimize multiple conflicting objectives in cloud fog is an NP-hard problem. Particle Swarm Optimization (PSO) is a good choice for multi-objective solutions due to its simplicity and rapid convergence. However, it has shortcomings like premature convergence and stagnation. To address these challenges, we formalize a theoretical background for scheduling workflow applications in the cloud–fog environment with multiple conflicting objectives. Subsequently, we propose an adaptive particle swarm optimization (APSO) algorithm with novel enhancements, including an S-shaped sigmoid function to dynamically decrease inertia weight and a linear updating mechanism for cognitive factors. Their integration in cloud–fog environments has not been previously explored. This novel application addresses unique challenges of workflow scheduling in cloud–fog systems, such as heterogeneous resource management, energy consumption, and increased cost. The effectiveness of APSO is evaluated using a real-world scientific workflow in a simulated cloud–fog environment and compared with four meta-heuristics. Our proposed workflow scheduling significantly reduces makespan and energy consumption without compromising overall cost compared to other meta-heuristics.
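A minimal sketch of the two named parameter schedules, an S-shaped (sigmoid) inertia weight and a linearly updated cognitive coefficient (all constants are illustrative assumptions, not the paper's values):

```python
import math

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4, steepness=10.0):
    """Hypothetical S-shaped inertia schedule: stays near w_max early for exploration,
    then drops smoothly toward w_min for exploitation."""
    x = steepness * (t / t_max - 0.5)
    return w_min + (w_max - w_min) / (1.0 + math.exp(x))

def cognitive_factor(t, t_max, c_start=2.5, c_end=0.5):
    """Hypothetical linearly decreasing cognitive coefficient c1."""
    return c_start + (c_end - c_start) * (t / t_max)

# Example schedule over 100 iterations.
for t in (0, 25, 50, 75, 100):
    print(t, round(inertia_weight(t, 100), 3), round(cognitive_factor(t, 100), 3))
```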
{"title":"A cost, time, energy-aware workflow scheduling using adaptive PSO algorithm in a cloud–fog environment","authors":"Gyan Singh, Amit K. Chaturvedi","doi":"10.1007/s00607-024-01322-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01322-w","url":null,"abstract":"<p>Recent years have seen an exponential rise in data produced by Internet of Things (IoT) applications. Cloud servers were not designed for such extensive data, leading to challenges like increased makespan, cost, bandwidth, energy consumption, and network latency. To address these, the cloud–fog environment has emerged as an extension to cloud servers, offering services closer to IoT devices. Scheduling workflow applications to optimize multiple conflicting objectives in cloud fog is an NP-hard problem. Particle Swarm Optimization (PSO) is a good choice for multi-objective solutions due to its simplicity and rapid convergence. However, it has shortcomings like premature convergence and stagnation. To address these challenges, we formalize a theoretical background for scheduling workflow applications in the cloud–fog environment with multiple conflicting objectives. Subsequently, we propose an adaptive particle swarm optimization (APSO) algorithm with novel enhancements, including an S-shaped sigmoid function to dynamically decrease inertia weight and a linear updating mechanism for cognitive factors. Their integration in cloud–fog environments has not been previously explored. This novel application addresses unique challenges of workflow scheduling in cloud–fog systems, such as heterogeneous resource management, energy consumption, and increased cost. The effectiveness of APSO is evaluated using a real-world scientific workflow in a simulated cloud–fog environment and compared with four meta-heuristics. Our proposed workflow scheduling significantly reduces makespan and energy consumption without compromising overall cost compared to other meta-heuristics.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"295 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141869709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}