
Latest publications from the Journal of King Saud University-Computer and Information Sciences

GCNT: Identify influential seed set effectively in social networks by integrating graph convolutional networks with graph transformers
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-02. DOI: 10.1016/j.jksuci.2024.102183
Jianxin Tang, Jitao Qu, Shihui Song, Zhili Zhao, Qian Du

Exploring effective and efficient strategies for identifying influential nodes from social networks as seeds to promote the propagation of influence remains a crucial challenge in the field of influence maximization (IM), which has attracted significant research effort. Deep learning-based approaches have been adopted as a promising alternative solution to the IM problem. However, a robust model that captures the associations between network information and node influence needs to be investigated, while concurrently considering the effect of overlapped influence on training labels. To address these challenges, a GCNT model, which integrates Graph Convolutional Networks with Graph Transformers, is introduced in this paper to capture the intricate relationships among network topology, node attributes, and node influence effectively. Furthermore, an innovative method called Greedy-LIE is proposed to generate labels and alleviate the issue of overlapped influence spread. Moreover, a Mask mechanism specially tailored for the IM problem is presented along with an input embedding balancing strategy. The effectiveness of the GCNT model is demonstrated through comprehensive experiments on six real-world networks, and the model shows competitive performance against state-of-the-art methods in terms of both influence maximization and computational efficiency.
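The abstract describes the architecture rather than giving one, so the following minimal sketch (plain PyTorch, with assumed layer sizes, a symmetric-normalized adjacency, and a simple top-k seed selection) illustrates how a graph-convolution stage can feed a transformer-style encoder that scores candidate seed nodes. It is an illustrative toy, not the authors' GCNT implementation, and it omits the Greedy-LIE labels, the Mask mechanism, and the embedding balancing strategy.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(a_hat @ self.lin(h))

class InfluenceScorer(nn.Module):
    """GCN front-end + transformer encoder + per-node influence head."""
    def __init__(self, in_dim, hid_dim=64, heads=4):
        super().__init__()
        self.gcn = GCNLayer(in_dim, hid_dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=hid_dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, a_hat, x):
        h = self.gcn(a_hat, x)                 # local (topological) mixing
        h = self.encoder(h.unsqueeze(0))[0]    # global attention over all nodes
        return self.head(h).squeeze(-1)        # one influence score per node

def normalize_adj(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

# Toy usage: random undirected graph with 10 nodes, pick a 3-node seed set.
adj = (torch.rand(10, 10) < 0.3).float()
adj = torch.maximum(adj, adj.t())
x = torch.rand(10, 8)                          # node attribute vectors
scores = InfluenceScorer(in_dim=8)(normalize_adj(adj), x)
seeds = torch.topk(scores, k=3).indices        # highest-scoring nodes as seeds
print(seeds.tolist())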

Citations: 0
Learning-driven Data Fabric Trends and Challenges for cloud-to-thing continuum
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-01. DOI: 10.1016/j.jksuci.2024.102145
Praveen Kumar Donta, Chinmaya Kumar Dehury, Yu-Chen Hu

This special issue is a collection of emerging trends and challenges in applying learning-driven approaches to data fabric architectures within the cloud-to-thing continuum. As data generation and processing increasingly occur at the edge, there is a growing need for intelligent, adaptive data management solutions that seamlessly operate across distributed environments. In this special issue, we received research contributions from various groups around the world. We chose the eight most appropriate and novel contributions to include in this special issue. These eight contributions were further categorized into three themes: Data Handling approaches, resource optimization and management, and security and attacks. Additionally, this editorial suggests future research directions that will potentially lead to groundbreaking insights, which could pave the way for a new era of learning techniques in Data Fabric and the Cloud-to-Thing Continuum.

Citations: 0
EETS: An energy-efficient task scheduler in cloud computing based on improved DQN algorithm
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-31. DOI: 10.1016/j.jksuci.2024.102177
Huanhuan Hou, Azlan Ismail

The huge energy consumption of data centers in cloud computing leads to increased operating costs and high carbon emissions. Deep Reinforcement Learning (DRL) combines deep learning and reinforcement learning and has a clear advantage in solving complex task scheduling problems. Deep Q Network (DQN)-based task scheduling has been employed for objective optimization. However, training the DQN algorithm may result in value overestimation, which can negatively impact learning effectiveness. The replay buffer technique, while increasing sample utilization, does not distinguish samples by importance, resulting in limited utilization of valuable samples. This study proposes an enhanced task scheduling algorithm based on the DQN framework, which utilizes a more optimized Dueling-network architecture as well as a Double DQN strategy to alleviate the overestimation bias and address the shortcomings of DQN. It also incorporates a prioritized experience replay technique to achieve importance sampling of experience data, which overcomes the problem of low utilization due to uniform sampling from replay memory. Based on these improved techniques, we developed an energy-efficient task scheduling algorithm called EETS (Energy-Efficient Task Scheduling). This algorithm automatically learns the optimal scheduling policy from historical data while interacting with the environment. Experimental results demonstrate that EETS exhibits faster convergence rates and higher rewards compared to both DQN and DDQN. In scheduling performance, EETS outperforms other baseline algorithms in key metrics, including energy consumption, average task response time, and average machine working time. In particular, it has a significant advantage when handling large batches of tasks.
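To make the two DQN refinements named above concrete, the sketch below shows proportional prioritized replay sampling and the Double DQN target, in which the online network selects the next action and the target network evaluates it. Shapes, network sizes, and hyperparameters are assumptions for illustration; this is not the EETS scheduler itself.

import numpy as np
import torch
import torch.nn as nn

def sample_prioritized(priorities, batch_size, alpha=0.6):
    """Proportional prioritized experience replay: P(i) ~ p_i^alpha."""
    probs = priorities ** alpha
    probs /= probs.sum()
    return np.random.choice(len(priorities), size=batch_size, p=probs)

def double_dqn_target(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a)) for non-terminal s'."""
    with torch.no_grad():
        best_actions = q_online(next_states).argmax(dim=1, keepdim=True)
        next_q = q_target(next_states).gather(1, best_actions).squeeze(1)
    return rewards + gamma * next_q * (1.0 - dones)

# Toy usage with tiny MLP Q-networks (state dim 4, three scheduling actions).
q_online = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
q_target = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
q_target.load_state_dict(q_online.state_dict())

buffer_next_states = torch.rand(100, 4)
buffer_rewards = torch.rand(100)
buffer_dones = torch.zeros(100)
priorities = np.abs(np.random.randn(100)) + 1e-6      # |TD error| + small epsilon
idx = torch.from_numpy(sample_prioritized(priorities, batch_size=8))
targets = double_dqn_target(q_online, q_target,
                            buffer_rewards[idx], buffer_next_states[idx],
                            buffer_dones[idx])
print(targets.shape)   # torch.Size([8])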

Citations: 0
Establishing a multimodal dataset for Arabic Sign Language (ArSL) production
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1016/j.jksuci.2024.102165
Samah Abbas, Dimah Alahmadi, Hassanin Al-Barhamtoshy

This paper addresses the potential of Arabic Sign Language (ArSL) recognition systems to facilitate direct communication and enhance social engagement between deaf and non-deaf people. Specifically, we focus on the domain of religion to address the lack of accessible religious content for the deaf community. We propose a multimodal architecture framework and develop a novel dataset for ArSL production. The dataset comprises 1950 audio signals with 131 corresponding texts, including words and phrases, and 262 ArSL videos. These videos were recorded by two expert signers and annotated using ELAN based on gloss representation. To evaluate ArSL videos, we employ cosine similarities and mode distances based on MobileNetV2 and Euclidean distance based on MediaPipe. Additionally, we implement Jaccard similarity to evaluate the gloss representation, resulting in an overall similarity score of 85% between the glosses of the two ArSL videos. The evaluation highlights the complexity of creating an ArSL video corpus and reveals slight differences between the two videos. The findings emphasize the need for careful annotation and representation of ArSL videos to ensure accurate recognition and understanding. Overall, this work contributes to bridging the gap in accessible religious content for the deaf community by developing a multimodal framework and a comprehensive ArSL dataset.
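The similarity measures mentioned in the evaluation are standard, and the short sketch below shows how cosine similarity between feature vectors and Jaccard similarity between gloss sets can be computed. The embeddings and gloss lists here are placeholders standing in for MobileNetV2/MediaPipe features and ELAN annotations.

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def jaccard_similarity(glosses_a, glosses_b):
    """|intersection| / |union| of two gloss annotation sets."""
    set_a, set_b = set(glosses_a), set(glosses_b)
    return len(set_a & set_b) / len(set_a | set_b)

# Placeholder frame embeddings for the same sign recorded by two signers.
emb_signer1 = np.random.rand(1280)   # e.g. a MobileNetV2-sized feature vector
emb_signer2 = np.random.rand(1280)
print(cosine_similarity(emb_signer1, emb_signer2))

# Gloss-level comparison of two annotated videos (hypothetical glosses).
video1 = ["PRAY", "MOSQUE", "GO", "FRIDAY"]
video2 = ["PRAY", "MOSQUE", "FRIDAY"]
print(jaccard_similarity(video1, video2))   # 0.75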

Citations: 0
DeepExtract: Semantic-driven extractive text summarization framework using LLMs and hierarchical positional encoding
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1016/j.jksuci.2024.102178
Aytuğ Onan, Hesham A. Alhumyani

In the age of information overload, the ability to distill essential content from extensive texts is invaluable. DeepExtract introduces an advanced framework for extractive summarization, utilizing the groundbreaking capabilities of GPT-4 along with innovative hierarchical positional encoding to redefine information extraction. This manuscript details the development of DeepExtract, which integrates semantic-driven techniques to analyze and summarize complex documents effectively. The framework is structured around a novel hierarchical tree construction that categorizes sentences and sections not just by their physical placement within a text, but by their contextual and thematic significance, leveraging dynamic embeddings generated by GPT-4. We introduce a multi-faceted scoring system that evaluates sentences based on coherence, relevance, and novelty, ensuring that summaries are not only concise but rich with essential content. Further, DeepExtract employs optimized semantic clustering to group thematic elements, which enhances the representativeness of the summaries. This paper demonstrates through comprehensive evaluations that DeepExtract significantly outperforms existing extractive summarization models in terms of accuracy and efficiency, making it a potent tool for academic, professional, and general use. We conclude with a discussion on the practical applications of DeepExtract in various domains, highlighting its adaptability and potential in navigating the vast expanses of digital text.
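As a rough picture of the multi-faceted scoring idea, the sketch below scores each sentence on relevance (similarity to the document centroid), novelty (dissimilarity to already-selected sentences), and coherence (similarity to the preceding sentence), then greedily extracts the top-scoring ones. The weights and the random placeholder embeddings are assumptions; the paper's GPT-4-derived embeddings and hierarchical tree construction are omitted.

import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def extract_summary(sent_embs, k=2, w_rel=0.5, w_nov=0.3, w_coh=0.2):
    """Greedy extractive selection with relevance / novelty / coherence scores."""
    centroid = sent_embs.mean(axis=0)
    selected = []
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i, emb in enumerate(sent_embs):
            if i in selected:
                continue
            relevance = cos(emb, centroid)
            novelty = 1.0 - max((cos(emb, sent_embs[j]) for j in selected), default=0.0)
            coherence = cos(emb, sent_embs[i - 1]) if i > 0 else 1.0
            score = w_rel * relevance + w_nov * novelty + w_coh * coherence
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)   # keep original sentence order in the summary

# Placeholder embeddings for a 6-sentence document.
sentence_embeddings = np.random.rand(6, 384)
print(extract_summary(sentence_embeddings, k=2))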

Citations: 0
Unsupervised selective labeling for semi-supervised industrial defect detection
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1016/j.jksuci.2024.102179
Jian Ge, Qin Qin, Shaojing Song, Jinhua Jiang, Zhiwei Shen

In industrial detection scenarios, achieving high accuracy typically relies on extensive labeled datasets, which are costly and time-consuming. This has motivated a shift towards semi-supervised learning (SSL), which leverages labeled and unlabeled data to improve learning efficiency and reduce annotation costs. This work proposes the unsupervised spectral clustering labeling (USCL) method to optimize SSL for industrial challenges like defect variability, rarity, and complex distributions. Integral to USCL, we employ the multi-task fusion self-supervised learning (MTSL) method to extract robust feature representations through multiple self-supervised tasks. Additionally, we introduce the Enhanced Spectral Clustering (ESC) method and a dynamic selecting function (DSF). ESC effectively integrates both local and global similarity matrices, improving clustering accuracy. The DSF selects the most valuable instances for labeling, significantly enhancing the representativeness and diversity of the labeled data. USCL consistently improves various SSL methods compared to traditional instance selection methods. For example, it boosts Efficient Teacher by 5%, 6.6%, and 7.8% in mean Average Precision (mAP) on the Automotive Sealing Rings Defect Dataset, the Metallic Surface Defect Dataset, and the Printed Circuit Boards (PCB) Defect Dataset with 10% labeled data. Our work sets a new benchmark for SSL in industrial settings.
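A minimal sketch of the selection step described above: blend a local k-nearest-neighbour similarity matrix with a global RBF one, run spectral clustering on the blend, and pick the instance closest to each cluster centre for labeling. The blending weight and the nearest-to-centre rule are illustrative assumptions, not the exact ESC and DSF procedures.

import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph

def select_for_labeling(features, n_clusters=5, n_neighbors=10, beta=0.5):
    """Pick one representative instance per cluster from a blended similarity."""
    global_sim = rbf_kernel(features)                         # dense RBF similarity
    local_sim = kneighbors_graph(features, n_neighbors,
                                 mode="connectivity").toarray()
    local_sim = np.maximum(local_sim, local_sim.T)            # symmetrize the kNN graph
    affinity = beta * global_sim + (1.0 - beta) * local_sim   # blended similarity (assumed)

    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(affinity)

    picks = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        centre = features[members].mean(axis=0)
        dists = np.linalg.norm(features[members] - centre, axis=1)
        picks.append(int(members[dists.argmin()]))            # closest to cluster centre
    return picks

features = np.random.rand(200, 64)    # placeholder image embeddings
print(select_for_labeling(features))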

Citations: 0
An efficient hybrid approach for forecasting real-time stock market indices
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-29. DOI: 10.1016/j.jksuci.2024.102180
Riya Kalra, Tinku Singh, Suryanshi Mishra, Satakshi, Naveen Kumar, Taehong Kim, Manish Kumar

The stock market’s volatility, noise, and information overload necessitate efficient prediction methods. Forecasting index prices in this environment is complex due to the non-linear and non-stationary nature of time series data generated from the stock market. Machine learning and deep learning have emerged as powerful tools for identifying financial data patterns and generating predictions based on historical trends. However, updating these models in real-time is crucial for accurate predictions. Deep learning models require extensive computational resources and careful hyperparameter optimization, while incremental learning models struggle to balance stability and adaptability. This paper proposes a novel hybrid bidirectional-LSTM (H.BLSTM) model that combines incremental learning and deep learning techniques for real-time index price prediction, addressing these scalability and memory challenges. The method utilizes both univariate time series derived from historical index prices and multivariate time series incorporating technical indicators. Implementation within a real-time trading system demonstrates the method’s effectiveness in achieving more accurate price forecasts for major stock indices globally through extensive experimentation. The proposed model achieved an average mean absolute percentage error of 0.001 across nine stock indices, significantly outperforming traditional models. It has an average forecasting delay of 2 s, making it suitable for real-time trading applications.
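For orientation, the snippet below is a bare-bones bidirectional LSTM forecaster over a univariate price window, together with the mean absolute percentage error used for evaluation. The window length, layer width, and omitted training loop are assumptions; this is not the paper's H.BLSTM hybrid with incremental updates.

import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    """Bidirectional LSTM over a price window, predicting the next index value."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 1)

    def forward(self, x):               # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])   # use the last time step

def mape(y_true, y_pred):
    """Mean absolute percentage error."""
    return (torch.abs((y_true - y_pred) / y_true)).mean().item()

window = torch.rand(16, 30, 1) + 1.0        # 16 windows of 30 past prices (toy data)
target = torch.rand(16, 1) + 1.0
pred = BiLSTMForecaster()(window)
print(mape(target, pred))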

Citations: 0
A low-time-consumption image encryption combining 2D parametric Pascal matrix chaotic system and elementary operation
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-28. DOI: 10.1016/j.jksuci.2024.102169
Jun Lu, Jiaxin Zhang, Dezhi An, Dawei Hao, Xiaokai Ren, Ruoyu Zhao

In the big data era, traditional image encryption algorithms spend an increasing amount of time handling the huge volume of data. This time cost needs to be reduced while the security of the encryption algorithm is preserved. With this in mind, the paper proposes a low-time-consumption image encryption scheme (LTC-IE) combining a 2D parametric Pascal matrix chaotic system (2D-PPMCS) with elementary operations. First, the 2D-PPMCS, which exhibits robustness and complex chaotic behavior, is adopted. Second, SHA-256 hash values are applied to the chaotic sequences generated by the 2D-PPMCS, which are then processed and used for image permutation and diffusion encryption. In the permutation stage, the pixel matrix is permuted according to a permutation matrix generated from the chaotic sequences. For diffusion encryption, elementary operations such as exclusive or, modulo, and arithmetic operations (addition, subtraction, multiplication, and division) are utilized to construct the model. Security analysis and experiments show that the LTC-IE algorithm ensures security and robustness while reducing time cost.
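The permutation/diffusion split described above follows the classic confusion-diffusion pattern. The sketch below shuffles pixels with the argsort of a chaotic sequence and then diffuses them with an XOR key stream; a plain logistic map stands in for the 2D-PPMCS and the key values are toy constants, so it is illustrative only.

import numpy as np

def logistic_sequence(x0, r, n):
    """Stand-in chaotic sequence (logistic map), NOT the paper's 2D-PPMCS."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(img, x0=0.3571, r=3.99):
    flat = img.flatten()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)                          # permutation (confusion) stage
    permuted = flat[perm]
    keystream = (chaos * 256).astype(np.uint8)
    cipher = permuted ^ keystream                     # diffusion stage (XOR)
    return cipher.reshape(img.shape), perm, keystream

def decrypt(cipher, perm, keystream):
    permuted = cipher.flatten() ^ keystream
    flat = np.empty_like(permuted)
    flat[perm] = permuted                             # invert the permutation
    return flat.reshape(cipher.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
cipher, perm, ks = encrypt(img)
assert np.array_equal(decrypt(cipher, perm, ks), img)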

Citations: 0
An efficient authentication scheme syncretizing physical unclonable function and revocable biometrics in Industrial Internet of Things
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-28. DOI: 10.1016/j.jksuci.2024.102166
Xinying Yu, Kejun Zhang, Zhufeng Suo, Jun Wang, Wenbin Wang, Bing Zou

Biometric recognition is widely used for user security authentication in the Industrial Internet of Things (IIoT). However, the potential leakage of biometric data has severe repercussions, such as identity theft or tracking. Existing authentication schemes primarily focus on protecting biometric templates but often overlook the “one-authentication multiple-access” mode. As a result, these schemes still confront challenges related to privacy leakage and low efficiency for users who frequently access the server. In this regard, this paper proposes an efficient authentication scheme syncretizing physical unclonable function (PUF) and revocable biometrics in IIoT. Specifically, we design a revocable biometric template generation method syncretizing the user’s biometric data and the device’s PUF to enhance the security and revocability of the dual identity information. Using the generated revocable biometric template and secret sharing, our scheme implements secure authentication and key negotiation between users and servers. Additionally, we establish an access boundary and an authentication validity period to permit multiple accesses following one authentication, thus significantly decreasing the computational cost of the user-side device. We leverage BAN logic and the ROR model to prove our scheme’s security. Informal security analysis and performance comparison demonstrate that our scheme satisfies more security features while achieving higher authentication efficiency.
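As a rough illustration of how a device PUF response and a revocable salt can bind a biometric template to a device while keeping it replaceable, the sketch below hashes a quantized biometric feature together with the PUF response and the salt; re-enrolling with a new salt revokes the old template. The primitives (SHA-256, coarse quantization) and all data are assumptions for illustration, not the construction proposed in the paper.

import hashlib
import numpy as np

def quantize(features, bins=16):
    """Coarse quantization so small biometric noise maps to the same code."""
    return np.clip((features * bins).astype(int), 0, bins - 1).tobytes()

def revocable_template(bio_features, puf_response: bytes, salt: bytes) -> bytes:
    """Template = H(quantized biometric || PUF response || revocable salt)."""
    h = hashlib.sha256()
    h.update(quantize(bio_features))
    h.update(puf_response)
    h.update(salt)
    return h.digest()

bio = np.random.rand(64)                       # placeholder biometric feature vector
puf = b"\x1f\x8a\x3c\x90" * 8                  # placeholder 32-byte PUF response
t1 = revocable_template(bio, puf, salt=b"enrolment-2024")
t2 = revocable_template(bio, puf, salt=b"re-enrolment-2025")   # revocation: new salt
print(t1.hex() != t2.hex())                    # the old template is no longer valid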

Citations: 0
An electricity price and energy-efficient workflow scheduling in geographically distributed cloud data centers
IF 5.2, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-28. DOI: 10.1016/j.jksuci.2024.102170
Mehboob Hussain, Lian-Fu Wei, Amir Rehman, Abid Hussain, Muqadar Ali, Muhammad Hafeez Javed

Cloud computing platforms have become a favorable destination for running cloud workflow applications. However, such applications are typically complex and computationally intensive. Task scheduling in cloud environments, when formulated as an optimization problem, is proven to be NP-hard. Thus, efficient task scheduling plays a decisive role in minimizing energy costs. Electricity prices fluctuate depending on the supplying utility, time, and location. Therefore, optimizing energy costs has become a serious issue that must be considered when scheduling workflow applications across geographically distributed cloud data centers (GD-CDCs). To tackle this issue, we propose a dual-optimization approach, the electricity price and energy-efficient (EPEE) workflow scheduling algorithm, which simultaneously considers energy efficiency and fluctuating electricity prices across GD-CDCs and aims to minimize the electricity costs of workflow applications under deadline constraints. This novel integration of dynamic voltage and frequency scaling (DVFS) with energy and electricity price optimization is unique compared to existing methods. Moreover, our EPEE approach, which includes task prioritization, deadline partitioning, data center selection based on energy efficiency and price diversity, and dynamic task scheduling, provides a comprehensive solution that significantly reduces electricity costs and enhances resource utilization. In addition, the inclusion of both generated and original data transmission times further differentiates our approach, offering a more realistic and practical solution for cloud service providers (CSPs). The experimental results reveal that the EPEE model achieves higher success rates in meeting task deadlines and better resource utilization, cost, and energy efficiency than state-of-the-art algorithms adapted to similar problems.
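As a toy version of the selection step described above, the function below picks, for a single task, the data-centre/frequency option with the lowest estimated electricity cost (energy times local price) among those that still meet the task's sub-deadline. The linear DVFS runtime model and all numbers are assumptions, not the EPEE algorithm.

from dataclasses import dataclass

@dataclass
class Option:
    datacenter: str
    frequency_ghz: float     # DVFS operating point
    power_watt: float        # average power draw at that frequency
    price_per_kwh: float     # local electricity price

def cheapest_feasible(base_runtime_s, base_freq_ghz, sub_deadline_s, options):
    """Pick the option with the lowest electricity cost that meets the sub-deadline."""
    best, best_cost = None, float("inf")
    for opt in options:
        runtime = base_runtime_s * base_freq_ghz / opt.frequency_ghz   # simple DVFS model
        if runtime > sub_deadline_s:
            continue                                                   # misses the deadline
        energy_kwh = opt.power_watt * runtime / 3.6e6
        cost = energy_kwh * opt.price_per_kwh
        if cost < best_cost:
            best, best_cost = opt, cost
    return best, best_cost

options = [Option("us-east", 2.4, 95.0, 0.12),
           Option("eu-north", 1.8, 60.0, 0.08),
           Option("ap-south", 3.0, 120.0, 0.20)]
choice, cost = cheapest_feasible(base_runtime_s=600, base_freq_ghz=2.4,
                                 sub_deadline_s=900, options=options)
print(choice.datacenter, round(cost, 6))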

Citations: 0