Computing: Latest Articles

Dynamic attention guider network
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-30 | DOI: 10.1007/s00607-024-01328-4
Chunguang Yue, Jinbao Li, Qichen Wang, Donghuan Zhang

Hybrid networks, benefiting from both CNN and Transformer architectures, exhibit stronger feature extraction capabilities than standalone CNNs or Transformers. However, in hybrid networks, the lack of attention in CNNs or insufficient refinement in attention mechanisms hinders the highlighting of target regions. Additionally, the computational cost of self-attention in Transformers poses a challenge to further improving network performance. To address these issues, we propose a novel point-to-point Dynamic Attention Guider (DAG) that dynamically generates multi-scale, large-receptive-field attention to guide CNN networks to focus on target regions. Building upon DAG, we introduce a new hybrid network called the Dynamic Attention Guider Network (DAGN), which effectively combines Dynamic Attention Guider Block (DAGB) modules with Transformers to alleviate the computational cost of self-attention when processing high-resolution input images. Extensive experiments demonstrate that the proposed network outperforms existing state-of-the-art models across various downstream tasks. Specifically, the network achieves a Top-1 classification accuracy of 88.3% on ImageNet1k. For object detection and instance segmentation on COCO, it surpasses the best FocalNet-T model by 1.6 $AP^b$ and 1.5 $AP^m$, respectively, while achieving a top performance of 48.2% in semantic segmentation on ADE20K.
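
The abstract only sketches the mechanism, so the following is a minimal, illustrative sketch of the general idea of multi-scale, large-receptive-field attention gating a CNN feature map; the module name, dilation rates, and fusion layer are assumptions for illustration, not the authors' DAG/DAGB design:

```python
import torch
import torch.nn as nn

class DynamicAttentionGuider(nn.Module):
    """Toy sketch: build a spatial attention map from several dilated
    depthwise convolutions (large receptive fields at multiple scales)
    and use it to re-weight CNN features. Illustrative only."""

    def __init__(self, channels: int, dilations=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d,
                      dilation=d, groups=channels, bias=False)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), 1, kernel_size=1)

    def forward(self, x):                    # x: (B, C, H, W) CNN features
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        attn = torch.sigmoid(self.fuse(multi_scale))   # (B, 1, H, W) attention map
        return x * attn                      # steer features toward salient regions

features = torch.randn(2, 64, 56, 56)
guided = DynamicAttentionGuider(64)(features)
print(guided.shape)                          # torch.Size([2, 64, 56, 56])
```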

Citations: 0
Priority-based DAG task offloading and secondary resource allocation in IoT edge computing environments
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-29 | DOI: 10.1007/s00607-024-01327-5
Yishan Chen, Xiansong Luo, Peng Liang, Junxiao Han, Zhonghui Xu

With the development of IoT, the concept of intelligent services has gradually come to the fore. Intelligent services usually involve a large number of computation-intensive tasks with data dependencies that are often modelled as directed acyclic graphs (DAGs), and the offloading of DAG tasks is complex and has proven to be an NP-hard challenge. As a key research issue, the task offloading process migrates computation-intensive tasks from resource-constrained IoT devices to nearby edge servers, pursuing lower delay and energy consumption. However, data dependencies among tasks are complex, and coordinating computation-intensive tasks across multiple edge servers is challenging. In this paper, a flexible and generic DAG task model is built to support the associative task offloading process with complex data dependencies in IoT edge computing environments. Additionally, a priority-based DAG task offloading algorithm and a secondary resource allocation algorithm are proposed to minimize the response delay and improve the resource utilization of edge servers. Experimental results demonstrate that the proposed method supports the DAG task offloading process with the shortest response delay while outperforming all the benchmark policies, making it suitable for IoT edge computing environments.
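
To make the offloading idea concrete, here is a small sketch of one classic priority-based placement heuristic (upward-rank ordering plus greedy earliest-finish placement); the task graph, server speed factors, and cost model are toy assumptions and not the paper's algorithm:

```python
from functools import lru_cache

tasks = {            # task -> (compute cost, successor list); toy values
    "t1": (4, ("t2", "t3")),
    "t2": (3, ("t4",)),
    "t3": (2, ("t4",)),
    "t4": (5, ()),
}
servers = {"edge1": 1.0, "edge2": 0.5}     # relative processing-time factors (assumed)

@lru_cache(maxsize=None)
def upward_rank(t):
    # priority = own cost plus the longest downstream chain (HEFT-style upward rank)
    cost, succ = tasks[t]
    return cost + max((upward_rank(s) for s in succ), default=0)

order = sorted(tasks, key=upward_rank, reverse=True)   # higher priority first
server_free = {s: 0.0 for s in servers}
done = {}
for t in order:
    cost, _ = tasks[t]
    preds = [p for p, (_, succ) in tasks.items() if t in succ]
    ready = max((done[p] for p in preds), default=0.0)   # wait for predecessors
    best = min(servers, key=lambda s: max(server_free[s], ready) + cost * servers[s])
    done[t] = max(server_free[best], ready) + cost * servers[best]
    server_free[best] = done[t]
    print(f"{t} -> {best}, finishes at {done[t]:.1f}")
```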

Citations: 0
Analysis of strategies for scalable transaction creation in blockchains
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-29 | DOI: 10.1007/s00607-024-01324-8
Ole Delzer, Richard Hobeck, Ingo Weber, Dominik Kaaser, Michael Sober, Stefan Schulte

The growing popularity of blockchains highlights the need to improve their scalability. While previous research has focused on scaling transaction processing, the scalability of transaction creation remains unexplored. This issue is particularly important for organizations needing to send large volumes of transactions quickly or continuously. Scaling transaction creation is challenging, especially for blockchain platforms like Ethereum, which require transactions to include a sequence number. This paper proposes four different methods to scale transaction creation. Our experimental evaluation assesses the scalability and latency of these methods, identifying two as feasible for scaling transaction creation. Additionally, we provide an in-depth theoretical analysis of these two methods.
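
As a rough illustration of why sequence numbers make transaction creation hard to parallelize, the sketch below pre-allocates Ethereum-style nonces from a thread-safe counter so that many workers can build transactions concurrently; it is one obvious strategy under assumed names, not necessarily one of the four methods evaluated in the paper:

```python
import itertools
import threading
from concurrent.futures import ThreadPoolExecutor

class NonceAllocator:
    """Hand out consecutive sequence numbers (nonces) without gaps or duplicates."""

    def __init__(self, start_nonce: int):
        self._counter = itertools.count(start_nonce)
        self._lock = threading.Lock()

    def next(self) -> int:
        with self._lock:                 # one nonce per transaction, allocated atomically
            return next(self._counter)

def create_transaction(allocator: NonceAllocator, payload: str) -> dict:
    # signing and submission omitted; only the sequencing problem is illustrated
    return {"nonce": allocator.next(), "data": payload}

allocator = NonceAllocator(start_nonce=42)      # assumed current account nonce
with ThreadPoolExecutor(max_workers=8) as pool:
    txs = list(pool.map(lambda i: create_transaction(allocator, f"tx-{i}"), range(100)))

assert sorted(t["nonce"] for t in txs) == list(range(42, 142))   # gap-free sequence
```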

Citations: 0
µXL: explainable lead generation with microservices and hypothetical answers
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-24 | DOI: 10.1007/s00607-024-01321-x
Luís Cruz-Filipe, Sofia Kostopoulou, Fabrizio Montesi, Jonas Vistrup

Lead generation refers to the identification of potential topics (the ‘leads’) of importance for journalists to report on. In this article we present µXL, a new lead generation tool based on a microservice architecture that includes a component of explainable AI. µXL collects and stores historical and real-time data from web sources, like Google Trends, and generates current and future leads. Leads are produced by a novel engine for hypothetical reasoning based on temporal logical rules, which can identify propositions that may hold depending on the outcomes of future events. This engine also supports additional features that are relevant for lead generation, such as user-defined predicates (allowing useful custom atomic propositions to be defined as Java functions) and negation (needed to specify and reason about leads characterized by the absence of specific properties). Our microservice architecture is designed using state-of-the-art methods and tools for API design and implementation, namely API patterns and the Jolie programming language. Thus, our development provides an additional validation of their usefulness in a new application domain (journalism). We also carry out an empirical evaluation of our tool.
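
The following toy sketch illustrates only the notion of a hypothetical answer, i.e., a conclusion that holds conditionally on future events; the facts, rule format, and evaluation function are invented for illustration and are far simpler than the µXL engine:

```python
from datetime import date

# assumed toy data: facts already observed from web sources
known_facts = {("search_spike", "topic_x", date(2024, 7, 20))}

rule = {
    "conclusion": ("lead", "topic_x"),
    "premises": [
        ("search_spike", "topic_x", date(2024, 7, 20)),   # observed
        ("news_coverage", "topic_x", date(2024, 7, 26)),  # still in the future
    ],
}

def evaluate(rule, facts, today):
    # premises not yet known split into "future" (may still happen) and "past" (failed)
    missing_future = [p for p in rule["premises"] if p not in facts and p[2] > today]
    missing_past = [p for p in rule["premises"] if p not in facts and p[2] <= today]
    if missing_past:
        return ("no answer", missing_past)
    if missing_future:
        return ("hypothetical", missing_future)   # holds if these events occur
    return ("definite", [])

print(evaluate(rule, known_facts, today=date(2024, 7, 24)))
# ('hypothetical', [('news_coverage', 'topic_x', datetime.date(2024, 7, 26))])
```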

Citations: 0
Heuristic algorithm for an optimal solution of fully fuzzy transportation problem
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-21 | DOI: 10.1007/s00607-024-01319-5
Nermin Kartli, Erkan Bostanci, Mehmet Serdar Guzel

Many problems involving uncertainty can be modeled with fuzzy numbers chosen according to the type of uncertainty, and it is natural to express the solution to such a problem with fuzzy numbers as well. In this study, we consider the fully fuzzy transportation problem. All input parameters of the problem are expressed as fuzzy numbers given in parametric form. We propose a new heuristic algorithm to approximate the fuzzy optimal solution. With the proposed method, the fuzzy problem is solved by transforming it into two independent parametric problems. We first divide the interval [0, 1] into a sufficiently large number of equal subintervals, then write a linear programming problem for each partition point and solve these problems by transforming them into transportation problems. The proposed algorithm is illustrated with examples.
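
A minimal sketch of the overall scheme described above, assuming made-up parametric supplies and demands: sample the parameter r in [0, 1] at equally spaced points and solve the resulting crisp transportation problem at each point as a linear program (here with SciPy's linprog rather than a dedicated transportation solver):

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0], [5.0, 3.0]])            # unit shipping costs (toy data)

def supplies(r):   # parametric (fuzzy) supplies, r in [0, 1]; assumed shapes
    return [20 + 5 * r, 30 - 5 * r]

def demands(r):    # kept balanced with total supply (= 50) for every r
    return [25 + 2 * r, 25 - 2 * r]

for r in np.linspace(0.0, 1.0, 5):                   # partition points of [0, 1]
    s, d = supplies(r), demands(r)
    # decision vector x = (x11, x12, x21, x22); row sums = supplies, column sums = demands
    A_eq = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
    b_eq = s + d
    res = linprog(cost.flatten(), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 4, method="highs")
    print(f"r={r:.2f}: optimal cost {res.fun:.2f}, plan {np.round(res.x, 2)}")
```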

Citations: 0
Efficient hashing technique for malicious profile detection at hypervisor environment
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-19 | DOI: 10.1007/s00607-024-01325-7
Anumukonda Naga Seshu Kumar, Rajesh Kumar Yadav, Nallanthighal Srinivasa Raghava

Attack detection in cyber-security systems is a complex task that requires domain-specific knowledge and cognitive intelligence to detect novel and unknown attacks in large-scale network data. This research explores how network operations and network security affect the detection of unknown attacks in network systems. This paper presents a hash-based profile-matching technique for attack detection; the main objective is to detect unknown attacks using a profile-matching approach in hypervisors. Hypervisors are versatile because they allow the utilization of available system resources, and the virtual machines (VMs) they host do not depend on the host hardware, which makes hypervisors advantageous. In addition, hypervisors have direct access to hardware resources such as memory, storage, and processors. However, hypervisors are more susceptible to security threats, which can attack each and every VM. A SHA3-512 hashing algorithm generates hash values in the hypervisor, and the proposed model verifies whether a profile is malicious or benign. The performance of the hash-based profile-matching technique is compared with traditional hash techniques, namely the SHA-256 and MD5 algorithms. Results show that the proposed SHA3-512 approach achieves high accuracy with a zero false-positive rate, and simulation results show that its computation time is lower than that of the SHA-256 and MD5 algorithms. The performance analysis validates that the hash-based approach achieves reliable performance for attack detection. The effectiveness of the hashing technique was evaluated using three metrics: attack detection rate (DR), false-positive rate (FPR), and computational time. Simulation results show that the SHA3-512 algorithm achieves a detection rate of 97.24% with a zero false-positive rate and faster computation than the SHA-256 and MD5 algorithms.
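
As a small illustration of hash-based profile matching with SHA3-512 (the profile format, whitelist, and verdict logic are assumptions, not the paper's detection model):

```python
import hashlib
import json

def profile_digest(profile: dict) -> str:
    # canonicalize the profile so logically equal profiles hash identically
    canonical = json.dumps(profile, sort_keys=True).encode("utf-8")
    return hashlib.sha3_512(canonical).hexdigest()

# assumed known-benign VM profiles recorded by the hypervisor
benign_profiles = [
    {"vm": "vm-01", "open_ports": [22, 443], "kernel_modules": ["ext4", "kvm"]},
]
whitelist = {profile_digest(p) for p in benign_profiles}

observed = {"vm": "vm-01", "open_ports": [22, 443, 4444],   # unexpected port
            "kernel_modules": ["ext4", "kvm"]}
verdict = "benign" if profile_digest(observed) in whitelist else "suspicious"
print(verdict)   # suspicious
```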

Citations: 0
Deep recommendation with iteration directional adversarial training
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-17 | DOI: 10.1007/s00607-024-01326-6
Agyemang Paul, Yuxuan Wan, Zhefu Wu, Boyu Chen, Shufeng Gong

Deep neural networks are vulnerable to attacks, posing significant security concerns across various applications, particularly in computer vision. Adversarial training has demonstrated effectiveness in improving the robustness of deep learning models by incorporating perturbations into the input space during training. Recently, adversarial training has been successfully applied to deep recommender systems. In these systems, user and item embeddings are perturbed through a minimax game, with constraints on perturbation directions, to enhance the model’s robustness and generalization. However, they still fail to defend against iterative attacks, which have shown an over 60% increase in effectiveness in the computer vision domain. Deep recommender systems may therefore be more susceptible to iterative attacks, which might lead to generalization failures. In this paper, we adapt iterative adversarial examples to deep recommender systems. Specifically, we propose a Deep Recommender with Iteration Directional Adversarial Training (DRIDAT) that combines an attention mechanism and directional adversarial training for recommendation. First, we establish consumer-product collaborative attention to capture each consumer’s differing preferences across the products they are interested in, as well as the distinct preferences of different consumers for the same product. Second, we train the DRIDAT objective function using adversarial learning to minimize the impact of iterative attacks. In addition, the maximum-direction attack could push the embedding vectors of attacked inputs towards instances with distinct labels; we mitigate this problem by imposing suitable constraints on the direction of the attack. Finally, we perform a series of evaluations on two prominent datasets. The findings show that our methodology outperforms all other methods on all metrics.
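
The sketch below illustrates only the generic idea of a direction-constrained adversarial perturbation applied to embeddings; the loss, the orthogonality constraint, and the tensor shapes are assumptions for illustration and do not reproduce the DRIDAT objective:

```python
import torch

emb = torch.randn(8, 16, requires_grad=True)        # toy user/item embeddings
target = torch.randn(8, 16)
loss = torch.nn.functional.mse_loss(emb, target)     # stand-in recommendation loss
grad, = torch.autograd.grad(loss, emb)

eps = 0.05
# worst-case (loss-increasing) direction, normalized per embedding
delta = eps * grad / (grad.norm(dim=1, keepdim=True) + 1e-12)

# example direction constraint (assumed): keep the perturbation orthogonal to the
# embedding itself so the attack cannot simply push it toward a different identity
proj = (delta * emb).sum(dim=1, keepdim=True) / (emb.norm(dim=1, keepdim=True) ** 2 + 1e-12)
delta = delta - proj * emb

adv_emb = (emb + delta).detach()                     # perturbed embeddings for training
print(adv_emb.shape)                                 # torch.Size([8, 16])
```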

Citations: 0
Federated learning for digital healthcare: concepts, applications, frameworks, and challenges
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-10 | DOI: 10.1007/s00607-024-01317-7
D. N. Sachin, B. Annappa, Sateesh Ambesange

Hospitals have adopted digital technologies in the healthcare sector for a wide range of healthcare-related applications. Driven by the COVID-19 pandemic, digital transformation has taken place in many domains, especially healthcare, where it has streamlined various activities. With advances in technology, the concept of telemedicine has evolved over the years, leading to personalized healthcare and drug discovery. The use of machine learning (ML) techniques in healthcare enables healthcare professionals to make more accurate and earlier diagnoses. Training these ML models requires a massive amount of data, including patients’ personal data, that must be protected from unethical use, and sharing these data to train ML models may violate data privacy. A distributed ML paradigm called federated learning (FL) allows different medical research institutions, hospitals, and healthcare devices to train ML models without sharing raw data. This survey paper overviews existing research work on FL-related use cases and applications. It also discusses the state-of-the-art tools and techniques available for FL research, current shortcomings, and future challenges in using FL in healthcare.
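
As a minimal illustration of the core FL idea mentioned above (local training at each site, with only model weights shared and averaged), here is a FedAvg-style sketch with toy linear-regression clients; the data, model, and weighting are assumptions:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    # plain linear regression trained locally by gradient descent
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                                # three "hospitals" with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

global_w = np.zeros(2)
for _ in range(5):                                # federated rounds
    local = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(local, axis=0, weights=sizes)   # FedAvg aggregation
print(np.round(global_w, 3))      # approaches [ 2., -1.] without pooling raw data
```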

Citations: 0
Exploiting recurrent graph neural networks for suffix prediction in predictive monitoring
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-09 | DOI: 10.1007/s00607-024-01315-9
Efrén Rama-Maneiro, Juan C. Vidal, Manuel Lama, Pablo Monteagudo-Lago

Predictive monitoring is a subfield of process mining that aims to predict how a running case will unfold in the future. One of its main challenges is forecasting the sequence of activities that will occur from a given point in time, known as suffix prediction. Most approaches to the suffix prediction problem learn to predict the suffix by learning only how to predict the next activity, while disregarding the structural information present in the process model. This paper proposes a novel architecture based on an encoder-decoder model with an attention mechanism that decouples the representation learning of the prefixes from the inference phase, predicting only the activities of the suffix. During the inference phase, this architecture is extended with a heuristic search algorithm that selects the most probable suffix according to both the structural information extracted from the process model and the information extracted from the log. Our approach has been tested on 12 public event logs against 6 state-of-the-art proposals, which it significantly outperforms.
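
To illustrate the inference-time idea of searching for the most probable suffix, here is a toy beam-style search over next-activity distributions; the probability table stands in for the trained decoder and the process-model guidance is omitted, so this is not the paper's algorithm:

```python
import math

def next_activity_probs(prefix):        # stand-in for the trained decoder (assumed values)
    table = {
        "register":  {"triage": 0.7, "lab": 0.3},
        "triage":    {"lab": 0.6, "discharge": 0.4},
        "lab":       {"discharge": 0.9, "triage": 0.1},
        "discharge": {"<eos>": 1.0},
    }
    return table[prefix[-1]]

def suffix_search(prefix, beam_width=2, max_len=6):
    beams = [(0.0, list(prefix))]                       # (negative log-probability, sequence)
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == "<eos>":                      # finished suffixes are kept as-is
                candidates.append((score, seq))
                continue
            for act, p in next_activity_probs(seq).items():
                candidates.append((score - math.log(p), seq + [act]))
        beams = sorted(candidates)[:beam_width]         # keep the most probable candidates
        if all(seq[-1] == "<eos>" for _, seq in beams):
            break
    best_score, best_seq = beams[0]
    return best_seq[len(prefix):], math.exp(-best_score)

print(suffix_search(["register", "triage"]))
# (['lab', 'discharge', '<eos>'], ~0.54)
```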

Citations: 0
Optimizing pre-copy live virtual machine migration in cloud computing using machine learning-based prediction model
IF 3.7 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-07-08 | DOI: 10.1007/s00607-024-01318-6
Raseena M. Haris, Mahmoud Barhamgi, Armstrong Nhlabatsi, Khaled M. Khan

One of the preconditions for efficient cloud computing services is the continuous availability of services to clients. However, services may become temporarily unavailable for various reasons, including routine maintenance, load balancing, cyber-attacks, power management, fault tolerance, emergency incident response, and resource usage. Live Virtual Machine Migration (LVM) addresses service unavailability by moving virtual machines between hosts without disrupting running services. Pre-copy memory migration is a common LVM approach used in cloud systems, but it faces challenges due to the high rate of frequently updated memory pages, known as dirty pages. Transferring these dirty pages during pre-copy migration prolongs the overall migration time. If a large number of memory pages remain after a predefined number of page-transfer iterations, the stop-and-copy phase is initiated, which significantly increases downtime and negatively impacts service availability. To mitigate this issue, we introduce a prediction-based approach that optimizes the migration process by dynamically halting the iterative phase when the predicted downtime falls below a predefined threshold. Our proposed machine learning method was rigorously evaluated through experiments conducted on a dedicated testbed using KVM/QEMU technology, involving different VM sizes and memory-intensive workloads. A comparative analysis against existing pre-copy methods and the default migration approach reveals a remarkable improvement, with an average 64.91% reduction in downtime for different RAM configurations under write-intensive workloads, along with an average reduction in total migration time of approximately 85.81%. These findings underscore the practical advantages of our method in reducing service disruptions during live virtual machine migration in cloud systems.
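
A small sketch of the halting logic described above, with a naive stand-in predictor (remaining dirty pages divided by link bandwidth) in place of the paper's trained machine learning model; page size, bandwidth, dirty rate, and threshold are assumed values:

```python
PAGE_SIZE_KB = 4
LINK_MBPS = 1000
DOWNTIME_THRESHOLD_S = 0.3          # stop iterating once predicted downtime drops below this

def predicted_downtime(remaining_pages: int) -> float:
    # stand-in model: time to push the remaining dirty pages over the link
    return remaining_pages * PAGE_SIZE_KB * 8 / (LINK_MBPS * 1000)

dirty_pages = 200_000               # pages dirty at the start of the iterative phase
dirty_rate = 0.4                    # fraction of transferred pages re-dirtied per round
for iteration in range(1, 30):
    transferred = dirty_pages
    dirty_pages = int(transferred * dirty_rate)      # pages dirtied during the transfer
    downtime = predicted_downtime(dirty_pages)
    print(f"iter {iteration}: remaining {dirty_pages}, predicted downtime {downtime:.3f}s")
    if downtime < DOWNTIME_THRESHOLD_S:
        print("-> halt iterative phase, start stop-and-copy")
        break
```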

Citations: 0