
Latest Publications in Computing

Phasic parallel-network policy: a deep reinforcement learning framework based on action correlation
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-08-06. DOI: 10.1007/s00607-024-01329-3
Jiahao Li, Tianhan Gao, Qingwei Mi

Reinforcement learning algorithms show significant variations in performance across different environments. Optimizing the algorithms themselves has therefore become a major research task, since their instability and unpredictability have consistently hindered their generalization capabilities. In this study, we address this issue by optimizing the algorithm itself rather than applying environment-specific optimizations. We begin by tackling the uncertainty caused by mutual interference among actions, aiming to enhance overall performance. We propose the Phasic Parallel-Network Policy (PPP), a deep reinforcement learning framework that diverges from the traditional actor-critic method by grouping the action space based on action correlations. PPP incorporates parallel network structures and combines them with network optimization strategies. With the assistance of the value network, the training process is divided into distinct stages, namely the Extra-group Policy Phase and the Inter-group Optimization Phase. PPP thus breaks through the traditional unit learning structure. The experimental results indicate that it not only improves training effectiveness but also reduces the number of training steps, enhances sample efficiency, and significantly improves stability and generalization.
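
The abstract does not detail how actions are grouped by correlation. As a rough illustration of the idea only, the sketch below clusters action dimensions whose trajectories are strongly correlated, so that each group could then be routed to its own parallel policy head; the function name, the greedy rule, and the threshold are hypothetical, not taken from the paper.

```python
import numpy as np

def group_actions_by_correlation(action_traces: np.ndarray, threshold: float = 0.5):
    """Greedily cluster action dimensions with strongly correlated trajectories.

    action_traces: (timesteps, action_dim) array of actions from rollouts.
    Returns a list of groups, each a list of action-dimension indices.
    Hypothetical pre-processing step; the paper's actual grouping rule is not given.
    """
    corr = np.abs(np.corrcoef(action_traces.T))   # (action_dim, action_dim)
    unassigned = set(range(corr.shape[0]))
    groups = []
    while unassigned:
        seed = unassigned.pop()
        group = [seed]
        for j in list(unassigned):
            if corr[seed, j] >= threshold:        # correlated actions share a group
                group.append(j)
                unassigned.remove(j)
        groups.append(group)
    return groups

# Each group would then be handled by its own parallel policy head,
# trained in alternating phases (extra-group policy, inter-group optimization).
traces = np.random.randn(1000, 6)
print(group_actions_by_correlation(traces))
```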

{"title":"Phasic parallel-network policy: a deep reinforcement learning framework based on action correlation","authors":"Jiahao Li, Tianhan Gao, Qingwei Mi","doi":"10.1007/s00607-024-01329-3","DOIUrl":"https://doi.org/10.1007/s00607-024-01329-3","url":null,"abstract":"<p>Reinforcement learning algorithms show significant variations in performance across different environments. Optimization for reinforcement learning thus becomes the major research task since the instability and unpredictability of the reinforcement learning algorithms have consistently hindered their generalization capabilities. In this study, we address this issue by optimizing the algorithm itself rather than environment-specific optimizations. We start by tackling the uncertainty caused by the mutual influence of original action interferences, aiming to enhance the overall performance. The <i>Phasic Parallel-Network Policy</i> (PPP), which is a deep reinforcement learning framework. It diverges from the traditional policy actor-critic method by grouping the action space based on action correlations. The PPP incorporates parallel network structures and combines network optimization strategies. With the assistance of the value network, the training process is divided into different specific stages, namely the Extra-group Policy Phase and the Inter-group Optimization Phase. PPP breaks through the traditional unit learning structure. The experimental results indicate that it not only optimizes training effectiveness but also reduces training steps, enhances sample efficiency, and significantly improves stability and generalization.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A cost, time, energy-aware workflow scheduling using adaptive PSO algorithm in a cloud–fog environment
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-31. DOI: 10.1007/s00607-024-01322-w
Gyan Singh, Amit K. Chaturvedi

Recent years have seen an exponential rise in data produced by Internet of Things (IoT) applications. Cloud servers were not designed for such extensive data, leading to challenges such as increased makespan, cost, bandwidth usage, energy consumption, and network latency. To address these, the cloud–fog environment has emerged as an extension of cloud servers, offering services closer to IoT devices. Scheduling workflow applications to optimize multiple conflicting objectives in the cloud–fog environment is an NP-hard problem. Particle Swarm Optimization (PSO) is a good choice for multi-objective solutions due to its simplicity and rapid convergence, but it suffers from shortcomings such as premature convergence and stagnation. To address these challenges, we formalize a theoretical background for scheduling workflow applications with multiple conflicting objectives in the cloud–fog environment. Subsequently, we propose an adaptive particle swarm optimization (APSO) algorithm with novel enhancements, including an S-shaped sigmoid function to dynamically decrease the inertia weight and a linear updating mechanism for the cognitive factor. Their integration in cloud–fog environments has not been previously explored. This novel application addresses unique challenges of workflow scheduling in cloud–fog systems, such as heterogeneous resource management, energy consumption, and increased cost. The effectiveness of APSO is evaluated using a real-world scientific workflow in a simulated cloud–fog environment and compared with four meta-heuristics. Compared to the other meta-heuristics, our proposed workflow scheduling significantly reduces makespan and energy consumption without compromising overall cost.
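
The abstract names two concrete enhancements: an S-shaped sigmoid decay of the inertia weight and a linear update of the cognitive factor. Below is a minimal sketch of how such adaptive coefficients could plug into a standard PSO velocity update; the bounds and steepness constants are assumptions, since the paper's exact values are not stated in the abstract.

```python
import numpy as np

def sigmoid_inertia(t: int, t_max: int, w_max: float = 0.9, w_min: float = 0.4,
                    steepness: float = 10.0) -> float:
    """S-shaped decay of the inertia weight from w_max toward w_min.

    A sketch of the idea described in the abstract; the functional form and
    constants used in the APSO paper are assumptions here.
    """
    x = steepness * (t / t_max - 0.5)      # map iteration progress to [-5, 5]
    return w_min + (w_max - w_min) / (1.0 + np.exp(x))

def linear_cognitive(t: int, t_max: int, c_start: float = 2.5, c_end: float = 0.5) -> float:
    """Linearly anneal the cognitive factor c1 over the run (assumed bounds)."""
    return c_start + (c_end - c_start) * t / t_max

def update_velocity(v, x, pbest, gbest, t, t_max, c2=2.0, rng=np.random.default_rng(0)):
    """One standard PSO velocity update using the adaptive coefficients."""
    w, c1 = sigmoid_inertia(t, t_max), linear_cognitive(t, t_max)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```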

{"title":"A cost, time, energy-aware workflow scheduling using adaptive PSO algorithm in a cloud–fog environment","authors":"Gyan Singh, Amit K. Chaturvedi","doi":"10.1007/s00607-024-01322-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01322-w","url":null,"abstract":"<p>Recent years have seen an exponential rise in data produced by Internet of Things (IoT) applications. Cloud servers were not designed for such extensive data, leading to challenges like increased makespan, cost, bandwidth, energy consumption, and network latency. To address these, the cloud–fog environment has emerged as an extension to cloud servers, offering services closer to IoT devices. Scheduling workflow applications to optimize multiple conflicting objectives in cloud fog is an NP-hard problem. Particle Swarm Optimization (PSO) is a good choice for multi-objective solutions due to its simplicity and rapid convergence. However, it has shortcomings like premature convergence and stagnation. To address these challenges, we formalize a theoretical background for scheduling workflow applications in the cloud–fog environment with multiple conflicting objectives. Subsequently, we propose an adaptive particle swarm optimization (APSO) algorithm with novel enhancements, including an S-shaped sigmoid function to dynamically decrease inertia weight and a linear updating mechanism for cognitive factors. Their integration in cloud–fog environments has not been previously explored. This novel application addresses unique challenges of workflow scheduling in cloud–fog systems, such as heterogeneous resource management, energy consumption, and increased cost. The effectiveness of APSO is evaluated using a real-world scientific workflow in a simulated cloud–fog environment and compared with four meta-heuristics. Our proposed workflow scheduling significantly reduces makespan and energy consumption without compromising overall cost compared to other meta-heuristics.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141869709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic attention guider network
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-30. DOI: 10.1007/s00607-024-01328-4
Chunguang Yue, Jinbao Li, Qichen Wang, Donghuan Zhang

Hybrid networks, benefiting from both CNN and Transformer architectures, exhibit stronger feature extraction capabilities than standalone CNNs or Transformers. However, in hybrid networks, the lack of attention in the CNN part, or insufficient refinement in the attention mechanisms, hinders the highlighting of target regions. Additionally, the computational cost of self-attention in Transformers poses a challenge to further improving network performance. To address these issues, we propose a novel point-to-point Dynamic Attention Guider (DAG) that dynamically generates multi-scale, large-receptive-field attention to guide CNN networks to focus on target regions. Building upon DAG, we introduce a new hybrid network called the Dynamic Attention Guider Network (DAGN), which effectively combines Dynamic Attention Guider Block (DAGB) modules with Transformers to alleviate the computational cost of self-attention when processing high-resolution input images. Extensive experiments demonstrate that the proposed network outperforms existing state-of-the-art models across various downstream tasks. Specifically, the network achieves a Top-1 classification accuracy of 88.3% on ImageNet1k. For object detection and instance segmentation on COCO, it surpasses the best FocalNet-T model by 1.6 AP^b and 1.5 AP^m, respectively, while achieving a top performance of 48.2% in semantic segmentation on ADE20K.
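
As a toy illustration of attention guiding only (not the paper's architecture), the sketch below builds a multi-scale spatial attention map by smoothing a saliency map at several window sizes, approximating different receptive fields, and gates the CNN feature maps with it elementwise. All names and parameters are invented.

```python
import numpy as np

def multi_scale_attention_gate(features: np.ndarray, kernel_sizes=(3, 7, 11)) -> np.ndarray:
    """Gate (channels, height, width) features with a multi-scale attention map.

    Toy stand-in for the guiding idea: average over channels to get a saliency
    map, smooth it with box filters of several sizes (different receptive
    fields), fuse, squash to (0, 1), and multiply back onto the features.
    """
    c, h, w = features.shape
    saliency = features.mean(axis=0)                  # (h, w)
    maps = []
    for k in kernel_sizes:
        pad = k // 2
        padded = np.pad(saliency, pad, mode="edge")
        smoothed = np.zeros_like(saliency)            # naive box filter
        for i in range(h):
            for j in range(w):
                smoothed[i, j] = padded[i:i + k, j:j + k].mean()
        maps.append(smoothed)
    fused = np.mean(maps, axis=0)
    attention = 1.0 / (1.0 + np.exp(-fused))          # sigmoid gate in (0, 1)
    return features * attention[None, :, :]

feats = np.random.randn(8, 16, 16)
print(multi_scale_attention_gate(feats).shape)        # (8, 16, 16)
```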

{"title":"Dynamic attention guider network","authors":"Chunguang Yue, Jinbao Li, Qichen Wang, Donghuan Zhang","doi":"10.1007/s00607-024-01328-4","DOIUrl":"https://doi.org/10.1007/s00607-024-01328-4","url":null,"abstract":"<p>Hybrid networks, benefiting from both CNNs and Transformers architectures, exhibit stronger feature extraction capabilities compared to standalone CNNs or Transformers. However, in hybrid networks, the lack of attention in CNNs or insufficient refinement in attention mechanisms hinder the highlighting of target regions. Additionally, the computational cost of self-attention in Transformers poses a challenge to further improving network performance. To address these issues, we propose a novel point-to-point Dynamic Attention Guider(DAG) that dynamically generates multi-scale large receptive field attention to guide CNN networks to focus on target regions. Building upon DAG, we introduce a new hybrid network called the Dynamic Attention Guider Network (DAGN), which effectively combines Dynamic Attention Guider Block (DAGB) modules with Transformers to alleviate the computational cost of self-attention in processing high-resolution input images. Extensive experiments demonstrate that the proposed network outperforms existing state-of-the-art models across various downstream tasks. Specifically, the network achieves a Top-1 classification accuracy of 88.3% on ImageNet1k. For object detection and instance segmentation on COCO, it respectively surpasses the best FocalNet-T model by 1.6 <span>(AP^b)</span> and 1.5 <span>(AP^m)</span>, while achieving a top performance of 48.2% in semantic segmentation on ADE20K.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141869711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Priority-based DAG task offloading and secondary resource allocation in IoT edge computing environments
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-29. DOI: 10.1007/s00607-024-01327-5
Yishan Chen, Xiansong Luo, Peng Liang, Junxiao Han, Zhonghui Xu

With the development of the IoT, the concept of intelligent services has gradually come to the fore. Intelligent services usually involve a large number of computation-intensive tasks with data dependencies that are often modelled as directed acyclic graphs (DAGs), and the offloading of DAG tasks is complex and has proven to be an NP-hard challenge. As a key research issue, the task offloading process migrates computation-intensive tasks from resource-constrained IoT devices to nearby edge servers while pursuing lower delay and energy consumption. However, data dependencies among tasks are complex, and it is challenging to coordinate computation-intensive tasks among multiple edge servers. In this paper, a flexible and generic DAG task model is built to support the associative task offloading process with complex data dependencies in IoT edge computing environments. Additionally, a priority-based DAG task offloading algorithm and a secondary resource allocation algorithm are proposed to minimize the response delay and improve the resource utilization of edge servers. Experimental results demonstrate that the proposed method supports the DAG task offloading process with the shortest response delay while outperforming all the benchmark policies, making it suitable for IoT edge computing environments.
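
The abstract does not spell out the priority rule or the allocation step. The sketch below illustrates one plausible shape of such a scheduler: a HEFT-style upward rank orders the DAG tasks, and each task is greedily placed on the edge server giving the earliest finish time, accounting for inter-server transfer delays. The rank formula and all names are assumptions, not the paper's algorithm.

```python
def upward_rank(tasks, succ, cost, comm):
    """Order tasks by a HEFT-style upward rank: the length of the longest
    remaining path to the exit task (an assumed priority rule)."""
    memo = {}
    def rank(t):
        if t not in memo:
            memo[t] = cost[t] + max(
                (comm.get((t, s), 0) + rank(s) for s in succ.get(t, [])),
                default=0)
        return memo[t]
    return sorted(tasks, key=rank, reverse=True)

def offload(tasks, succ, cost, comm, servers):
    """Greedy earliest-finish-time placement of prioritized DAG tasks.

    servers maps server name -> relative speed; comm maps (u, v) -> transfer
    delay, paid only when u and v run on different servers.
    """
    pred = {t: [] for t in tasks}
    for u, vs in succ.items():
        for v in vs:
            pred[v].append(u)
    ready_at = {s: 0.0 for s in servers}          # when each server frees up
    finish, placement = {}, {}
    for t in upward_rank(tasks, succ, cost, comm):
        best = None
        for s, speed in servers.items():
            start = ready_at[s]
            for p in pred[t]:                     # wait for inputs (+ transfer)
                transfer = comm.get((p, t), 0) if placement[p] != s else 0
                start = max(start, finish[p] + transfer)
            f = start + cost[t] / speed
            if best is None or f < best[0]:
                best = (f, s)
        finish[t], placement[t] = best
        ready_at[best[1]] = best[0]
    return placement, max(finish.values())        # placement and makespan

tasks = ["a", "b", "c", "d"]
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
cost = {"a": 2, "b": 3, "c": 2, "d": 1}
comm = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 1}
print(offload(tasks, succ, cost, comm, {"edge1": 1.0, "edge2": 2.0}))
```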

{"title":"Priority-based DAG task offloading and secondary resource allocation in IoT edge computing environments","authors":"Yishan Chen, Xiansong Luo, Peng Liang, Junxiao Han, Zhonghui Xu","doi":"10.1007/s00607-024-01327-5","DOIUrl":"https://doi.org/10.1007/s00607-024-01327-5","url":null,"abstract":"<p>With the development of IoT, the concept of intelligent services has gradually come to the fore. Intelligent services usually involve a large number of computation intensive tasks with data dependencies that are often modelled as directed acyclic graphs (DAGs), and the offloading of DAG tasks is complex and has proven to be an NP hard challenge. As a key research issue, the task offloading process migrates the computation intensive tasks from resource-constrained IoT devices to nearby edge servers, and pursuing a lower delay and energy consumption. However, data dependencies among tasks are complex, and it is challenging to coordinate the computation intensive tasks among multiple edge servers. In this paper, a flexible and generic DAG task model is built to support the associative task offloading process with complex data dependencies in IoT edge computing environments. Additionally, a priority-based DAG task offloading algorithm and a secondary resource allocation algorithm are proposed to minimize the response delay and improve the resource utilization of edge servers. Experimental results demonstrate that the proposed method can well support the DAG task offloading process with the shortest response delay, while outperforming all the benchmark policies, which is suitable for IoT edge computing environments.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141869712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analysis of strategies for scalable transaction creation in blockchains
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-29. DOI: 10.1007/s00607-024-01324-8
Ole Delzer, Richard Hobeck, Ingo Weber, Dominik Kaaser, Michael Sober, Stefan Schulte

The growing popularity of blockchains highlights the need to improve their scalability. While previous research has focused on scaling transaction processing, the scalability of transaction creation remains unexplored. This issue is particularly important for organizations needing to send large volumes of transactions quickly or continuously. Scaling transaction creation is challenging, especially for blockchain platforms like Ethereum, which require transactions to include a sequence number. This paper proposes four different methods to scale transaction creation. Our experimental evaluation assesses the scalability and latency of these methods, identifying two as feasible for scaling transaction creation. Additionally, we provide an in-depth theoretical analysis of these two methods.
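
The sequence number mentioned here is Ethereum's account nonce: each transaction from an account must carry the next consecutive nonce, which serializes naive transaction creation. One common way to parallelize around it, shown below purely as an illustration (not one of the paper's four methods), is to reserve nonces from a shared counter so that payload assembly and signing can proceed concurrently.

```python
import threading

class NonceReservoir:
    """Hands out consecutive account nonces to concurrent transaction builders.

    Illustrative only: mirrors the constraint described in the abstract, not
    any specific method evaluated in the paper.
    """
    def __init__(self, start_nonce: int):
        self._next = start_nonce
        self._lock = threading.Lock()

    def reserve(self) -> int:
        with self._lock:                 # the only serialized step
            nonce, self._next = self._next, self._next + 1
            return nonce

def build_transaction(reservoir: NonceReservoir, payload: bytes) -> dict:
    # Payload assembly and signing can run fully in parallel; only the
    # nonce reservation is a critical section.
    return {"nonce": reservoir.reserve(), "data": payload.hex()}

reservoir = NonceReservoir(start_nonce=42)
txs = []
threads = [threading.Thread(target=lambda: txs.append(build_transaction(reservoir, b"\x01")))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(tx["nonce"] for tx in txs))   # 42..49, no gaps or duplicates
```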

{"title":"Analysis of strategies for scalable transaction creation in blockchains","authors":"Ole Delzer, Richard Hobeck, Ingo Weber, Dominik Kaaser, Michael Sober, Stefan Schulte","doi":"10.1007/s00607-024-01324-8","DOIUrl":"https://doi.org/10.1007/s00607-024-01324-8","url":null,"abstract":"<p>The growing popularity of blockchains highlights the need to improve their scalability. While previous research has focused on scaling transaction processing, the scalability of transaction creation remains unexplored. This issue is particularly important for organizations needing to send large volumes of transactions quickly or continuously. Scaling transaction creation is challenging, especially for blockchain platforms like Ethereum, which require transactions to include a sequence number. This paper proposes four different methods to scale transaction creation. Our experimental evaluation assesses the scalability and latency of these methods, identifying two as feasible for scaling transaction creation. Additionally, we provide an in-depth theoretical analysis of these two methods.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141869710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
μXL: explainable lead generation with microservices and hypothetical answers
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-24. DOI: 10.1007/s00607-024-01321-x
Luís Cruz-Filipe, Sofia Kostopoulou, Fabrizio Montesi, Jonas Vistrup

Lead generation refers to the identification of potential topics (the 'leads') of importance for journalists to report on. In this article we present μXL, a new lead generation tool based on a microservice architecture that includes a component of explainable AI. μXL collects and stores historical and real-time data from web sources, like Google Trends, and generates current and future leads. Leads are produced by a novel engine for hypothetical reasoning based on temporal logical rules, which can identify propositions that may hold depending on the outcomes of future events. This engine also supports additional features that are relevant for lead generation, such as user-defined predicates (allowing useful custom atomic propositions to be defined as Java functions) and negation (needed to specify and reason about leads characterized by the absence of specific properties). Our microservice architecture is designed using state-of-the-art methods and tools for API design and implementation, namely API patterns and the Jolie programming language. Thus, our development provides an additional validation of their usefulness in a new application domain (journalism). We also carry out an empirical evaluation of our tool.
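
As a toy rendering of the "hypothetical answer" idea (the rule encoding and all names are invented for illustration), the evaluator below classifies a temporal rule as definitely refuted, definitely holding, or hypothetical, that is, possibly holding depending on events that have not happened yet.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """lead(topic) holds if `predicate` is true at every step in [start, end]."""
    topic: str
    predicate: str
    start: int
    end: int

def evaluate(rule: Rule, observed: dict, now: int) -> str:
    """observed maps (predicate, time) -> bool for times <= now."""
    for t in range(rule.start, rule.end + 1):
        if t <= now and not observed.get((rule.predicate, t), False):
            return "no lead"                 # definitely refuted by the past
        # times > now are unobserved: the rule may still hold
    return "lead" if rule.end <= now else "hypothetical lead (pending future events)"

rule = Rule("election", "trending", start=0, end=3)
obs = {("trending", 0): True, ("trending", 1): True}
print(evaluate(rule, obs, now=1))   # hypothetical: depends on times 2 and 3
```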

{"title":"$$mu $$ XL: explainable lead generation with microservices and hypothetical answers","authors":"Luís Cruz-Filipe, Sofia Kostopoulou, Fabrizio Montesi, Jonas Vistrup","doi":"10.1007/s00607-024-01321-x","DOIUrl":"https://doi.org/10.1007/s00607-024-01321-x","url":null,"abstract":"<p>Lead generation refers to the identification of potential topics (the ‘leads’) of importance for journalists to report on. In this article we present <span>(mu )</span>XL, a new lead generation tool based on a microservice architecture that includes a component of explainable AI. <span>(mu )</span>XL collects and stores historical and real-time data from web sources, like Google Trends, and generates current and future leads. Leads are produced by a novel engine for hypothetical reasoning based on temporal logical rules, which can identify propositions that may hold depending on the outcomes of future events. This engine also supports additional features that are relevant for lead generation, such as user-defined predicates (allowing useful custom atomic propositions to be defined as Java functions) and negation (needed to specify and reason about leads characterized by the absence of specific properties). Our microservice architecture is designed using state-of-the-art methods and tools for API design and implementation, namely API patterns and the Jolie programming language. Thus, our development provides an additional validation of their usefulness in a new application domain (journalism). We also carry out an empirical evaluation of our tool.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141778641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Heuristic algorithm for an optimal solution of fully fuzzy transportation problem
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-21. DOI: 10.1007/s00607-024-01319-5
Nermin Kartli, Erkan Bostanci, Mehmet Serdar Guzel

Several problems involving uncertainty can be modeled with fuzzy numbers, chosen according to the type of uncertainty involved, and it is natural to express the solution to such a problem with fuzzy numbers as well. In this study, we consider the fully fuzzy transportation problem, in which all input parameters are expressed as fuzzy numbers given in parametric form. We propose a new heuristic algorithm to approximate the fuzzy optimal solution. With the proposed method, the fuzzy problem is solved by transforming it into two independent parametric problems. We first divide the interval [0, 1] into a sufficiently large number of equal subintervals, then write a linear programming problem for each partition point and solve these problems by transforming them into transportation problems. The proposed algorithm is illustrated with examples.
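
Here is a minimal sketch of the decomposition described in the abstract, under assumed parametric forms: sample r at equally spaced partition points of [0, 1], instantiate a crisp balanced transportation LP from the parametric data at each point, and solve it with an off-the-shelf LP solver. The branch functions below are illustrative problem data, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_transport(costs, supply, demand):
    """Crisp transportation problem as an LP (balanced: sum supply == sum demand)."""
    m, n = costs.shape
    A_eq, b_eq = [], []
    for i in range(m):                        # each source ships its full supply
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):                        # each sink receives its full demand
        row = np.zeros(m * n); row[j::n] = 1
        A_eq.append(row); b_eq.append(demand[j])
    res = linprog(costs.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return res.fun

# Parametric inputs: each entry is a function of r in [0, 1] (simple linear
# branches here; the actual parametric forms are problem data).
cost_of = lambda r: np.array([[4 + r, 6 + r], [5 + r, 3 + 2 * r]])
supply_of = lambda r: np.array([20 + 5 * r, 30 + 5 * r])
demand_of = lambda r: np.array([25 + 5 * r, 25 + 5 * r])

for r in np.linspace(0, 1, 5):               # partition points of [0, 1]
    z = solve_transport(cost_of(r), supply_of(r), demand_of(r))
    print(f"r = {r:.2f}: optimal cost {z:.2f}")
```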

{"title":"Heuristic algorithm for an optimal solution of fully fuzzy transportation problem","authors":"Nermin Kartli, Erkan Bostanci, Mehmet Serdar Guzel","doi":"10.1007/s00607-024-01319-5","DOIUrl":"https://doi.org/10.1007/s00607-024-01319-5","url":null,"abstract":"<p>Several problems involving uncertainties can be modeled with fuzzy numbers according to the type of these uncertainties. It is natural to express the solution to such a problem with fuzzy numbers. In this study, we consider the fully fuzzy transportation problem. All input parameters of the problem are expressed with fuzzy numbers given in the parametric form. We propose a new heuristic algorithm to approximate the fuzzy optimal solution. The fuzzy problem is solved by transforming it into two independent parametric problems with the proposed method. We first divide the interval [0, 1] into a sufficiently large number of equal intervals, then write a linear programming problem for each partition point and solve these problems by transforming them into transportation problems. The proposed algorithm is supported by examples.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141737350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient hashing technique for malicious profile detection at hypervisor environment
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-19. DOI: 10.1007/s00607-024-01325-7
Anumukonda Naga Seshu Kumar, Rajesh Kumar Yadav, Nallanthighal Srinivasa Raghava

Attack detection in cyber security systems is a complex task that requires domain-specific knowledge and cognitive intelligence to detect novel and unknown attacks in large-scale network data. This research explores how network operations and network security affect the detection of unknown attacks in network systems. This paper presents a hash-based profile matching technique for attack detection. The main objective of this work is to detect unknown attacks using a profile matching approach in hypervisors. Hypervisors are characterized by their versatile nature, since they allow the utilization of available system resources. The virtual machines (VMs) in a hypervisor do not depend on the host hardware, which is one of the advantages of hypervisors. In addition, hypervisors have direct access to hardware resources such as memory, storage, and processors. However, hypervisors are also more susceptible to security threats that attack each and every VM. A SHA3-512 hashing algorithm is used for generating hash values in the hypervisor, and the proposed model is used to verify whether a profile is malicious or benign. The performance of the hash-based profile matching technique is compared with traditional hash techniques, namely the SHA-256 and MD5 algorithms. Results show that the proposed SHA3-512 approach achieves excellent accuracy with zero false positives. Simulation results also show that the computation time required by the SHA3-512 algorithm is lower than that of the SHA-256 and MD5 algorithms. The performance analysis validates that the hash-based approach achieves reliable performance for attack detection. The effectiveness of the hashing technique was determined using three evaluation metrics, namely attack detection rate (DR), false positive rate (FPR), and computational time. Simulation results show that the SHA3-512 algorithm achieves a detection rate of 97.24% with a zero false positive rate and faster computation time compared to the SHA-256 and MD5 algorithms.
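
A minimal sketch of hash-based profile matching with SHA3-512 follows, using Python's standard hashlib; the profile fields and whitelist logic are hypothetical examples, not the paper's feature set. Known-good profiles are stored as digests, and a candidate whose digest is missing from the set is flagged.

```python
import hashlib
import json

def profile_digest(profile: dict) -> str:
    """Canonicalize a profile and hash it with SHA3-512.

    Sorting keys before serialization makes the digest independent of
    field order, so identical profiles always hash identically.
    """
    canonical = json.dumps(profile, sort_keys=True).encode("utf-8")
    return hashlib.sha3_512(canonical).hexdigest()

# Digests of known-good (benign) VM profiles collected beforehand.
benign_digests = {
    profile_digest({"vm": "web-01", "syscalls": ["open", "read"], "net": "eth0"}),
}

candidate = {"vm": "web-01", "syscalls": ["open", "read", "ptrace"], "net": "eth0"}
verdict = "benign" if profile_digest(candidate) in benign_digests else "suspicious"
print(verdict)   # suspicious: the profile no longer matches a known-good digest
```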

{"title":"Efficient hashing technique for malicious profile detection at hypervisor environment","authors":"Anumukonda Naga Seshu Kumar, Rajesh Kumar Yadav, Nallanthighal Srinivasa Raghava","doi":"10.1007/s00607-024-01325-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01325-7","url":null,"abstract":"<p>Attack detection in cyber security systems is one of the complex tasks which require domain specific knowledge and cognitive intelligence to detect novel and unknown attacks from large scale network data. This research explores how the network operations and network security affects the detection of unknown attacks in network systems. A hash based profile matching technique is presented in this paper for attack detection. The main objective of this work is to detect unknown attacks using a profile matching approach in Hypervisors. Hypervisors are characterized by their versatile nature since they allow the utilization of available system resources. The virtual machines (VMs) in the hypervisors are not dependent on the host hardware and as a result, hypervisors are considered advantageous. In addition, hypervisors have direct access to the hardware resources such as memory, storage and processors. However, hypervisors are more susceptible to the security threats which attack each and every VM. A SHA3-512 hashing algorithm used for generating hash values in hypervisor and the proposed model is used to verify whether the profile is malicious or benign. The performance of the hashbased profile matching technique is compared with traditional hash techniques namely SHA-256 and MD5 algorithm. Results show that the proposed SHA3-512 algorithm achieves a phenomenal performance in terms of phenomenal accuracy and zero false positive rates. Simulation results also show that the computation time required by Sha3-512 algorithm is lower compared to SHA-256 and MD5 algorithms. The performance analysis validates that the hash based approach achieves reliable performance for attack detection. The effectiveness of the hashing technique was determined using three different evaluation metrics namely attack DR, FPR, and computational time. Simulation results show that the existing SHA3- 512 algorithm detection rate of 97.24% with zero false positive rate and faster computational time compared to SHA 256 and MD5 algorithms.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141745628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep recommendation with iteration directional adversarial training
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-17. DOI: 10.1007/s00607-024-01326-6
Agyemang Paul, Yuxuan Wan, Zhefu Wu, Boyu Chen, Shufeng Gong

Deep neural networks are vulnerable to attacks, posing significant security concerns across various applications, particularly in computer vision. Adversarial training has demonstrated effectiveness in improving the robustness of deep learning models by incorporating perturbations into the input space during training. Recently, adversarial training has been successfully applied to deep recommender systems. In these systems, user and item embeddings are perturbed through a minimax game, with constraints on perturbation directions, to enhance the model's robustness and generalization. However, these systems still fail to defend against iterative attacks, which have shown an over 60% increase in effectiveness in the computer vision domain. Deep recommender systems may therefore be more susceptible to iterative attacks, which might lead to generalization failures. In this paper, we adapt iterative adversarial examples for deep recommender systems. Specifically, we propose a Deep Recommender with Iteration Directional Adversarial Training (DRIDAT) that combines an attention mechanism and directional adversarial training for recommendation. First, we establish a consumer–product collaborative attention mechanism to capture the different preferences of consumers for the products they are interested in, as well as the distinct preferences of different consumers for the same product. Second, we train the DRIDAT objective function using adversarial learning to minimize the impact of iterative attacks. In addition, a maximum-direction attack could push the embedding vector of input attacks towards instances with distinct labels; we mitigate this problem by implementing suitable constraints on the direction of the attack. Finally, we perform a series of evaluations on two prominent datasets. The findings show that our methodology outperforms all other methods on all metrics.
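
As a sketch of the general ingredients named here, iterative perturbation plus a direction constraint, rather than DRIDAT's exact objective, the code below applies several FGSM-style steps to an embedding and projects out any component of each step that points along a forbidden reference direction (for example, toward differently labeled instances). All names are illustrative.

```python
import numpy as np

def directional_adversarial_step(embedding, grad, reference_direction,
                                 eps: float = 0.05, steps: int = 3):
    """Iterative sign-gradient perturbation with a direction constraint.

    grad: callable returning the loss gradient at an embedding.
    reference_direction: direction the perturbation must not drift along.
    Each step ascends the loss, then its positive component along the
    reference direction is removed before it is applied.
    """
    e = embedding.copy()
    d = reference_direction / np.linalg.norm(reference_direction)
    for _ in range(steps):
        step = eps / steps * np.sign(grad(e))      # gradient ascent on the loss
        step -= max(step @ d, 0.0) * d             # forbid drift along d
        e = e + step
    return e

# toy loss: squared distance to a target vector; grad is its derivative
target = np.ones(4)
grad = lambda e: 2 * (e - target)
print(directional_adversarial_step(np.zeros(4), grad, reference_direction=np.ones(4)))
```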

{"title":"Deep recommendation with iteration directional adversarial training","authors":"Agyemang Paul, Yuxuan Wan, Zhefu Wu, Boyu Chen, Shufeng Gong","doi":"10.1007/s00607-024-01326-6","DOIUrl":"https://doi.org/10.1007/s00607-024-01326-6","url":null,"abstract":"<p>Deep neural networks are vulnerable to attacks, posing significant security concerns across various applications, particularly in computer vision. Adversarial training has demonstrated effectiveness in improving the robustness of deep learning models by incorporating perturbations into the input space during training. Recently, adversarial training has been successfully applied to deep recommender systems. In these systems, user and item embeddings are perturbed through a minimax game, with constraints on perturbation directions, to enhance the model’s robustness and generalization. However, they still fail to defend against iterative attacks, which have shown an over 60% increase in effectiveness in the computer vision domain. Deep recommender systems may therefore be more susceptible to iterative attacks, which might lead to generalization failures. In this paper, we adapt iterative examples for deep recommender systems. Specifically, we propose a Deep Recommender with Iteration Directional Adversarial Training (DRIDAT) that combines attention mechanism and directional adversarial training for recommendations. Firstly, we establish a consumer-product collaborative attention to convey consumers different preferences on their interested products and the distinct preferences of different consumers on the same product they like. Secondly, we train the DRIDAT objective function using adversarial learning to minimize the impact of iterative attack. In addition, the maximum direction attack could push the embedding vector of input attacks towards instances with distinct labels. We mitigate this problem by implementing suitable constraints on the direction of the attack. Finally, we perform a series of evaluations on two prominent datasets. The findings show that our methodology outperforms all other methods for all metrics.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Federated learning for digital healthcare: concepts, applications, frameworks, and challenges
IF 3.7, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-07-10. DOI: 10.1007/s00607-024-01317-7
D. N. Sachin, B. Annappa, Sateesh Ambesange

Various hospitals have adopted digital technologies for a range of healthcare-related applications. Due to the Covid-19 pandemic, digital transformation has taken place in many domains, especially healthcare, where it has streamlined various activities. With advances in technology, the concept of telemedicine has evolved over the years, leading to personalized healthcare and drug discovery. The use of machine learning (ML) techniques in healthcare enables professionals to make more accurate and earlier diagnoses. Training these ML models requires massive amounts of data, including patients' personal data, which must be protected from unethical use. Sharing such data to train ML models may violate data privacy. A distributed ML paradigm called federated learning (FL) allows different medical research institutions, hospitals, and healthcare devices to train ML models without sharing raw data. This survey paper overviews existing research on FL-related use cases and applications. It also discusses the state-of-the-art tools and techniques available for FL research, current shortcomings, and future challenges in applying FL to healthcare.
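
The core aggregation step that makes this possible is federated averaging: clients train locally, and only parameter updates leave the site. A minimal sketch follows (the weights are simulated and all names are hypothetical).

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine client model weights without sharing data.

    Each client's parameters are weighted by its local sample count, the
    standard FedAvg rule; raw patient data never leaves the client.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three hospitals train locally (weights simulated here) and send updates
local_models = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
samples = [100, 300, 600]
global_model = fedavg(local_models, samples)
print(global_model)   # weighted toward the larger cohorts
```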

{"title":"Federated learning for digital healthcare: concepts, applications, frameworks, and challenges","authors":"D. N. Sachin, B. Annappa, Sateesh Ambesange","doi":"10.1007/s00607-024-01317-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01317-7","url":null,"abstract":"<p>Various hospitals have adopted digital technologies in the healthcare sector for various healthcare-related applications. Due to the effect of the Covid-19 pandemic, digital transformation has taken place in many domains, especially in the healthcare domain; it has streamlined various healthcare activities. With the advancement in technology concept of telemedicine evolved over the years and led to personalized healthcare and drug discovery. The use of machine learning (ML) technique in healthcare enables healthcare professionals to make a more accurate and early diagnosis. Training these ML models requires a massive amount of data, including patients’ personal data, that need to be protected from unethical use. Sharing these data to train ML models may violate data privacy. A distributed ML paradigm called federated learning (FL) has allowed different medical research institutions, hospitals, and healthcare devices to train ML models without sharing raw data. This survey paper overviews existing research work on FL-related use cases and applications. This paper also discusses the state-of-the-art tools and techniques available for FL research, current shortcomings, and future challenges in using FL in healthcare.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0