
Journal of Grid Computing — Latest Publications

Smart City Transportation: A VANET Edge Computing Model to Minimize Latency and Delay Utilizing 5G Network
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-02-08 · DOI: 10.1007/s10723-024-09747-5
Mengqi Wang, Jiayuan Mao, Wei Zhao, Xinya Han, Mengya Li, Chuanjun Liao, Haomiao Sun, Kexin Wang

Smart cities cannot function without autonomous devices that connect wirelessly and enable cellular connectivity and processing. Edge computing bridges mobile devices and the cloud, giving mobile devices access to computing, memory, and communication capabilities via vehicular ad hoc networks (VANETs). A VANET is a time-constrained technology that must handle requests from vehicles within a short time. The best-known problems with edge computing and VANETs are latency and delay: any congestion or inefficiency in the network introduces latency that degrades overall efficiency. Latency-affected data processing in a smart city can produce erratic decision making; some data, such as traffic and congestion reports, must be handled in time, and delayed decisions can cause application failures and incorrect information processing. In this study, we created a probability-based hybrid Whale-Dragonfly Optimization (p-H-WDFOA) edge computing model for smart urban vehicle transportation that lowers the delay and latency of edge computing to address these issues. 5G localized Multi-Access Edge Computing (MEC) servers were additionally employed, significantly reducing waiting time and latency to enhance edge resources and meet the latency and Quality of Service (QoS) criteria. Compared with an experiment employing a pure cloud computing architecture, we reduced data latency by 20% and processing time by 35%. The proposed method, WDFO-VANET, also improves energy consumption and minimizes the communication costs of the VANET.
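
The abstract reports the outcome but not the optimizer's mechanics. As a rough, assumption-laden illustration of how a population-based hybrid metaheuristic can be applied to latency-aware offloading, the Python sketch below searches over task-to-MEC-server assignments with a whale-style move toward the best solution and a dragonfly-style move toward the swarm centre; the cost model, task/server parameters, and update rules are invented for the demo and are not the authors' p-H-WDFOA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem data (assumptions, not from the paper).
N_TASKS, N_SERVERS = 20, 4
task_bits = rng.uniform(1e6, 5e6, N_TASKS)          # task sizes in bits
task_cycles = task_bits * 500                        # CPU cycles per task
link_rate = rng.uniform(20e6, 60e6, N_SERVERS)       # bit/s on each server link
cpu_rate = rng.uniform(4e9, 10e9, N_SERVERS)         # cycles/s on each server


def total_latency(assign):
    """Transmission + processing delay; tasks on one server are serialized."""
    delay = 0.0
    for s in range(N_SERVERS):
        tasks = np.where(assign == s)[0]
        delay += task_bits[tasks].sum() / link_rate[s]
        delay += task_cycles[tasks].sum() / cpu_rate[s]
    return delay


def hybrid_optimize(pop_size=30, iters=200, p_whale=0.5):
    # Continuous positions in [0, N_SERVERS); rounding down gives the assignment.
    pos = rng.uniform(0, N_SERVERS - 1e-9, (pop_size, N_TASKS))
    best = min(pos, key=lambda x: total_latency(x.astype(int))).copy()
    for t in range(iters):
        shrink = 1.0 - t / iters                      # exploration decays over time
        centroid = pos.mean(axis=0)
        for i in range(pop_size):
            if rng.random() < p_whale:                # whale-style move toward the best
                pos[i] += shrink * rng.random() * (best - pos[i])
            else:                                     # dragonfly-style move toward the swarm centre
                pos[i] += rng.random() * (centroid - pos[i]) + 0.1 * rng.normal(size=N_TASKS)
            pos[i] = np.clip(pos[i], 0, N_SERVERS - 1e-9)
            if total_latency(pos[i].astype(int)) < total_latency(best.astype(int)):
                best = pos[i].copy()
    return best.astype(int), total_latency(best.astype(int))


assignment, latency = hybrid_optimize()
print("best total latency (s):", round(latency, 4))
```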

Citations: 0
Marine Goal Optimizer Tuned Deep BiLSTM-Based Self-Configuring Intrusion Detection in Cloud
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-02-05 · DOI: 10.1007/s10723-023-09728-0
S. Bajpai, A. Patankar
{"title":"Marine Goal Optimizer Tuned Deep BiLSTM-Based Self-Configuring Intrusion Detection in Cloud","authors":"S. Bajpai, A. Patankar","doi":"10.1007/s10723-023-09728-0","DOIUrl":"https://doi.org/10.1007/s10723-023-09728-0","url":null,"abstract":"","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2024-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139683142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
E-Commerce Logistics and Supply Chain Network Optimization for Cross-Border
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-02-02 · DOI: 10.1007/s10723-023-09737-z
Wenxia Ye

E-commerce is a growing industry that primarily relies on websites to provide services and products to businesses and customers. As a new form of international trade, cross-border e-commerce offers numerous benefits, including increased accessibility. Even though cross-border e-commerce has a bright future, managing the global supply chain is crucial to surviving competitive pressure and growing steadily. Traditional purchase-volume forecasting uses time-series data and a straightforward prediction methodology. Numerous customer consumption habits, including the number of products or services, product collections, and taxpayer subsidies, influence the platform's sales quantity. The use of the e-commerce (EC) supply chain has expanded significantly in the past few years because of the economy's recent rapid growth. The proposed method develops a Short-Term Demand-based Deep Neural Network and Cold Supply Chain Optimization method for predicting commodity purchase volume. The deep neural network technique provides a cold supply chain demand forecasting framework centred on multilayer Bayesian networks (BNN) to forecast the short-term demand for e-commerce goods, and the cold supply chain (CS) optimisation method determines the optimised management inventory. The research findings demonstrate that this study considers various influencing factors and chooses an appropriate forecasting technique. The proposed method achieves 96.35% accuracy, 97% precision, and 94.89% recall.
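
The forecasting pipeline itself is only summarised; as a minimal sketch of short-term purchase-volume forecasting on sliding windows, the code below trains a small feed-forward regressor on a synthetic daily series. The window length, network shape, and synthetic data are assumptions and do not reproduce the paper's multilayer Bayesian-network framework.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic daily purchase volumes with weekly seasonality (assumption for the demo).
days = np.arange(365)
volume = 1000 + 200 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 50, days.size)

WINDOW = 14  # use the previous two weeks to predict the next day

# Build (window -> next-day) supervised pairs from the series.
X = np.stack([volume[i:i + WINDOW] for i in range(len(volume) - WINDOW)])
y = volume[WINDOW:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"hold-out MAPE: {mape:.2f}%")
```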

Citations: 0
A Combined Approach of PUF and Physiological Data for Mutual Authentication and Key Agreement in WMSN
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-02-02 · DOI: 10.1007/s10723-023-09731-5
Shanvendra Rai, Rituparna Paul, Subhasish Banerjee, Preetisudha Meher, Gulab Sah

A Wireless Medical Sensor Network (WMSN) is a kind of ad hoc network used in the health sector to continuously monitor patients' health conditions and provide instant medical services over a distance. This network facilitates the transmission of real-time patient data, sensed by resource-constrained biosensors, to the end user through an open communication channel. Thus, any modification or alteration of such sensed physiological data leads to a wrong diagnosis, which may put the patient's life in danger. Therefore, among the many challenges in WMSNs, security is the most essential requirement to address. Hence, to maintain the security and privacy of sensitive medical data, this article proposes a lightweight mutual authentication and key agreement (AKA) scheme using sensor nodes enabled with Physical Unclonable Functions (PUFs). Moreover, to make the WMSN more secure and reliable, physiological data such as the patient's electrocardiogram (ECG) are also considered. To establish its accuracy and security, the scheme is validated with the Real or Random (RoR) model and further confirmed through simulation using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. A thorough examination encompassing security, performance, and a comparative assessment with existing related schemes illustrates that the proposed scheme not only exhibits superior resistance to well-known attacks but also upholds a cost-effective strategy at the sensor node, specifically a reduction of 35.71% in computational cost and 49.12% in communication cost.
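
To make the general shape of such a scheme concrete, the toy exchange below combines a simulated PUF challenge-response with an ECG-derived value to authenticate a sensor and derive a session key. The simulated PUF, message flow, and hash constructions are illustrative assumptions only and are not the protocol proposed in the paper.

```python
import hashlib
import hmac
import secrets


def sha256(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()


class SimulatedPUF:
    """Stand-in for a hardware PUF: a keyed function only the sensor chip can evaluate."""
    def __init__(self) -> None:
        self._chip_secret = secrets.token_bytes(32)

    def response(self, challenge: bytes) -> bytes:
        return hmac.new(self._chip_secret, challenge, hashlib.sha256).digest()


# Enrollment: the gateway stores one challenge-response pair (CRP) for the sensor.
sensor_puf = SimulatedPUF()
challenge = secrets.token_bytes(16)
stored_response = sensor_puf.response(challenge)       # kept by the gateway

# --- Authentication round ---
gateway_nonce = secrets.token_bytes(16)
sensor_nonce = secrets.token_bytes(16)
ecg_feature = b"\x12\x7f\x03\x44"                       # quantized ECG feature (illustrative value)

# Sensor side: recompute the PUF response and bind the fresh ECG-derived value.
sensor_proof = sha256(sensor_puf.response(challenge), gateway_nonce, sensor_nonce, ecg_feature)

# Gateway side: verify the proof using the stored CRP and the same ECG feature,
# then both sides derive the session key from the shared material.
expected = sha256(stored_response, gateway_nonce, sensor_nonce, ecg_feature)
assert hmac.compare_digest(sensor_proof, expected), "authentication failed"

session_key = sha256(stored_response, ecg_feature, gateway_nonce, sensor_nonce)
print("mutual secret established, session key:", session_key.hex()[:16], "...")
```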

Citations: 0
A Federated Deep Reinforcement Learning-based Low-power Caching Strategy for Cloud-edge Collaboration
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-29 · DOI: 10.1007/s10723-023-09730-6
Xinyu Zhang, Zhigang Hu, Yang Liang, Hui Xiao, Aikun Xu, Meiguang Zheng, Chuan Sun

In the era of ubiquitous network devices, an exponential increase in content requests from user equipment (UE) calls for optimized caching strategies within cloud-edge integration. This approach is critical to handling large numbers of requests. To enhance caching efficiency, federated deep reinforcement learning (FDRL) is widely used to adjust caching policies. Nonetheless, for improved adaptability in dynamic scenarios, FDRL generally demands extended, online deep training, incurring a notable energy overhead compared with rule-based approaches. With the aim of balancing caching efficiency and training energy expenditure, we integrate a content request latency model, a deep reinforcement learning model based on Markov decision processes (MDP), and a two-stage training energy consumption model. Together, these components define a new average delay and training energy gain (ADTEG) challenge. To address this challenge, we put forth an innovative dynamic federated optimization strategy. This approach refines the pre-training phase through cluster-based strategies and parameter transfer methodologies, while the online training phase is improved through a dynamic federated framework and an adaptive local iteration count. The experimental findings affirm that our proposed methodology reduces the training energy outlay while maintaining caching efficacy.
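
As a minimal sketch of the federated skeleton with an adaptive local iteration count (each client stops its local training once improvement stalls), the code below runs FedAvg over toy linear-regression clients; the data, stopping rule, and hyper-parameters are assumptions and stand in for the paper's DRL caching agents.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 8
N_CLIENTS = 5

# Each edge client holds its own synthetic (x, y) data — an illustrative stand-in
# for local caching experience, not the paper's MDP.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(N_CLIENTS):
    X = rng.normal(size=(200, DIM))
    y = X @ true_w + rng.normal(0, 0.1, 200)
    clients.append((X, y))


def local_train(w, X, y, max_steps, lr=0.01, tol=1e-4):
    """Gradient steps on the local loss; stop early once improvement stalls
    (the 'adaptive local iteration count' idea in miniature)."""
    prev_loss = np.inf
    for step in range(max_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
        loss = np.mean((X @ w - y) ** 2)
        if prev_loss - loss < tol:
            break
        prev_loss = loss
    return w, step + 1


global_w = np.zeros(DIM)
for rnd in range(10):
    updates, steps_used = [], []
    for X, y in clients:
        w_i, steps = local_train(global_w.copy(), X, y, max_steps=50)
        updates.append(w_i)
        steps_used.append(steps)
    global_w = np.mean(updates, axis=0)          # FedAvg aggregation at the server
    print(f"round {rnd}: local steps {steps_used}, "
          f"global error {np.linalg.norm(global_w - true_w):.4f}")
```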

Citations: 0
Automated Pallet Racking Examination in Edge Platform Based on MobileNetV2: Towards Smart Manufacturing
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-27 · DOI: 10.1007/s10723-023-09738-y


Pallet racking is a critical element of the production, storage, and distribution networks that businesses worldwide use. Ongoing inspections and maintenance are required to ensure the workforce's safety and the protection of stock. Currently, certified inspectors examine racks manually, which causes operational delays, service charges, and missed damage due to human error. As businesses move toward smart manufacturing, we describe an automated racking assessment method utilizing an integrated framework, MobileNetV2-You Only Look Once (YOLOv5). The proposed method examines the automated pallet racking system and detects multiple types of damage on edge platforms during pallet racking. It employs YOLOv5 in conjunction with the Block Development Mechanism (BDM), which detects defective pallet racks. We propose a device that attaches to the movable cage of the forklift truck and provides adequate coverage of the neighboring racks. We also classify any damage as significant or minor so that floor supervisors can decide immediately whether a replacement is necessary in each circumstance. Instead of annual or quarterly racking inspections, this gives the racking industry a way to monitor racking continuously, creating a more secure workplace environment. Our suggested method generates a classifier tailored for installation on edge devices used by forklift operators.
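
As an illustration of the classification half of such a pipeline (grading damage as minor or major), the sketch below fine-tunes the torchvision MobileNetV2 head on an assumed two-class image folder and exports it for edge deployment; the dataset layout, hyper-parameters, and export step are assumptions, and the YOLOv5/BDM detection stage is not shown.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Two illustrative classes; the folder layout ("racking/train/{minor,major}/*.jpg")
# is an assumption for this sketch, not the paper's dataset.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("racking/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# MobileNetV2 backbone with a small 2-class head (minor vs. major damage).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                       # freeze the backbone for edge-friendly training
model.classifier[1] = nn.Linear(model.last_channel, 2)

opt = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

# Trace the model for a self-contained artifact that an edge runtime can load.
traced = torch.jit.trace(model.eval(), torch.randn(1, 3, 224, 224))
traced.save("racking_classifier.pt")
```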

Citations: 0
Hybridized Black Widow-Honey Badger Optimization: Swarm Intelligence Strategy for Node Localization Scheme in WSN
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-26 · DOI: 10.1007/s10723-024-09740-y
K Johny Elma, Praveena Rachel Kamala S, Saraswathi T

The evolutionary growth of Wireless Sensor Networks (WSNs) has enabled a wide range of applications. To deploy a WSN over a larger area for sensing the environment, the accurate location of each node is a prerequisite, and owing to this requirement, localization has become an essential part of WSN deployments. Using various localization techniques, location information is obtained for unknown nodes. Recently, node localization has employed standard bio-inspired algorithms to sustain the fast convergence ability of WSN applications. Thus, this paper develops a new hybrid optimization algorithm for solving the node localization problem among the unknown nodes of a WSN. The hybrid scheme combines two efficient heuristic strategies, Black Widow Optimization (BWO) and the Honey Badger Algorithm (HBA), and is named Hybridized Black Widow-Honey Badger Optimization (HBW-HBO). The main objective of the developed heuristic-based node localization framework is to minimize the localization error between the actual and detected locations of all nodes in the WSN. To validate the developed scheme, it is compared with different existing optimization strategies using various measures. The experimental analysis shows that the developed scheme delivers more robust and consistent node localization performance in WSNs than the comparative algorithms.
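
The abstract does not give the update equations, so the sketch below only illustrates the objective such a scheme minimizes: the squared residual between noisy anchor-range measurements and the ranges implied by a candidate position, searched with a generic shrink-toward-best swarm loop. The field size, anchor layout, noise level, and update rule are assumptions, not HBW-HBO.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 100 m x 100 m field with four anchor nodes (assumed values).
anchors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
true_node = np.array([37.0, 62.0])

# Noisy range measurements from the unknown node to each anchor (e.g. RSSI-derived).
ranges = np.linalg.norm(anchors - true_node, axis=1) + rng.normal(0, 1.0, len(anchors))


def localization_error(pos):
    """Objective minimized by the metaheuristic: squared range residuals."""
    return np.sum((np.linalg.norm(anchors - pos, axis=1) - ranges) ** 2)


def swarm_localize(pop_size=40, iters=300):
    pop = rng.uniform(0, 100, (pop_size, 2))
    best = min(pop, key=localization_error).copy()
    for t in range(iters):
        step = 10.0 * (1 - t / iters)                 # shrinking search radius
        for i in range(pop_size):
            # Move toward the best-known position with a random perturbation
            # (a stand-in for the black-widow / honey-badger update rules).
            cand = pop[i] + rng.random() * (best - pop[i]) + rng.normal(0, step, 2)
            cand = np.clip(cand, 0, 100)
            if localization_error(cand) < localization_error(pop[i]):
                pop[i] = cand
            if localization_error(pop[i]) < localization_error(best):
                best = pop[i].copy()
    return best


est = swarm_localize()
print("estimated:", est.round(2), " true:", true_node,
      " error (m):", round(float(np.linalg.norm(est - true_node)), 2))
```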

Citations: 0
DRL-based Task and Computational Offloading for Internet of Vehicles in Decentralized Computing
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-25 · DOI: 10.1007/s10723-023-09729-z
Ziyang Zhang, Keyu Gu, Zijie Xu

This paper focuses on the problem of computation offloading in a high-mobility Internet of Vehicles (IoV) environment. The goal is to address the challenges related to latency, energy consumption, and payment cost requirements. The approach considers both moving and parked vehicles as fog nodes, which can assist in offloading computational tasks. However, as the number of vehicles increases, the action space for each agent grows exponentially, posing a challenge for decentralised decision-making. The dynamic nature of vehicular mobility further complicates the network dynamics, requiring joint cooperative behaviour from the learning agents to achieve convergence. The traditional deep reinforcement learning (DRL) approach for offloading in IoVs treats each agent as an independent learner and ignores the actions of other agents during the training process. This paper utilises a cooperative three-layer decentralised architecture called Vehicle-Assisted Multi-Access Edge Computing (VMEC) to overcome this limitation. The VMEC network consists of three layers: the fog, cloudlet, and cloud layers. In the fog layer, vehicles within associated Roadside Units (RSUs) and neighbouring RSUs participate as fog nodes; the middle layer comprises Mobile Edge Computing (MEC) servers, while the top layer represents the cloud infrastructure. To address the dynamic task offloading problem in VMEC, the paper proposes a Decentralized Framework of Task and Computational Offloading (DFTCO), which utilises the strengths of MADRL and NOMA techniques. This approach considers multiple agents making offloading decisions simultaneously and aims to find the optimal matching between tasks and available resources.
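
To make the decision problem concrete, the sketch below encodes a single-agent slice of such an offloading MDP: the state carries task size, required cycles, and channel quality, the action picks an offloading target in the three-layer hierarchy, and the reward penalises latency plus payment cost. All rates, prices, and the reward weighting are illustrative assumptions rather than the DFTCO formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Offloading targets in the three-layer VMEC hierarchy (illustrative encoding).
ACTIONS = ["local", "fog_vehicle", "rsu_mec", "cloud"]
CPU_RATE = {"local": 1e9, "fog_vehicle": 3e9, "rsu_mec": 8e9, "cloud": 20e9}      # cycles/s (assumed)
LINK_RATE = {"local": None, "fog_vehicle": 30e6, "rsu_mec": 50e6, "cloud": 10e6}  # bit/s (assumed)
UNIT_PRICE = {"local": 0.0, "fog_vehicle": 0.1, "rsu_mec": 0.3, "cloud": 0.5}     # payment per Gcycle (assumed)


class OffloadEnv:
    """Single-agent slice of the offloading MDP: one task arrives per step."""

    def reset(self):
        return self._new_state()

    def _new_state(self):
        # State: task size (bits), required cycles, and current channel quality factor.
        self.task_bits = rng.uniform(0.5e6, 4e6)
        self.task_cycles = self.task_bits * 600
        self.channel = rng.uniform(0.5, 1.0)
        return np.array([self.task_bits, self.task_cycles, self.channel])

    def step(self, action_idx):
        target = ACTIONS[action_idx]
        proc_delay = self.task_cycles / CPU_RATE[target]
        tx_delay = 0.0 if LINK_RATE[target] is None else self.task_bits / (LINK_RATE[target] * self.channel)
        cost = UNIT_PRICE[target] * self.task_cycles / 1e9
        # Reward trades latency off against payment, matching the stated objectives.
        reward = -(proc_delay + tx_delay) - 0.1 * cost
        return self._new_state(), reward


env = OffloadEnv()
state = env.reset()
for _ in range(3):
    action = rng.integers(len(ACTIONS))       # a trained DRL policy would replace this random choice
    state, reward = env.step(action)
    print(ACTIONS[action], "reward:", round(reward, 4))
```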

Citations: 0
Joint Autoscaling of Containers and Virtual Machines for Cost Optimization in Container Clusters
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-23 · DOI: 10.1007/s10723-023-09732-4


Autoscaling enables container cluster orchestrators to automatically adjust computational resources, such as containers and Virtual Machines (VMs), to handle fluctuating workloads effectively. This adaptation can involve modifying the amount of resources (horizontal scaling) or adjusting their computational capacity (vertical scaling). The motivation for our work stems from the limitations of previous autoscaling approaches, which are either partial (scaling containers or VMs, but not both) or excessively complex to be used in real systems. This complexity arises from their use of models with a large number of variables and the addressing of two simultaneous challenges: achieving the optimal deployment for a single scheduling window and managing the transition between successive scheduling windows. We propose an Integer Linear Programming (ILP) model to address the challenge of autoscaling containers and VMs jointly, both horizontally and vertically, to minimize deployment costs. This model is designed to be used with predictive autoscalers and to be solved in a reasonable time, even for large clusters. To this end, improvements and reasonable simplifications with respect to previous models have been carried out to drastically reduce the size of the resource allocation problem. Furthermore, the proposed model provides an enhanced representation of system performance in comparison to previous approaches. A tool called Conlloovia has been developed to implement this model. To evaluate its performance, we have conducted a comprehensive assessment, comparing it with two heuristic allocators on problems of different sizes. Our findings indicate that Conlloovia consistently demonstrates lower deployment costs in a significant number of cases. Conlloovia has also been evaluated with a real application, using synthetic and real workload traces as well as different scheduling windows, with deployment costs approximately 20% lower than those of heuristic allocators.
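
As a much-reduced illustration of the kind of ILP involved, the sketch below uses PuLP to choose VM counts and container replicas that cover a predicted request rate at minimum rental cost; the VM families, container footprint, and single-window objective are assumptions and omit the transition handling and performance model of the actual Conlloovia formulation.

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

# Illustrative inputs (assumptions): two VM families and one containerized app.
vm_types = {          # name: (cores, $/hour)
    "small": (2, 0.05),
    "large": (8, 0.17),
}
CONTAINER_CORES = 0.5          # cores reserved per container replica
CONTAINER_RPS = 120            # requests/s one replica can serve
DEMAND_RPS = 2600              # predicted workload for the next scheduling window

prob = LpProblem("joint_autoscaling", LpMinimize)

vms = {t: LpVariable(f"vms_{t}", lowBound=0, cat="Integer") for t in vm_types}
replicas = LpVariable("replicas", lowBound=0, cat="Integer")

# Objective: VM rental cost for the window (replicas are free once VMs are paid for).
prob += lpSum(vm_types[t][1] * vms[t] for t in vm_types)

# Enough replicas to serve the predicted demand.
prob += CONTAINER_RPS * replicas >= DEMAND_RPS

# Replicas must fit into the cores of the rented VMs.
prob += CONTAINER_CORES * replicas <= lpSum(vm_types[t][0] * vms[t] for t in vm_types)

prob.solve()
print("replicas:", int(value(replicas)))
for t in vm_types:
    print(f"{t} VMs:", int(value(vms[t])))
print("hourly cost: $", round(value(prob.objective), 3))
```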

Citations: 0
Intrusion Detection using Federated Attention Neural Network for Edge Enabled Internet of Things
IF 5.5 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2024-01-20 · DOI: 10.1007/s10723-023-09725-3
Xiedong Song, Qinmin Ma

Edge nodes, which are expected to grow into a multi-billion-dollar market, are essential for detecting a variety of cyber threats at Internet-of-Things endpoints. Adopting current network intrusion detection systems built on deep learning models (DLM), such as FedACNN, is constrained by the resource limitations of this network-equipment layer. We solve this issue by creating a unique, lightweight, fast, and accurate edge detection model that uses a DLM to identify distributed denial-of-service attacks on edge nodes. Our approach can generate real results at a relevant pace even with limited resources, such as low power, memory, and processing capabilities. The Federated Convolution Neural Network (FedACNN) deep learning method uses attention mechanisms to minimise communication delay. The developed model uses a recent cybersecurity dataset (UNSW 2015) deployed on an edge node simulated by a Raspberry Pi. Our findings show that, compared with traditional DLM methodologies, our model retains a high accuracy rate of about 99% even with reduced CPU and memory use. It is also about three times smaller in volume than the most advanced model while requiring far less testing time.
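
As a rough sketch of an attention-augmented CNN detector of the kind described (though not the FedACNN architecture itself), the model below applies a 1-D convolutional stack with a feature-attention pooling step to flow-feature vectors; the feature count, layer sizes, and attention form are assumptions, and the federated averaging of its weights across edge nodes is omitted.

```python
import torch
import torch.nn as nn

N_FEATURES = 42        # e.g. the number of flow features in UNSW-NB15-style records (assumption)


class AttentiveCNN(nn.Module):
    """Small 1-D CNN with a feature-attention gate; a sketch in the spirit of an
    attention-augmented CNN detector, not the paper's FedACNN architecture."""

    def __init__(self, n_features: int, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Attention weights over the feature positions, derived from the conv maps.
        self.attn = nn.Sequential(nn.Conv1d(32, 1, kernel_size=1), nn.Softmax(dim=-1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, n_features)
        h = self.conv(x.unsqueeze(1))          # (batch, 32, n_features)
        w = self.attn(h)                       # (batch, 1, n_features), sums to 1
        pooled = (h * w).sum(dim=-1)           # attention-weighted pooling -> (batch, 32)
        return self.head(pooled)


model = AttentiveCNN(N_FEATURES)
dummy_flows = torch.randn(8, N_FEATURES)       # stand-in for preprocessed flow records
print(model(dummy_flows).shape)                # torch.Size([8, 2])
```

In a federated deployment, copies of this model would be trained on each edge node's local traffic and their weights periodically averaged at a coordinator, which is the part the sketch leaves out.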

Citations: 0