In the current era of information technology, blockchain is widely used in various fields, and monitoring the security and status of blockchain systems is of great concern. Online anomaly detection on real-time stream data plays a vital role in monitoring strategies that find abnormal events and states of a blockchain system. However, owing to the high requirements of real-time, online scenarios, online anomaly detection faces many problems, such as limited training data, distribution drift, and limited update frequency. In this paper, we propose an adaptive stream outlier detection method (ASOD) to overcome these limitations. It first designs a K-nearest-neighbor Gaussian mixture model (KNN-GMM) and utilizes an online learning strategy, so it is suitable for online scenarios and does not rely on large training data. The K-nearest-neighbor optimization limits the influence of new data locally rather than globally, improving stability. ASOD then applies a mechanism for dynamic maintenance of Gaussian components and a strategy of dynamic context control to self-adapt to distribution drift. Finally, ASOD adopts a dimensionless distance metric based on the Mahalanobis distance and proposes an automatic threshold method to accomplish anomaly detection. In addition, the KNN-GMM provides a life cycle and an anomaly index for continuous tracking and analysis, which facilitates cause analysis, further interpretation, and traceability. The experimental results show that ASOD achieves near-optimal F1 and recall on the NAB dataset, with improvements of 6% and 20.3% over the average, compared to baselines with sufficient training data. ASOD has the lowest F1 variance among the five best methods, indicating that it is effective and stable for online anomaly detection on stream data.
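The scoring step the abstract describes, a dimensionless Mahalanobis metric against Gaussian mixture components, can be sketched as follows. This is a simplified illustration, not the authors' implementation; the toy components, data, and the "distance to nearest component" scoring rule are assumptions:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Dimensionless distance of point x from one Gaussian component."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def anomaly_score(x, components):
    """Score a point by its distance to the closest Gaussian component."""
    return min(mahalanobis(x, mean, cov) for mean, cov in components)

# Two toy components; a point far from both gets a high score.
components = [(np.zeros(2), np.eye(2)), (np.array([5.0, 5.0]), np.eye(2))]
print(anomaly_score(np.array([0.1, -0.2]), components))    # small: normal
print(anomaly_score(np.array([10.0, -10.0]), components))  # large: outlier
```

An automatic threshold, as in the paper, would then be derived from the stream of these scores rather than fixed by hand.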
Title: "ASOD: an adaptive stream outlier detection method using online strategy". Authors: Zhichao Hu, Xiangzhan Yu, Likun Liu, Yu Zhang, Haining Yu. Journal of Cloud Computing, 2024-07-05. DOI: 10.1186/s13677-024-00682-0
Pub Date: 2024-06-28  DOI: 10.1186/s13677-024-00675-z
Chaoyang Zhu
Computational intelligence techniques have emerged as a promising approach for diagnosing various medical conditions, including memory impairment. Increased abuse of psychoactive drugs poses a global public health burden, as repeated exposure to these substances can cause neurodegeneration, premature aging, and memory impairment. Many studies in the literature relied on statistical analyses, but these remained inaccurate. Others relied on physical data and did not consider the time factor, until Artificial Intelligence (AI) techniques emerged that proved their worth in this diagnosis. A variable deep neural network method was used to adapt to intermediate results and re-process them when the result is undesirable. Computational intelligence was used in this study to classify brain images from MRI or CT scans, to show the effect of the dose ratio on health over the treatment time, and to diagnose memory impairment in users of psychoactive substances. Understanding the neurotoxic profiles of psychoactive substances and the underlying pathways is hypothesized to be of great importance in improving the risk assessment and treatment of substance use disorders. The results demonstrated the worth of the proposed method in terms of recognition accuracy as well as diagnostic capability. Diagnostic efficiency increases with the number of hidden layers in the neural network and with control of the weights and variables that govern the deep learning algorithm. Thus, we conclude that good classification in this field may enable early detection of memory impairment and even save lives.
Title: "Computational intelligence-based classification system for the diagnosis of memory impairment in psychoactive substance users". Journal of Cloud Computing, 2024-06-28. DOI: 10.1186/s13677-024-00675-z
Pub Date: 2024-06-26  DOI: 10.1186/s13677-024-00681-1
Jiageng Yang, Chuanyi Liu, Binxing Fang
Coverage-guided fuzzing is one of the most popular approaches to detecting bugs in programs. Existing work has shown that coverage metrics are a crucial factor in guiding fuzzing exploration of targets: a fine-grained coverage metric helps fuzzing detect more bugs and trigger more execution states. Cloud-native applications written in Golang play an important role in the modern computing paradigm. However, existing fuzzers for Golang still employ coarse-grained block coverage metrics, and there is no fuzzer specifically for cloud-native applications, which hinders bug detection in them. Using fine-grained coverage metrics introduces more seeds and can even lead to seed explosion, especially in large targets such as cloud-native applications. We therefore employ an accurate edge coverage metric in a fuzzer for Golang, which achieves finer test granularity and more accurate coverage information than block coverage metrics. To mitigate the seed explosion caused by fine-grained coverage metrics and large target sizes, we propose smart seed selection and adaptive task scheduling algorithms based on a variant of the classical adversarial multi-armed bandit (AMAB) algorithm. Extensive evaluation of our prototype on 16 targets in real-world cloud-native infrastructures shows that our approach detects 233% more bugs than go-fuzz, achieving an average coverage improvement of 100.7%. Our approach effectively mitigates seed explosion, reducing the number of seeds generated by 41% while introducing only 14% performance overhead.
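The adversarial multi-armed bandit behind the scheduling strategy can be illustrated with a minimal EXP3 sketch. The paper uses a variant of AMAB, so everything here, including the toy reward model (reward 1 when a seed yields new coverage), is an assumption for illustration only:

```python
import math
import random

def exp3_select(weights, gamma):
    """EXP3 arm selection: mix weight-proportional and uniform exploration."""
    total = sum(weights)
    probs = [(1 - gamma) * w / total + gamma / len(weights) for w in weights]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return len(weights) - 1, probs

def exp3_update(weights, probs, arm, reward, gamma):
    """Importance-weighted reward update for the chosen arm (seed)."""
    weights[arm] *= math.exp(gamma * (reward / probs[arm]) / len(weights))

# Toy run: seed 1 is the only one that "finds new edges".
random.seed(0)
weights, gamma = [1.0, 1.0, 1.0], 0.1
for _ in range(200):
    arm, probs = exp3_select(weights, gamma)
    reward = 1.0 if arm == 1 else 0.0
    exp3_update(weights, probs, arm, reward, gamma)
print(weights)  # the rewarded seed accumulates the largest weight
```

In a fuzzer, each arm would be a seed (or seed cluster) and the reward a function of newly covered edges, so scheduling effort concentrates on productive seeds without starving the rest.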
Title: "Adaptive scheduling-based fine-grained greybox fuzzing for cloud-native applications". Journal of Cloud Computing, 2024-06-26. DOI: 10.1186/s13677-024-00681-1
Pub Date: 2024-06-24  DOI: 10.1186/s13677-024-00680-2
Xiao Zheng, Muhammad Tahir, Khursheed Aurangzeb, Muhammad Shahid Anwar, Muhammad Aamir, Ahmad Farzan, Rizwan Ullah
Mobile edge computing (MEC) reduces the latency for end users to access applications deployed at the edge by offloading tasks to the edge. With the popularity of e-commerce and the expansion of business scale, server load continues to increase, and energy efficiency issues become more prominent. Computation offloading has received widespread attention as a technology that effectively reduces server load. However, improving energy efficiency while meeting computing requirements is an important challenge for computation offloading. To solve this problem, we investigate MEC supporting non-orthogonal multiple access (NOMA), which increases the efficiency of multi-access wireless transmission. Computation is divided into separate sub-tasks that are either handled by e-commerce terminals or transferred to the edge by reutilizing radio resources, and we put forward a multi-dimensional Group Switching Matching Algorithm Based on Resource Unit Allocation (GSM-RUA). To this end, we first formulate the task allocation problem as a long-term stochastic optimization problem, which we then convert, using Lyapunov optimization, into three short-term deterministic sub-problems: radio resource allocation on a large timescale, and computation resource allocation and splitting on a small timescale. The first sub-problem can be remodeled as a one-to-many matching problem and solved with a block-shift-matching-based radio resource allocation method. The latter two sub-problems are transformed into two continuous convex problems by relaxation and then solved easily. Simulations show that GSM-RUA is superior to state-of-the-art resource management algorithms in terms of energy consumption, efficiency, and complexity in e-commerce scenarios.
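The Lyapunov conversion mentioned above rests on the standard drift-plus-penalty idea: each slot, choose the action minimizing V*penalty - Q*service, so the long-term stochastic problem decomposes into per-slot deterministic ones while the backlog queue stays bounded. A toy sketch (the action set, arrival model, and trade-off parameter V are assumptions, not the paper's formulation):

```python
import random

def choose_action(Q, actions, V):
    """Drift-plus-penalty rule: minimize V*energy - Q*service this slot."""
    return min(actions, key=lambda a: V * a["energy"] - Q * a["service"])

# Toy actions: offload little (cheap, slow) vs. offload much (costly, fast).
actions = [{"service": 1.0, "energy": 0.2},
           {"service": 3.0, "energy": 1.0}]

random.seed(0)
Q, V = 0.0, 2.0  # Q is the task backlog queue; V weights energy savings
for _ in range(1000):
    a = choose_action(Q, actions, V)
    arrival = random.uniform(0.5, 2.5)          # mean arrival rate 1.5
    Q = max(Q + arrival - a["service"], 0.0)    # queue dynamics
print(Q < 10.0)  # the backlog stays bounded under the policy
```

Raising V saves more energy at the cost of a longer (but still bounded) queue, which is exactly the O(1/V) vs. O(V) trade-off Lyapunov optimization provides.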
Title: "Non-orthogonal multiple access-based MEC for energy-efficient task offloading in e-commerce systems". Journal of Cloud Computing, 2024-06-24. DOI: 10.1186/s13677-024-00680-2
Electronic health record (EHR) cloud systems, as a primary tool driving the informatization of medical data, have positively impacted both doctors and patients by providing accurate and complete patient information. However, ensuring the security of EHR cloud systems remains a critical issue. Some patients require regular remote medical services, and controlling access to medical data involving patient privacy during specific times is essential. Timed-release encryption (TRE) enables the sender to preset a future time T at which the data can be decrypted and accessed; it is a cryptographic primitive with time-dependent properties. Currently, mainstream TRE schemes are based on non-interactive, single-time-server methods. However, if the single time server is attacked or corrupted, the security of TRE applications is directly threatened. Although some schemes "distribute" the single time server into multiple ones, they still cannot withstand the single-point-of-failure problem. To address this issue, we propose a multiple-time-server TRE scheme based on Shamir secret sharing, and a variant derived from it. In our schemes, the data receiver does not need to interact with the time servers; they only need to obtain a number of time trapdoors meeting the preset threshold for decryption. This ensures the identity privacy of the data sender and tolerates downtime or other failures of some time servers, significantly improving TRE reliability. Security analysis indicates that our schemes provide data confidentiality, verifiability, resistance to premature decryption, and robust decryption with multiple time trapdoors, making them more practical. Efficiency analysis indicates that although our schemes have slightly higher computational costs than the most efficient existing TRE schemes, the differences are insignificant from a practical application perspective.
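The Shamir secret sharing primitive the scheme builds on can be sketched directly: a degree-(t-1) polynomial hides the secret in its constant term, and any t shares reconstruct it by Lagrange interpolation at x=0. A toy illustration only; the field size and parameters are assumptions, and the paper's time-trapdoor protocol layers much more on top:

```python
import random

P = 2_147_483_647  # toy prime modulus, far too small for real use

def make_shares(secret, threshold, n):
    """Split secret into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term (secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

random.seed(1)
shares = make_shares(123456789, threshold=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice
```

In the paper's setting the shared value is a time trapdoor rather than the EHR data itself, so the receiver can decrypt once enough time servers have released their shares, even if some servers are down.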
Title: "Multiple time servers timed-release encryption based on Shamir secret sharing for EHR cloud system". Authors: Ke Yuan, Ziwei Cheng, Keyan Chen, Bozhen Wang, Junyang Sun, Sufang Zhou, Chunfu Jia. Journal of Cloud Computing, 2024-06-12. DOI: 10.1186/s13677-024-00676-y
Pub Date: 2024-06-04  DOI: 10.1186/s13677-024-00677-x
Hongxia He, Xi Li, Peng Chen, Juan Chen, Ming Liu, Lei Wu
The cloud environment is a virtual, online, and distributed computing environment that provides users with large-scale services, and cloud monitoring plays an integral role in protecting its infrastructures. Cloud monitoring systems need to closely monitor various KPIs of cloud resources to accurately detect anomalies. However, due to the complexity and highly dynamic nature of the cloud environment, anomaly detection for KPIs with varied patterns and data quality is a huge challenge, especially for massive unlabeled data. It is also difficult to improve the accuracy of existing anomaly detection methods. To solve these problems, we propose a novel Dynamic Graph Transformer based Parallel Framework (DGT-PF) for efficiently detecting system anomalies in cloud infrastructures, which utilizes a Transformer with an anomaly attention mechanism and a Graph Neural Network (GNN) to learn the spatio-temporal features of KPIs, improving the accuracy and timeliness of anomaly detection. Specifically, we propose an effective dynamic relationship embedding strategy to dynamically learn spatio-temporal features and adaptively generate adjacency matrices, and soft-cluster each GNN layer through a DiffPool module. In addition, we use a nonlinear neural network model and an AR-MLP model in parallel to obtain better detection accuracy and performance. Experiments show that the DGT-PF framework achieves the highest F1-score on 5 public datasets, with an average improvement of 21.6% over 11 anomaly detection models.
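The "adaptively generate adjacency matrices" step can be illustrated with a common embedding-based construction, a row-wise softmax over ReLU(E1 E2^T) as popularized by adaptive-graph models. This is a sketch of that general technique under assumed shapes, not the paper's exact embedding strategy:

```python
import numpy as np

def adaptive_adjacency(emb_src, emb_dst):
    """Build a learned, row-stochastic adjacency from node embeddings."""
    logits = np.maximum(emb_src @ emb_dst.T, 0.0)            # ReLU
    e = np.exp(logits - logits.max(axis=1, keepdims=True))   # stable softmax
    return e / e.sum(axis=1, keepdims=True)                  # rows sum to 1

rng = np.random.default_rng(0)
n_kpis, d = 4, 8  # e.g. 4 KPI series, 8-dim learned embeddings
A = adaptive_adjacency(rng.normal(size=(n_kpis, d)),
                       rng.normal(size=(n_kpis, d)))
print(A.shape, np.allclose(A.sum(axis=1), 1.0))  # (4, 4) True
```

During training the embeddings would be learned jointly with the GNN, so the graph over KPIs adapts as their relationships drift.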
Title: "Efficiently localizing system anomalies for cloud infrastructures: a novel Dynamic Graph Transformer based Parallel Framework". Journal of Cloud Computing, 2024-06-04. DOI: 10.1186/s13677-024-00677-x
Pub Date: 2024-05-29  DOI: 10.1186/s13677-024-00674-0
Hao Zhong, Dong Yang, Shengdong Shi, Lai Wei, Yanyan Wang
In recent years, knowledge graph technology has been widely applied in fields such as intelligent auditing, urban transportation planning, legal research, and financial analysis. Traditional auditing methods suffer from inefficiencies in data integration and analysis, making deep correlation analysis and risk identification across data difficult. Additionally, decision support systems in the auditing process may face insufficient information interpretability and limited predictive capability, affecting both the quality of auditing and the scientific rigor of decision-making. Knowledge graphs, by constructing rich networks of entity relationships, provide deep knowledge support for areas such as intelligent search, recommendation systems, and semantic understanding, significantly improving the accuracy and efficiency of information processing. This presents new opportunities to address the challenges of traditional auditing techniques. In this paper, we investigate the integration of intelligent auditing and knowledge graphs, focusing on the application of knowledge graph technology in auditing work for power engineering projects. We particularly emphasize mainstream key technologies of knowledge graphs, such as data extraction, knowledge fusion, and knowledge graph reasoning. We also introduce applications of knowledge graph technology in intelligent auditing, such as improving auditing efficiency and identifying auditing risks. Furthermore, in a cloud-edge collaboration environment, knowledge graphs can play an important role in reducing computing latency: integrating knowledge graph technology with cloud-edge collaboration enables distributed computing and data processing, improving the response speed and efficiency of intelligent auditing systems.
Finally, we summarize the current research status, outlining the challenges faced by knowledge graph technology in the field of intelligent auditing, such as scalability and security. At the same time, we elaborate on the future development trends and opportunities of knowledge graphs in intelligent auditing.
Title: "From data to insights: the application and challenges of knowledge graphs in intelligent audit". Journal of Cloud Computing, 2024-05-29. DOI: 10.1186/s13677-024-00674-0
Pub Date: 2024-05-28 DOI: 10.1186/s13677-024-00663-3
Hamza Sulimani, Rahaf Sulimani, Fahimeh Ramezani, Mohsen Naderpour, Huan Huo, Tony Jan, Mukesh Prasad
Load balancing is crucial in distributed systems like fog computing, where efficiency is paramount. Offloading, in its various forms, is the key to balancing load in distributed environments. Static offloading (SoA) falls short in heterogeneous networks, necessitating dynamic offloading to reduce latency for time-sensitive tasks. However, prevalent dynamic offloading (PoA) solutions often carry hidden costs that impact sensitive applications, including decision time, network congestion, and offloading distance. This paper introduces the Hybrid Offloading (HybOff) algorithm, which substantially enhances load balancing and resource utilization in fog networks, addressing issues in both static and dynamic approaches while leveraging clustering theory. Its goal is an uncomplicated, low-cost offloading approach that enhances IoT application performance by eliminating the consequences of these hidden costs regardless of network size. Experimental results using the iFogSim simulation tool show that HybOff significantly reduces offloading messages, offloading distance, and decision costs. It improves load balancing to 97%, surpassing SoA (64%) and PoA (88%). Additionally, it increases system utilization by an average of 50% and improves system performance by factors of 1.6 and 1.4 over SoA and PoA, respectively. In summary, this paper introduces a new offloading approach for load-balancing research in fog environments.
{"title":"HybOff: a Hybrid Offloading approach to improve load balancing in fog environments","authors":"Hamza Sulimani, Rahaf Sulimani, Fahimeh Ramezani, Mohsen Naderpour, Huan Huo, Tony Jan, Mukesh Prasad","doi":"10.1186/s13677-024-00663-3","DOIUrl":"https://doi.org/10.1186/s13677-024-00663-3","url":null,"abstract":"Load balancing is crucial in distributed systems like fog computing, where efficiency is paramount. Offloading with different approaches is the key to balancing the load in distributed environments. Static offloading (SoA) falls short in heterogeneous networks, necessitating dynamic offloading to reduce latency in time-sensitive tasks. However, prevalent dynamic offloading (PoA) solutions often come with hidden costs that impact sensitive applications, including decision time, networks congested and distance offloading. This paper introduces the Hybrid Offloading (HybOff) algorithm, which substantially enhances load balancing and resource utilization in fog networks, addressing issues in both static and dynamic approaches while leveraging clustering theory. Its goal is to create an uncomplicated low-cost offloading approach that enhances IoT application performance by eliminating the consequences of hidden costs regardless of network size. Experimental results using the iFogSim simulation tool show that HybOff significantly reduces offloading messages, distance, and decision-offloading consequences. It improves load balancing by 97%, surpassing SoA (64%) and PoA (88%). Additionally, it increases system utilization by an average of 50% and enhances system performance 1.6 times and 1.4 times more than SoA and PoA, respectively. 
In summary, this paper tries to introduce a new offloading approach in load balancing research in fog environments.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":"70 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
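The cluster-scoped offloading idea described in the abstract above can be sketched with a toy model. This is a hypothetical simplification for illustration, not the HybOff algorithm itself: each task goes to the least-loaded node within its origin node's cluster (a stand-in for avoiding long-distance offloading), and the spread of per-node load serves as a crude balance metric.

```python
import statistics

def offload(tasks, nodes, clusters):
    """Toy cluster-scoped offloading: route each task to the least-loaded
    node within the origin node's cluster. Illustrative only; not the
    HybOff algorithm from the paper."""
    load = {n: 0.0 for n in nodes}
    for origin, cost in tasks:
        # Restrict candidates to the origin's cluster, a stand-in for
        # avoiding long-distance offloading and its hidden costs.
        peers = clusters[origin]
        target = min(peers, key=lambda n: load[n])
        load[target] += cost
    return load

nodes = ["f1", "f2", "f3", "f4"]
clusters = {"f1": ["f1", "f2"], "f2": ["f1", "f2"],
            "f3": ["f3", "f4"], "f4": ["f3", "f4"]}
tasks = [("f1", 3), ("f1", 2), ("f3", 4), ("f3", 1), ("f2", 2)]
load = offload(tasks, nodes, clusters)
# A lower standard deviation of per-node load means better balance.
print(load, statistics.pstdev(load.values()))
```

Comparing this against a static policy (every task stays on its origin node) shows how even cluster-local dynamic placement flattens the load distribution.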
Pub Date: 2024-05-26 DOI: 10.1186/s13677-024-00633-9
Danial Shiraly, Ziba Eslami, Nasrollah Pakniat
With the rapid development of cloud computing technology, cloud storage services are becoming increasingly mature. However, storing sensitive data on remote servers poses privacy risks and remains a source of concern. Searchable Encryption (SE) is an effective method for protecting sensitive data while preserving server-side searchability. Hierarchical Public-key Encryption with Keyword Search (HPEKS), a variant of SE, allows users with higher access permissions to search over encrypted data sent to lower-level users. To the best of our knowledge, only four HPEKS schemes exist in the literature: two in the traditional public-key setting, and the other two built on identity-based public-key cryptosystems. Unfortunately, all four existing HPEKS schemes are vulnerable to inside Keyword Guessing Attacks (KGAs). Moreover, all of them rely on the computationally expensive bilinear pairing operation, which dramatically increases computational cost. To overcome these issues, in this paper, we introduce the notion of Hierarchical Identity-Based Authenticated Encryption with Keyword Search (HIBAEKS). We formulate a security model for HIBAEKS and propose an efficient pairing-free HIBAEKS scheme. We then prove that the proposed HIBAEKS scheme is secure under the defined security model and resistant to KGAs. Finally, we compare our proposed scheme with related constructions in terms of security requirements and computational and communication costs to demonstrate its overall superiority.
{"title":"Hierarchical Identity-Based Authenticated Encryption with Keyword Search over encrypted cloud data","authors":"Danial Shiraly, Ziba Eslami, Nasrollah Pakniat","doi":"10.1186/s13677-024-00633-9","DOIUrl":"https://doi.org/10.1186/s13677-024-00633-9","url":null,"abstract":"With the rapid development of cloud computing technology, cloud storage services are becoming more and more mature. However, the storage of sensitive data on remote servers poses privacy risks and is presently a source of concern. Searchable Encryption (SE) is an effective method for protecting sensitive data while preserving server-side searchability. Hierarchical Public key Encryption with Keyword Search (HPEKS), a new variant of SE, allows users with higher access permission to search over encrypted data sent to lower-level users. To the best of our knowledge, there exist only four HPEKS schemes in the literature. Two of them are in traditional public-key setting, and the remaining ones are identity-based public key cryptosystems. Unfortunately, all of the four existing HPEKS schemes are vulnerable against inside Keyword Guessing Attacks (KGAs). Moreover, all of the existing HPEKS schemes are based on the computationally expensive bilinear pairing operation which dramatically increases the computational costs. To overcome these issues, in this paper, we introduce the notion of Hierarchical Identity-Based Authenticated Encryption with Keyword Search (HIBAEKS). We formulate a security model for HIBAEKS and propose an efficient pairing-free HIBAEKS scheme. We then prove that the proposed HIBAEKS scheme is secure under the defined security model and is resistant against KGAs. 
Finally, we compare our proposed scheme with related constructions regarding security requirements, computational and communication costs to indicate the overall superiority of our proposed scheme.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":"42 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141171546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
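The keyword-search-over-encrypted-data flow in the abstract above can be illustrated with a symmetric-key toy. This is not HPEKS or the proposed HIBAEKS scheme (those are public-key constructions); the names `tag`, `store`, and `search` are illustrative only. It shows the trapdoor/match idea, and the comment notes why the ability to generate search tags is exactly what enables the keyword guessing attacks the paper defends against.

```python
import hmac
import hashlib

def tag(key: bytes, keyword: str) -> bytes:
    # Deterministic searchable tag: a symmetric stand-in for a
    # keyword-search ciphertext/trapdoor pair.
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def store(key: bytes, doc_id: str, keywords: list[str]) -> dict:
    # The server holds only opaque tags, never the plaintext keywords.
    return {doc_id: {tag(key, w) for w in keywords}}

def search(index: dict, trapdoor: bytes) -> list[str]:
    # The server matches a submitted trapdoor against stored tags.
    return [d for d, tags in index.items() if trapdoor in tags]

key = b"shared-secret"  # hypothetical key; real HPEKS/HIBAEKS is public-key
index = store(key, "doc1", ["cloud", "audit"])
assert search(index, tag(key, "cloud")) == ["doc1"]
assert search(index, tag(key, "iot")) == []
# Note: in public-key PEKS, anyone -- including the server -- can encrypt
# candidate keywords and test them against stored ciphertexts. That is the
# inside keyword guessing attack (KGA) that authenticated variants resist.
```

Here only the key holder can form trapdoors; in an unauthenticated public-key scheme the server gains that ability too, which is why the paper adds sender authentication.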
Following publication of the original article [1], we have been notified that the Acknowledgements declaration was published incorrectly.
It is now:
Acknowledgements
The authors express their gratitude to Huanggang Normal University for supporting this research. Furthermore, they acknowledge the support from King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Program number (RSPD2024R206).
It should be as per below:
Acknowledgements
The authors express their gratitude to Huanggang Normal University for supporting this research. Furthermore, they acknowledge the support from King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Program number (RSP2024R206).
Zhang, C., et al. (2024) Enhancing lung cancer diagnosis with data fusion and mobile edge computing using DenseNet and CNN. 17:13 https://doi.org/10.1186/s13677-024-00597-w
Authors and Affiliations
Mechanical and Electrical Engineering College, Hainan Vocational University of Science and Technology, Haikou, 571126, China
Chengping Zhang
College of Computer Science, Huanggang Normal University, Huanggang, 438000, China
Muhammad Aamir & Yurong Guan
Department of Software Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 22452, Riyadh, 11495, Saudi Arabia
Muna Al-Razgan
Department of Electrical Engineering, College of Engineering, King Saud University, P.O. Box 800, Riyadh, 11421, Saudi Arabia
Emad Mahrous Awwad
Faculty of Engineering, Chulalongkorn University Bangkok Thailand, Bangkok, Thailand
Rizwan Ullah
School of Information and Communication Engineering, Hainan University, Haikou, Hainan, China
Uzair Aslam Bhatti
Department of Computer Science, Al Ain University, Al Ain, UAE
Yazeed Yasin Ghadi
Authors
Chengping Zhang, Muhammad Aamir, Yurong Guan, Muna Al-Razgan, Emad Mahrous Awwad, Rizwan Ullah, Uzair Aslam Bhatti, Yazeed Yasin Ghadi
Corresponding author: Correspondence to Muhammad Aamir.
Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. The online version of the original article can be found at https://doi.org/10.1186/s13677-024-00597-w.
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article: Zhang, C., Aamir, M., Guan, Y. et al. Correction to: Enhancing lung cancer diagnosis with data fusion and mobile edge computing using DenseNet and CNN. J Cloud Comp 13, 111 (2024). https://doi.org/10.1186/s13677-024-00673-1
Published: 26 May 2024
{"title":"Correction to: Enhancing lung cancer diagnosis with data fusion and mobile edge computing using DenseNet and CNN","authors":"Chengping Zhang, Muhammad Aamir, Yurong Guan, Muna Al-Razgan, Emad Mahrous Awwad, Rizwan Ullah, Uzair Aslam Bhatti, Yazeed Yasin Ghadi","doi":"10.1186/s13677-024-00673-1","DOIUrl":"https://doi.org/10.1186/s13677-024-00673-1","url":null,"abstract":"<p>Following publication of the original article [1], we have been notified that Acknowledgement declaration was published incorrectly.</p><p>It is now:</p><p>Acknowledgements</p><p>The authors express their gratitude to Huanggang Normal University for supporting this research. Furthermore, they acknowledge the support from King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Program number (RSPD2024R206).</p><p>It should be as per below:</p><p>Acknowledgements</p><p>The authors express their gratitude to Huanggang Normal University for supporting this research. Furthermore, they acknowledge the support from King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Program number (RSP2024R206).</p><ol data-track-component=\"outbound reference\"><li data-counter=\"1.\"><p>Zhang, CNN (2024) Enhancing lung cancer diagnosis with data fusion and mobile edge computing using DenseNet and (2024). 17:13 https://doi.org/10.1186/s13677-024-00597-w</p></li></ol><p>Download references<svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" role=\"img\" width=\"16\"><use xlink:href=\"#icon-eds-i-download-medium\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"></use></svg></p><p>The authors express their gratitude to Huanggang Normal University for supporting this research. 
Furthermore, they acknowledge the support from King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Program number (RSP2024R206).</p><h3>Authors and Affiliations</h3><ol><li><p>Mechanical and Electrical Engineering College, Hainan Vocational University of Science and Technology, Haikou, 571126, China</p><p>Chengping Zhang</p></li><li><p>College of Computer Science, Huanggang Normal University, Huanggang, 438000, China</p><p>Muhammad Aamir & Yurong Guan</p></li><li><p>Department of Software Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 22452, Riyadh, 11495, Saudi Arabia</p><p>Muna Al-Razgan</p></li><li><p>Department of Electrical Engineering, College of Engineering, King Saud University, P.O. Box 800, Riyadh, 11421, Saudi Arabia</p><p>Emad Mahrous Awwad</p></li><li><p>Faculty of Engineering, Chulalongkorn University Bangkok Thailand, Bangkok, Thailand</p><p>Rizwan Ullah</p></li><li><p>School of Information and Communication Engineering, Hainan University, Haikou, Hainan, China</p><p>Uzair Aslam Bhatti</p></li><li><p>Department of Computer Science, Al Ain University, Al Ain, UAE</p><p>Yazeed Yasin Ghadi</p></li></ol><span>Authors</span><ol><li><span>Chengping Zhang</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li><li><span>Muhammad Aamir</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li><li><span>Yurong Guan</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li><li><span>Muna ","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":"70 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-26","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141171545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}