Pub Date : 2023-08-01DOI: 10.1007/s12243-023-00978-3
{"title":"Publisher Correction: Introduction to the special issue: 5+G network energy consumption, energy efficiency and environmental impact","authors":"","doi":"10.1007/s12243-023-00978-3","DOIUrl":"10.1007/s12243-023-00978-3","url":null,"abstract":"","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 5-6","pages":"253 - 253"},"PeriodicalIF":1.9,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50430173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-07-24DOI: 10.1007/s12243-023-00972-9
Amer Sallam, Noran Aklan, Norhan Aklan, Taha H. Rassem
The exponential growth of the Internet demands, in return, new technologies and protocols that can efficiently handle the requirements of such growth. These developments have enabled many new services with sophisticated requirements that go beyond the capabilities of the TCP/IP host-centric model and increase its complexity. Researchers have proposed a new architecture for Information-Centric Networking (ICN), called Named-Data Networking (NDN), based on a strict pull-based model as an alternative to TCP/IP. This model has gained significant attention in the research community. However, it still suffers from the looped-data redundancy problem, which may lead to frequent link failures when dealing with real-time streaming due to persistent Interest packets. In this paper, a push-based model combined with a bitmap algorithm is proposed to improve ICN efficiency by eliminating these problems. The model was evaluated through extensive experimental simulations. The results demonstrate its feasibility: it prevents most of the data redundancy and mitigates frequent link failures.
{"title":"A dense memory representation using bitmap data structure for improving NDN push-traffic model","authors":"Amer Sallam, Noran Aklan, Norhan Aklan, Taha H. Rassem","doi":"10.1007/s12243-023-00972-9","DOIUrl":"10.1007/s12243-023-00972-9","url":null,"abstract":"<div><p>The exponential growth of the Internet demands in return new technologies and protocols that can handle the new requirements of such growth efficiently. Such developments have enabled and offered many new services with sophisticated requirements that go beyond the TCP/IP host-centric model capabilities and increase its complexity. Researchers have proposed new architecture called Named-Data Networking (NDN) for Information-Centric Networking (ICN) based on a strict pull-based model as an alternative option to TCP/IP. This model has gained significant attention in the research field. However, this model still suffers from the looped data redundancy problem, which may lead to frequent link failures when dealing with real-time streaming due to the persistent interest packets. In this paper, a push-based model along with a bitmap algorithm has been proposed for improving the ICN efficiency by eliminating such problems. The presented model involved extensive experimental simulations. 
The experimental results demonstrate the model feasibility by preventing most of the data redundancy and improving the harmonic rein of frequent link failures respectively.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"79 1-2","pages":"73 - 83"},"PeriodicalIF":1.8,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78769507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
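As a hedged illustration of the bitmap idea summarized above (the class name, capacity, and API below are assumptions for the sketch, not the paper's actual design): a dense bit array lets a router record which packet sequence numbers it has already forwarded, so looped duplicates can be suppressed in O(1) without storing full content names.

```python
# Illustrative sketch only: a dense bitmap marking already-seen sequence
# numbers, so a looped duplicate is detected in O(1) and dropped.

class PacketBitmap:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.bits = bytearray((capacity + 7) // 8)  # one bit per sequence number

    def mark(self, seq: int) -> bool:
        """Mark seq as seen; return True if new, False if a duplicate."""
        byte, bit = divmod(seq % self.capacity, 8)
        mask = 1 << bit
        if self.bits[byte] & mask:
            return False  # duplicate -> suppress instead of re-forwarding
        self.bits[byte] |= mask
        return True

bm = PacketBitmap(1024)
assert bm.mark(42) is True    # first arrival is forwarded
assert bm.mark(42) is False   # looped duplicate is suppressed
```

The bitmap trades a fixed, small memory footprint (one bit per slot) for exact per-name state, which is what makes the representation "dense".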
Pub Date : 2023-07-17DOI: 10.1007/s12243-023-00971-w
Nahid Eddermoug, Abdeljebar Mansour, Mohamed Sadik, Essaid Sabir, Mohamed Azmi
Nowadays, cloud computing is one of the key enablers of productivity in many domains. However, the technology is still subject to security attacks. This article aims to overcome the limitations of intrusion detection and prevention systems (IDPSs) in detecting unknown attacks, while addressing the black-box issue (lack of interpretability) of the machine learning (ML) models widely used in cybersecurity. We propose a “klm-based profiling and preventing security attacks (klm-PPSA)” system (v. 1.1) to detect, profile, and prevent both known and unknown security attacks in cloud environments and cloud-based IoT. The system is based on klm security factors related to passwords, biometrics, and keystroke techniques. In addition, two sub-schemes were developed from the updated and improved klm-PPSA scheme (v. 1.1) to analyze the impact of these factors on the performance of the generated models (k-PPSA, km-PPSA, and klm-PPSA). The models were built using two accurate and interpretable ML algorithms: regularized class association rules (RCAR) and classification based on associations (CBA). The empirical results show that klm-PPSA is the best of the models owing to its high performance and attack-prediction capability using RCAR/CBA, and that RCAR performs better than CBA.
{"title":"klm-PPSA v. 1.1: machine learning-augmented profiling and preventing security attacks in cloud environments","authors":"Nahid Eddermoug, Abdeljebar Mansour, Mohamed Sadik, Essaid Sabir, Mohamed Azmi","doi":"10.1007/s12243-023-00971-w","DOIUrl":"10.1007/s12243-023-00971-w","url":null,"abstract":"<div><p>Nowadays, cloud computing is one of the key enablers for productivity in different domains. However, this technology is still subject to security attacks. This article aims at overcoming the limitations of detecting unknown attacks by “intrusion detection and prevention systems (IDPSs)” while addressing the black-box issue (lack of interpretability) of the widely used machine learning (ML) models in cybersecurity. We propose a “<i>klm</i>-based profiling and preventing security attacks (<i>klm</i>-PPSA)” system (v. 1.1) to detect, profile, and prevent both known and unknown security attacks in cloud environments or even cloud-based IoT. This system is based on <i>klm</i> security factors related to passwords, biometrics, and keystroke techniques. Besides, two sub-schemes of the system were developed based on the updated and improved version of the <i>klm</i>-PPSA scheme (v. 1.1) to analyze the impact of these factors on the performance of the generated models (<i>k</i>-PPSA, <i>km</i>-PPSA, and <i>klm</i>-PPSA). The models were built using two accurate and interpretable ML algorithms: regularized class association rules (RCAR) and classification based on associations (CBA). The empirical results show that <i>klm</i>-PPSA is the best model compared to other models owing to its high performance and attack prediction capability using RCAR/CBA. 
In addition, RCAR performs better than CBA.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 11-12","pages":"729 - 755"},"PeriodicalIF":1.9,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81833440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
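The abstract above builds its interpretable models from class association rules. A minimal sketch of how a CBA-style classifier predicts (rule sets, thresholds, and the fallback class here are invented for illustration and are not from the paper): rules are ordered by confidence and the first rule whose antecedent is satisfied decides the class.

```python
# Hedged sketch of classification based on associations (CBA-style):
# a rule is (antecedent itemset, class label, confidence); prediction uses
# the highest-confidence rule whose antecedent matches, else a default.

def cba_predict(rules, instance, default="benign"):
    for antecedent, label, _conf in sorted(rules, key=lambda r: -r[2]):
        if antecedent <= instance:      # all rule items present in the instance
            return label
    return default

# illustrative rules over klm-style factors (passwords, keystrokes)
rules = [
    ({"weak_password", "no_biometric"}, "attack", 0.95),
    ({"new_keystroke_profile"}, "attack", 0.80),
]
assert cba_predict(rules, {"weak_password", "no_biometric", "tls"}) == "attack"
assert cba_predict(rules, {"tls"}) == "benign"
```

Interpretability comes for free: the rule that fired is itself the explanation, which is what distinguishes RCAR/CBA from black-box models.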
Pub Date : 2023-07-15DOI: 10.1007/s12243-023-00962-x
Sevda Özge Bursa, Özlem Durmaz İncel, Gülfem Işıklar Alptekin
Human activity recognition (HAR) is a research domain that enables continuous monitoring of human behaviors for various purposes, from assisted living to surveillance in smart-home environments. These applications generally work with a rich collection of sensor data generated by smartphones and other low-power wearable devices. The amount of collected data can quickly become immense, necessitating time- and resource-consuming computations. Deep learning (DL) has recently become a promising trend in HAR. However, it is challenging to train and run DL algorithms on mobile devices due to their limited battery power, memory, and computation units. In this paper, we evaluate and compare the performance of four different deep architectures trained on three datasets from the HAR literature (WISDM, MobiAct, OpenHAR). We use the TensorFlow Lite platform with quantization techniques to convert the models into lighter versions for deployment on mobile devices, and we compare the original models with their optimized versions in terms of accuracy, size, and resource usage. The experiments reveal that model size and resource consumption can be significantly reduced through TensorFlow Lite optimization without sacrificing model accuracy.
{"title":"Building Lightweight Deep learning Models with TensorFlow Lite for Human Activity Recognition on Mobile Devices","authors":"Sevda Özge Bursa, Özlem Durmaz İncel, Gülfem Işıklar Alptekin","doi":"10.1007/s12243-023-00962-x","DOIUrl":"10.1007/s12243-023-00962-x","url":null,"abstract":"<div><p>Human activity recognition (HAR) is a research domain that enables continuous monitoring of human behaviors for various purposes, from assisted living to surveillance in smart home environments. These applications generally work with a rich collection of sensor data generated using smartphones and other low-power wearable devices. The amount of collected data can quickly become immense, necessitating time and resource-consuming computations. Deep learning (DL) has recently become a promising trend in HAR. However, it is challenging to train and run DL algorithms on mobile devices due to their limited battery power, memory, and computation units. In this paper, we evaluate and compare the performance of four different deep architectures trained on three datasets from the HAR literature (WISDM, MobiAct, OpenHAR). We use the TensorFlow Lite platform with quantization techniques to convert the models into lighter versions for deployment on mobile devices. We compare the performance of the original models in terms of accuracy, size, and resource usage with their optimized versions. 
The experiments reveal that the model size and resource consumption can significantly be reduced when optimized with TensorFlow Lite without sacrificing the accuracy of the models.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 11-12","pages":"687 - 702"},"PeriodicalIF":1.9,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87740879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
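To make the size/accuracy trade-off above concrete, here is a pure-Python illustration of the post-training quantization idea that TensorFlow Lite applies (this shows the concept only, not TF Lite's actual API or numerics): float32 weights are mapped to int8 with a scale and zero point, cutting storage roughly 4x at the cost of a small, bounded rounding error.

```python
# Conceptual sketch of int8 affine quantization (not TF Lite's implementation):
# each float weight w is stored as round(w / scale) + zero_point in [-128, 127].

def quantize_int8(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0           # one int8 step in float units
    zero_point = round(-128 - lo / scale)       # maps lo near -128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(w)
restored = dequantize(q, s, z)
# reconstruction error stays within one quantization step
assert all(abs(a - b) < s for a, b in zip(w, restored))
```

Since int8 uses a quarter of float32's bytes per weight, this is where the roughly 4x model-size reduction reported for quantized models comes from.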
Pub Date : 2023-07-11DOI: 10.1007/s12243-023-00969-4
Hazel Murray, David Malone
Human-chosen passwords are often predictable. Research has shown that users from similar demographics, or users choosing passwords for the same website, often choose similar passwords. Human password guessers leverage this knowledge to tailor their attacks. In this paper, we demonstrate that a learning algorithm can actively learn these same characteristics of the passwords as it is guessing, and that it can leverage this information to adaptively improve its guessing. Furthermore, we show that if we split our candidate wordlists according to these characteristics, a multi-armed bandit style guessing algorithm can adaptively choose to guess from the wordlist that will maximise successes.
{"title":"Adaptive password guessing: learning language, nationality and dataset source","authors":"Hazel Murray, David Malone","doi":"10.1007/s12243-023-00969-4","DOIUrl":"10.1007/s12243-023-00969-4","url":null,"abstract":"<div><p>Human chosen passwords are often predictable. Research has shown that users of similar demographics or choosing passwords for the same website will often choose similar passwords. This knowledge is leveraged by human password guessers who use it to tailor their attacks. In this paper, we demonstrate that a learning algorithm can actively learn these same characteristics of the passwords as it is guessing and that it can leverage this information to adaptively improve its guessing. Furthermore, we show that if we split our candidate wordlists based on these characteristics, then a multi-armed bandit style guessing algorithm can adaptively choose to guess from the wordlist which will maximise successes.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 7-8","pages":"385 - 400"},"PeriodicalIF":1.9,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s12243-023-00969-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50471457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
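A minimal sketch of the multi-armed-bandit wordlist selection described above, using epsilon-greedy for brevity (the paper's actual bandit policy, statistics, and wordlists may differ; everything below is illustrative):

```python
# Epsilon-greedy sketch: each wordlist is an arm; its reward estimate is the
# fraction of guesses from it that cracked a password so far.
import random

def pick_wordlist(successes, guesses, epsilon=0.1, rng=random):
    """Choose which wordlist to draw the next guess from."""
    if rng.random() < epsilon:                       # explore occasionally
        return rng.randrange(len(guesses))
    rates = [s / g if g else float("inf")            # try untested lists first
             for s, g in zip(successes, guesses)]
    return max(range(len(rates)), key=rates.__getitem__)

# after 100 guesses from each list, list 1 (say, same-language passwords)
# has cracked the most accounts, so exploitation concentrates on it:
successes, guesses = [3, 17, 5], [100, 100, 100]
rng = random.Random(7)
picks = [pick_wordlist(successes, guesses, rng=rng) for _ in range(200)]
assert picks.count(1) > 150
```

In a real attack loop, `successes` and `guesses` would be updated after every guess, so the policy adapts as it learns which characteristics (language, nationality, source dataset) fit the target set.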
Pub Date : 2023-07-04DOI: 10.1007/s12243-023-00954-x
Preechai Mekbungwan, Adisorn Lertsinsrubtavee, Sukumal Kitisin, Giovanni Pau, Kanchana Kanchanasut
We propose to perform robust distributed computation, such as analysing and filtering raw data in real time, as close as possible to the sensors, in an environment with intermittent Internet connectivity and resource-constrained computation-capable IoT nodes. To enable this computation, we deploy a named data network (NDN) for IoT applications, which allows data to be referenced by names. The novelty of our work lies in including computation functions in each NDN router and allowing functions to be treated as executable Data objects. A function call is expressed as part of an NDN Interest name, with proper name prefixes for NDN routing. With the results of the function computation returned as NDN Data packets, a normal NDN node is converted into an ActiveNDN node. Distributed function executions can be orchestrated by an ActiveNDN program to perform the required computations in the network. In this paper, we describe the design of ActiveNDN with a small prototype network as a proof of concept. We conduct extensive simulation experiments to investigate the performance and effectiveness of ActiveNDN in large-scale wireless IoT networks.
{"title":"Towards programmable IoT with ActiveNDN","authors":"Preechai Mekbungwan, Adisorn Lertsinsrubtavee, Sukumal Kitisin, Giovanni Pau, Kanchana Kanchanasut","doi":"10.1007/s12243-023-00954-x","DOIUrl":"10.1007/s12243-023-00954-x","url":null,"abstract":"<div><p>We propose to perform robust distributed computation, such as analysing and filtering raw data in real time, as close as possible to sensors in an environment with intermittent Internet connectivity and resource-constrained computable IoT nodes. To enable this computation, we deploy a named data network (NDN) for IoT applications, which allows data to be referenced by names. The novelty of our work lies in the inclusion of computation functions in each NDN router and allowing functions to be treated as executable Data objects. Function call is expressed as part of the NDN Interest names with proper name prefixes for NDN routing. With the results of the function computation returned as NDN Data packets, a normal NDN is converted to an ActiveNDN node. Distributed function executions can be orchestrated by an ActiveNDN program to perform required computations in the network. In this paper, we describe the design of ActiveNDN with a small prototype network as a proof of concept. We conduct extensive simulation experiments to investigate the performance and effectiveness of ActiveNDN in large-scale wireless IoT networks. 
Two programmable IoT air quality monitoring applications on our real-world ActiveNDN testbed are described, demonstrating that programmable IoT devices with on-site execution are capable of handling increasingly complex and time-sensitive IoT scenarios.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 11-12","pages":"667 - 684"},"PeriodicalIF":1.9,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85671563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
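A hedged sketch of the core ActiveNDN mechanism described above: a function call is encoded under a routable name prefix, the node executes the named function, and the result stands in for the returned Data packet. The prefix, name layout, and function registry below are invented for illustration and are not the paper's actual wire format.

```python
# Illustrative only: dispatch a function call carried in an Interest-style name.

FUNCTIONS = {
    "avg": lambda xs: sum(xs) / len(xs),   # e.g. average of sensor readings
    "max": max,
}

def handle_interest(name: str):
    """e.g. '/activendn/exec/avg/(12.1,11.8,13.0)' -> result as a Data stand-in."""
    parts = name.strip("/").split("/")
    if parts[:2] != ["activendn", "exec"]:
        raise ValueError("not a function-call prefix")
    func = FUNCTIONS[parts[2]]
    args = [float(x) for x in parts[3].strip("()").split(",")]
    return {"name": name, "content": func(args)}

data = handle_interest("/activendn/exec/avg/(12.1,11.8,13.0)")
assert abs(data["content"] - 12.3) < 1e-9
```

Because the call lives entirely in the name, ordinary NDN forwarding routes it to wherever the function (an executable Data object) is available, which is what lets computation migrate toward the sensors.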
Pub Date : 2023-07-01DOI: 10.1007/s12243-023-00970-x
Pan Chongrui, Yu Guanding
In millimeter-wave (mmWave) communications, multi-connectivity can enhance communication capacity, albeit at the cost of increased power consumption. In this paper, we investigate a deep-unfolding-based approach to joint user association and power allocation that maximizes the energy efficiency of mmWave networks with multi-connectivity. The problem is formulated as a mixed-integer nonlinear fractional optimization problem. First, we develop a three-stage iterative algorithm that achieves an upper bound of the original problem. Then, we unfold the iterative algorithm with a convolutional neural network (CNN)-based accelerator and trainable parameters. Moreover, we propose a CNN-aided greedy algorithm to obtain a feasible solution. The simulation results show that the proposed algorithm achieves good performance and strong robustness with much lower computational complexity.
{"title":"Deep unfolding for energy-efficient resource allocation in mmWave networks with multi-connectivity","authors":"Pan Chongrui, Yu Guanding","doi":"10.1007/s12243-023-00970-x","DOIUrl":"10.1007/s12243-023-00970-x","url":null,"abstract":"<div><p>In millimeter-wave (mmWave) communications, multi-connectivity can enhance the communication capacity while at the cost of increased power consumption. In this paper, we investigate a deep-unfolding-based approach for joint user association and power allocation to maximize the energy efficiency of mmWave networks with multi-connectivity. The problem is formulated as a mixed integer nonlinear fractional optimization problem. First, we develop a three-stage iterative algorithm to achieve an upper bound of the original problem. Then, we unfold the iterative algorithm with a convolutional neural network (CNN)-based accelerator and trainable parameters. Moreover, we propose a CNN-aided greedy algorithm to obtain a feasible solution. The simulation results show that the proposed algorithm can achieve good performance and strong robustness but with much reduced computational complexity.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 9-10","pages":"627 - 639"},"PeriodicalIF":1.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50432228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-06-29DOI: 10.1007/s12243-023-00960-z
Kongyang Chen, Yao Huang, Yiwen Wang, Xiaoxue Zhang, Bing Mi, Yu Wang
Due to emerging concerns about public and private privacy issues in smart cities, many countries and organizations are establishing laws and regulations (e.g., the GDPR) to protect data security. One of the most important provisions is the so-called Right to be Forgotten, which requires that such data no longer be put to inappropriate use. To truly forget these data, they must be deleted from all databases that contain them and also removed from all machine learning models trained on them. The latter task is called machine unlearning. One naive method for machine unlearning is to retrain a new model after the data are removed; in the current big-data era, however, this takes a very long time. In this paper, we borrow the idea of the Generative Adversarial Network (GAN) and propose a fast machine unlearning method that unlearns data in an adversarial way. Experimental results show that our method yields significant improvements in forgetting performance, model accuracy, and time cost.
{"title":"Privacy preserving machine unlearning for smart cities","authors":"Kongyang Chen, Yao Huang, Yiwen Wang, Xiaoxue Zhang, Bing Mi, Yu Wang","doi":"10.1007/s12243-023-00960-z","DOIUrl":"10.1007/s12243-023-00960-z","url":null,"abstract":"<div><p>Due to emerging concerns about public and private privacy issues in smart cities, many countries and organizations are establishing laws and regulations (e.g., GPDR) to protect the data security. One of the most important items is the so-called <i>The Right to be Forgotten</i>, which means that these data should be forgotten by all inappropriate use. To truly forget these data, they should be deleted from all databases that cover them, and also be removed from all machine learning models that are trained on them. The second one is called <i>machine unlearning</i>. One naive method for machine unlearning is to retrain a new model after data removal. However, in the current big data era, this will take a very long time. In this paper, we borrow the idea of Generative Adversarial Network (GAN), and propose a fast machine unlearning method that unlearns data in an adversarial way. Experimental results show that our method produces significant improvement in terms of the forgotten performance, model accuracy, and time cost.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"79 1-2","pages":"61 - 72"},"PeriodicalIF":1.8,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76305969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-06-23DOI: 10.1007/s12243-023-00964-9
Rim Jouini, Chiraz Houaidia, Leila Azouz Saidane
The integration of information and communication technologies (ICT) can be of great utility in monitoring and evaluating an elderly person’s health condition and their behavior in performing Activities of Daily Living (ADL), with the aim of delaying, as long as possible, recourse to health-care institutions (e.g., nursing homes and hospitals). In this research, we propose a predictive model for detecting behavioral and health-related changes in a patient who is monitored continuously in an assisted-living environment. We focus on tracking the evolution of the dependency level and detecting the loss of autonomy of an elderly person using a Hidden Markov Model-based approach. In this predictive process, we include the correlation between cardiovascular history and hypertension, as the latter is considered the primary risk factor for cardiovascular diseases, stroke, kidney failure, and many other diseases. Our simulation was applied to an empirical dataset of 3046 elderly persons monitored over 9 years. The results show that our model accurately evaluates a person’s dependency, follows the evolution of their autonomy over time, and thus predicts moments of significant change.
{"title":"Hidden Markov Model for early prediction of the elderly’s dependency evolution in ambient assisted living","authors":"Rim Jouini, Chiraz Houaidia, Leila Azouz Saidane","doi":"10.1007/s12243-023-00964-9","DOIUrl":"10.1007/s12243-023-00964-9","url":null,"abstract":"<div><p>The integration of information and communication technologies (ICT) can be of great utility in monitoring and evaluating the elderly’s health condition and its behavior in performing Activities of Daily Living (ADL) in the perspective to avoid, as long as possible, the delays of recourse to health care institutions (e.g., nursing homes and hospitals). In this research, we propose a predictive model for detecting behavioral and health-related changes in a patient who is monitored continuously in an assisted living environment. We focus on keeping track of the dependency level evolution and detecting the loss of autonomy for an elderly person using a Hidden Markov Model based approach. In this predictive process, we were interested in including the correlation between cardiovascular history and hypertension as it is considered the primary risk factor for cardiovascular diseases, stroke, kidney failure and many other diseases. Our simulation was applied to an empirical dataset that concerned 3046 elderly persons monitored over 9 years. 
The results show that our model accurately evaluates person’s dependency, follows his autonomy evolution over time and thus predicts moments of important changes.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 9-10","pages":"599 - 615"},"PeriodicalIF":1.9,"publicationDate":"2023-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50507439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-06-19DOI: 10.1007/s12243-023-00965-8
Meryeme Ayache, Ikram El Asri, Jamal N. Al-Karaki, Mohamed Bellouch, Amjad Gawanmeh, Karim Tazzi
The emergence of the Cognitive Internet of Medical Things (CIoMT) during the COVID-19 pandemic has been transformational. The CIoMT is a rapidly evolving technology that uses artificial intelligence, big data, and the Internet of Things (IoT) to provide personalized patient care. The CIoMT can be used to monitor and track vital signs, such as temperature, blood pressure, and heart rate, giving healthcare providers real-time information about a patient’s health. However, in such systems, the problem of privacy during data processing and sharing remains. Federated learning (FL) therefore plays an important role in the CIoMT by allowing multiple medical devices to collaborate securely in a distributed and privacy-preserving manner. Classical centralized FL models, however, have several limitations, such as single points of failure and malicious servers. This paper presents an enhancement of the existing DASS-CARE 2.0 framework using a blockchain-based federated learning framework. The proposed solution provides a secure and reliable distributed learning platform for medical data sharing and analytics in a multi-organizational environment, and offers an innovative way to overcome the challenges encountered in traditional FL. Furthermore, we provide a comprehensive discussion of the advantages of the proposed framework through a comparative study between DASS-CARE 2.0 and the traditional centralized FL model, taking the aforementioned security challenges into consideration. Overall, the performance of the proposed framework shows significant advantages over traditional methods.
{"title":"Enhanced DASS-CARE 2.0: a blockchain-based and decentralized FL framework","authors":"Meryeme Ayache, Ikram El Asri, Jamal N. Al-Karaki, Mohamed Bellouch, Amjad Gawanmeh, Karim Tazzi","doi":"10.1007/s12243-023-00965-8","DOIUrl":"10.1007/s12243-023-00965-8","url":null,"abstract":"<div><p>The emergence of the Cognitive Internet of Medical Things (CIoMT) during the COVID-19 pandemic has been transformational. The CIoMT is a rapidly evolving technology that uses artificial intelligence, big data, and the Internet of Things (IoT) to provide personalized patient care. The CIoMT can be used to monitor and track vital signs, such as temperature, blood pressure, and heart rate, thus giving healthcare providers real-time information about a patient’s health. However, in such systems, the problem of privacy during data processing or sharing remains. Therefore, federated learning (FL) plays an important role in the Cognitive Internet of Medical Things (CIoMT) by allowing multiple medical devices to securely collaborate in a distributed and privacy-preserving manner. On the other hand, classical centralized FL models have several limitations, such as single points of failure and malicious servers. This paper presents an enhancement of the existing DASS-CARE 2.0 framework by using a blockchain-based federated learning framework. The proposed solution provides a secure and reliable distributed learning platform for medical data sharing and analytics in a multi-organizational environment. The blockchain-based federated learning framework offrs an innovative solution to overcome the challenges encountered in traditional FL. Furthermore, we provide a comprehensive discussion of the advantages of the proposed framework through a comparative study between our DASS-CARE 2.0 and the traditional centralized FL model while taking the aforementioned security challenges into consideration. 
Overall, the performance of the proposed framework shows significant advantages compared to traditional methods.</p></div>","PeriodicalId":50761,"journal":{"name":"Annals of Telecommunications","volume":"78 11-12","pages":"703 - 715"},"PeriodicalIF":1.9,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86594149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
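Whether the coordinator is a central server or, as in the framework above, a blockchain, the aggregation step at the heart of federated learning is the same: clients train locally and share only weight updates, which are averaged. A minimal sketch of standard federated averaging (the weight vectors and client sizes below are illustrative, not from the paper):

```python
# FedAvg sketch: the global model is the dataset-size-weighted mean of the
# clients' locally trained weight vectors; raw patient data never leaves
# the clients.

def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# three hospitals, each contributing a locally trained weight vector
updates = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.6]]
sizes = [100, 300, 100]
global_w = fed_avg(updates, sizes)
assert abs(global_w[0] - 0.34) < 1e-9
```

In a blockchain-coordinated variant, each update would be recorded and validated on-chain before aggregation, removing the single trusted server that centralized FL depends on.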