Low energy consumption is a vital requirement in wireless sensor network-based water pipeline monitoring (WWPM) systems: sensor nodes must remain operable over extended durations without frequent battery replacement. Given that sensor nodes in such applications are typically battery-powered and often physically inaccessible, maximizing energy efficiency by eliminating unnecessary consumption is essential. This paper presents an experimental study of how a hybrid technique combining distributed computing, hierarchical sensing, and duty cycling affects the energy consumption of a sensor node and thereby prolongs the lifespan of a WWPM system. A custom sensor node is designed around the ESP32 MCU and the nRF24L01+ transceiver. Hierarchical sensing is implemented with LSM9DS1 and ADXL344 accelerometers, distributed computing through a distributed Kalman filter, and duty cycling through interrupt-enabled sleep/wakeup functionality. The experimental results reveal that combining distributed computing, hierarchical sensing, and duty cycling reduces energy consumption by a factor of eight compared with distributed computing alone.
{"title":"Energy Consumption Reduction in Wireless Sensor Network-Based Water Pipeline Monitoring Systems via Energy Conservation Techniques","authors":"Valery Nkemeni, Fabien Mieyeville, Pierre Tsafack","doi":"10.3390/fi15120402","DOIUrl":"https://doi.org/10.3390/fi15120402","url":null,"abstract":"In wireless sensor network-based water pipeline monitoring (WWPM) systems, a vital requirement emerges: the achievement of low energy consumption. This primary goal arises from the fundamental necessity to ensure the sustained operability of sensor nodes over extended durations, all without the need for frequent battery replacement. Given that sensor nodes in such applications are typically battery-powered and often physically inaccessible, maximizing energy efficiency by minimizing unnecessary energy consumption is of vital importance. This paper presents an experimental study that investigates the impact of a hybrid technique, incorporating distributed computing, hierarchical sensing, and duty cycling, on the energy consumption of a sensor node in prolonging the lifespan of a WWPM system. A custom sensor node is designed using the ESP32 MCU and nRF24L01+ transceiver. Hierarchical sensing is implemented through the use of LSM9DS1 and ADXL344 accelerometers, distributed computing through the implementation of a distributed Kalman filter, and duty cycling through the implementation of interrupt-enabled sleep/wakeup functionality. The experimental results reveal that combining distributed computing, hierarchical sensing and duty cycling reduces energy consumption by a factor of eight compared to the lone implementation of distributed computing.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"2018 24","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139002063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, the Internet of Vehicles (IoV) has garnered significant attention from researchers and automotive industry professionals due to its expanding range of applications and services aimed at enhancing road safety and driver/passenger comfort. However, the massive amount of data spread across this network makes securing it challenging. The IoV network generates, collects, and processes vast amounts of valuable and sensitive data that intruders can manipulate. An intrusion detection system (IDS) is the most common way to protect such networks: it monitors activity for signs of a security threat and raises an alert when an anomaly is detected. Applying machine learning to large datasets helps detect the anomalies that reveal potential intrusions. However, traditional centralized learning algorithms require gathering data from end devices and centralizing it for training on a single device. Vehicle makers and owners may not readily share the sensitive data necessary for training the models, and granting a single device access to enormous volumes of personal information raises significant privacy concerns: any system-level compromise could result in massive data leaks. To alleviate these problems, more secure options, such as Federated Learning (FL), must be explored. FL is a decentralized machine learning technique that trains models on client devices while preserving user data privacy. Although FL for IDS has made significant progress, to our knowledge no comprehensive survey has been dedicated to the applications of FL for IDS in the IoV environment. To address this gap, we undertake a systematic literature review of FL-based IDSs in the IoV. We introduce a general taxonomy of FL systems to ensure a coherent structure and guide future research, survey the state of the art in FL-based intrusion detection in the IoV from FL's inception in 2016 through 2023, and identify challenges and future research directions based on the existing literature.
{"title":"Federated Learning for Intrusion Detection Systems in Internet of Vehicles: A General Taxonomy, Applications, and Future Directions","authors":"Jadil Alsamiri, Khalid Alsubhi","doi":"10.3390/fi15120403","DOIUrl":"https://doi.org/10.3390/fi15120403","url":null,"abstract":"In recent years, the Internet of Vehicles (IoV) has garnered significant attention from researchers and automotive industry professionals due to its expanding range of applications and services aimed at enhancing road safety and driver/passenger comfort. However, the massive amount of data spread across this network makes securing it challenging. The IoV network generates, collects, and processes vast amounts of valuable and sensitive data that intruders can manipulate. An intrusion detection system (IDS) is the most typical method to protect such networks. An IDS monitors activity on the road to detect any sign of a security threat and generates an alert if a security anomaly is detected. Applying machine learning methods to large datasets helps detect anomalies, which can be utilized to discover potential intrusions. However, traditional centralized learning algorithms require gathering data from end devices and centralizing it for training on a single device. Vehicle makers and owners may not readily share the sensitive data necessary for training the models. Granting a single device access to enormous volumes of personal information raises significant privacy concerns, as any system-related problems could result in massive data leaks. To alleviate these problems, more secure options, such as Federated Learning (FL), must be explored. A decentralized machine learning technique, FL allows model training on client devices while maintaining user data privacy. Although FL for IDS has made significant progress, to our knowledge, there has been no comprehensive survey specifically dedicated to exploring the applications of FL for IDS in the IoV environment, similar to successful systems research in deep learning. To address this gap, we undertake a well-organized literature review on IDSs based on FL in an IoV environment. We introduce a general taxonomy to describe the FL systems to ensure a coherent structure and guide future research. Additionally, we identify the relevant state of the art in FL-based intrusion detection within the IoV domain, covering the years from FL’s inception in 2016 through 2023. Finally, we identify challenges and future research directions based on the existing literature.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"52 1","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138975086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software-Defined Networking (SDN) is a pivotal paradigm in network implementation, exerting a profound influence on the trajectory of technological advancement. Security is critical in SDN, and distributed denial of service (DDoS) is a particularly disruptive threat, capable of causing large-scale outages. DDoS works by generating malicious traffic that mimics normal network activity, leading to service disruptions. Mechanisms capable of distinguishing between benign and malicious traffic therefore serve as the first line of defense against DDoS. We propose traffic classification as a foundational strategy for combating DDoS: categorizing traffic into malicious and normal streams is a crucial first step toward effective mitigation. Left unchecked, DDoS can overwhelm networked servers, resulting in service failures and SDN server downtime. Our research employs a dataset encompassing both benign and malicious traffic within the SDN environment, using a set of 23 features for classification. We first compare GenClass, a grammatical-evolution-based classifier, with three common classification methods: Bayes, K-Nearest Neighbours (KNN), and Random Forest. GenClass achieves an average class error of 6.58%, against 32.59% for Bayes, 18.45% for KNN, and 30.70% for Random Forest. We then apply three grammatical-evolution-based classification methods to the same data: in terms of average class error, GenClass achieves 6.58%, while NNC and FC2GEN achieve 12.51% and 15.86%, respectively.
{"title":"Distributed Denial of Service Classification for Software-Defined Networking Using Grammatical Evolution","authors":"E. Spyrou, Ioannis Tsoulos, C. Stylios","doi":"10.3390/fi15120401","DOIUrl":"https://doi.org/10.3390/fi15120401","url":null,"abstract":"Software-Defined Networking (SDN) stands as a pivotal paradigm in network implementation, exerting a profound influence on the trajectory of technological advancement. The critical role of security within SDN cannot be overstated, with distributed denial of service (DDoS) emerging as a particularly disruptive threat, capable of causing large-scale disruptions. DDoS operates by generating malicious traffic that mimics normal network activity, leading to service disruptions. It becomes imperative to deploy mechanisms capable of distinguishing between benign and malicious traffic, serving as the initial line of defense against DDoS challenges. In addressing this concern, we propose the utilization of traffic classification as a foundational strategy for combatting DDoS. By categorizing traffic into malicious and normal streams, we establish a crucial first step in the development of effective DDoS mitigation strategies. The deleterious effects of DDoS extend to the point of potentially overwhelming networked servers, resulting in service failures and SDN server downtimes. To investigate and address this issue, our research employs a dataset encompassing both benign and malicious traffic within the SDN environment. A set of 23 features is harnessed for classification purposes, forming the basis for a comprehensive analysis and the development of robust defense mechanisms against DDoS in SDN. Initially, we compare GenClass with three common classification methods, namely the Bayes, K-Nearest Neighbours (KNN), and Random Forest methods. The proposed solution improves the average class error, demonstrating 6.58% error as opposed to the Bayes method error of 32.59%, KNN error of 18.45%, and Random Forest error of 30.70%. Moreover, we utilize classification procedures based on three methods based on grammatical evolution, which are applied to the aforementioned data. In particular, in terms of average class error, GenClass exhibits 6.58%, while NNC and FC2GEN exhibit average class errors of 12.51% and 15.86%, respectively.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"2 4","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139004690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated learning (FL) and blockchains exhibit significant commonality, complementarity, and alignment in aspects such as application domains, architectural features, and privacy protection mechanisms. In recent years, there have been notable advancements in combining the two technologies, particularly in data privacy protection, data sharing incentives, and computational performance. Although some surveys of blockchain-based federated learning (BFL) exist, they predominantly focus on the BFL framework and its classifications and lack in-depth analyses of the pivotal issues BFL addresses. This work aims to help researchers understand the latest research achievements and development directions in the integration of FL with blockchains. First, we introduce the relevant research in FL and blockchain technology and highlight FL's existing shortcomings. Next, we conduct a comparative analysis of existing BFL frameworks, delving into the significant FL problems that the combination of blockchain and FL addresses. Finally, we summarize the application prospects of BFL in domains such as the Internet of Things, Industrial Internet of Things, Internet of Vehicles, and healthcare services, as well as open challenges and future research directions.
{"title":"A Survey on Blockchain-Based Federated Learning","authors":"Lang Wu, Weijian Ruan, Jinhui Hu, Yaobin He","doi":"10.3390/fi15120400","DOIUrl":"https://doi.org/10.3390/fi15120400","url":null,"abstract":"Federated learning (FL) and blockchains exhibit significant commonality, complementarity, and alignment in various aspects, such as application domains, architectural features, and privacy protection mechanisms. In recent years, there have been notable advancements in combining these two technologies, particularly in data privacy protection, data sharing incentives, and computational performance. Although there are some surveys on blockchain-based federated learning (BFL), these surveys predominantly focus on the BFL framework and its classifications, yet lack in-depth analyses of the pivotal issues addressed by BFL. This work aims to assist researchers in understanding the latest research achievements and development directions in the integration of FL with blockchains. Firstly, we introduced the relevant research in FL and blockchain technology and highlighted the existing shortcomings of FL. Next, we conducted a comparative analysis of existing BFL frameworks, delving into the significant problems in the realm of FL that the combination of blockchain and FL addresses. Finally, we summarized the application prospects of BFL technology in various domains such as the Internet of Things, Industrial Internet of Things, Internet of Vehicles, and healthcare services, as well as the challenges that need to be addressed and future research directions.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"40 10","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139007002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-dimensional object detection involves estimating the dimensions, orientations, and locations of 3D bounding boxes. Intersection over Union (IoU) loss measures the overlap between a predicted 3D bounding box and the ground-truth box. The localization task uses smooth-L1 loss combined with IoU to estimate an object's location, while the classification task identifies the object class inside each 3D bounding box. Localization suffers a performance gap when the predicted and ground-truth boxes barely overlap or do not overlap at all (i.e., the boxes are far apart), and when one box fully contains the other. Existing axis-aligned IoU losses also degrade on rotated 3D bounding boxes. This research addresses these shortcomings of bounding box regression in 3D object detection by introducing an Improved Intersection over Union (IIoU) loss. The proposed loss function is evaluated on LiDAR-based and camera-LiDAR fusion methods using the KITTI dataset.
{"title":"Addressing the Gaps of IoU Loss in 3D Object Detection with IIoU","authors":"N. Ravi, Mohamed El-Sharkawy","doi":"10.3390/fi15120399","DOIUrl":"https://doi.org/10.3390/fi15120399","url":null,"abstract":"Three-dimensional object detection involves estimating the dimensions, orientations, and locations of 3D bounding boxes. Intersection of Union (IoU) loss measures the overlap between predicted 3D box and ground truth 3D bounding boxes. The localization task uses smooth-L1 loss with IoU to estimate the object’s location, and the classification task identifies the object/class category inside each 3D bounding box. Localization suffers a performance gap in cases where the predicted and ground truth boxes overlap significantly less or do not overlap, indicating the boxes are far away, and in scenarios where the boxes are inclusive. Existing axis-aligned IoU losses suffer performance drop in cases of rotated 3D bounding boxes. This research addresses the shortcomings in bounding box regression problems of 3D object detection by introducing an Improved Intersection Over Union (IIoU) loss. The proposed loss function’s performance is experimented on LiDAR-based and Camera-LiDAR-based fusion methods using the KITTI dataset.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"35 8","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139010245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WiFi is a widely used wireless technology for data transmission. WiFi can also play a crucial role in simultaneously broadcasting multimedia content to many devices in venues such as classrooms, theaters, and stadiums. Broadcasting allows efficient dissemination of information to all devices connected to the network, so it becomes crucial to ensure that the WiFi network has sufficient capacity to transmit broadcast multimedia content without interruptions or delays. However, using WiFi for broadcasting presents challenges that can degrade the user experience, most notably the difficulty of obtaining real-time feedback from potentially hundreds or thousands of users, since their feedback messages may collide. This work focuses on providing the Access Point with accurate feedback on the percentage of users not receiving broadcast traffic correctly, so it can adjust its Modulation and Coding Scheme (MCS) while transmitting broadcast multimedia content to many users. The proposed method comprises two sequential algorithms. To reduce the probability of collisions after each transmitted message, the first algorithm searches for the best probability with which users transmit ACK/NACK messages, depending on whether messages are received correctly. This feedback allows the Access Point to estimate the number of stations (STAs) receiving the transmitted messages correctly or incorrectly. The second algorithm uses this estimate to select the best MCS while keeping the percentage of users not receiving broadcast content correctly within acceptable margins, thus providing users with the best possible content quality. We implemented the proposed method in the ns-3 simulator; the results show it yields quick, reliable feedback that lets the Access Point converge to the best possible MCS within a few seconds, regardless of user density and the dimensions of the scenario.
{"title":"PROFEE: A Probabilistic-Feedback Based Speed Rate Adaption for IEEE 802.11bc","authors":"Javier Gómez, J. Camacho-Escoto, Luis Orozco-Barbosa, Diego Rodriguez-Torres","doi":"10.3390/fi15120396","DOIUrl":"https://doi.org/10.3390/fi15120396","url":null,"abstract":"WiFi is a widely used wireless technology for data transmission. WiFi can also play a crucial role in simultaneously broadcasting content to multiple devices in multimedia transmission for venues such as classrooms, theaters, and stadiums, etc. Broadcasting allows for the efficient dissemination of information to all devices connected to the network, and it becomes crucial to ensure that the WiFi network has sufficient capacity to transmit broadcast multimedia content without interruptions or delays. However, using WiFi for broadcasting presents challenges that can impact user experience, specifically the difficulty of obtaining real-time feedback from potentially hundreds or thousands of users due to potential collisions of feedback messages. This work focuses on providing accurate feedback to the Access Point about the percentage of users not receiving broadcast traffic correctly so it can adjust its Modulation and Coding Scheme (MCS) while transmitting broadcast multimedia content to many users. The proposed method is comprised of two sequential algorithms. In order to reduce the probability of a collision after transmitting each message, an algorithm searches for the best probability value for users to transmit ACK/NACK messages, depending on whether messages are received correctly or not. This feedback allows the Access Point to estimate the number of STAs correctly/incorrectly receiving the messages being transmitted. A second algorithm uses this estimation so the Access Point can select the best MCS while maintaining the percentage of users not receiving broadcast content correctly within acceptable margins, thus providing users with the best possible content quality. We implemented the proposed method in the ns-3 simulator, and the results show it yields quick, reliable feedback to the Access Point that was then able to adjust to the best possible MCS in only a few seconds, regardless of the user density and dimensions of the scenario.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"509 ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138983237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The concept of smart cities, which aim to enhance the quality of urban life through innovative technologies and policies, has gained significant momentum in recent years. As we approach the era of next-generation smart cities, it becomes crucial to explore the key enabling technologies that will shape their development. This work reviews the leading technologies driving the future of smart cities. It begins by introducing the main requirements of different smart city applications; then the enabling technologies are presented. The work highlights the transformative potential of the Internet of Things (IoT) to facilitate data collection and analysis for improving urban infrastructure and services. As a complementary technology, distributed edge computing brings computational power closer to devices, reducing reliance on centralized data centers. Another key technology is virtualization, which optimizes resource utilization by allowing multiple virtual environments to run efficiently on shared hardware. Software-defined networking (SDN) emerges as a pivotal technology that brings flexibility and scalability to smart city networks, allowing dynamic network management and resource allocation. Artificial intelligence (AI) offers another approach to managing smart cities by enabling predictive analytics, automation, and smart decision making based on vast amounts of data. Lastly, blockchain is introduced as a promising approach for achieving the security smart cities require. The review concludes by identifying potential research directions to address the challenges and complexities of integrating these key enabling technologies.
{"title":"Enabling Technologies for Next-Generation Smart Cities: A Comprehensive Review and Research Directions","authors":"Shrouk A. Ali, Shaimaa Ahmed Elsaid, Abdelhamied A. Ateya, Muhammed ElAffendi, A. El-latif","doi":"10.3390/fi15120398","DOIUrl":"https://doi.org/10.3390/fi15120398","url":null,"abstract":"The concept of smart cities, which aim to enhance the quality of urban life through innovative technologies and policies, has gained significant momentum in recent years. As we approach the era of next-generation smart cities, it becomes crucial to explore the key enabling technologies that will shape their development. This work reviews the leading technologies driving the future of smart cities. The work begins by introducing the main requirements of different smart city applications; then, the enabling technologies are presented. This work highlights the transformative potential of the Internet of things (IoT) to facilitate data collection and analysis to improve urban infrastructure and services. As a complementary technology, distributed edge computing brings computational power closer to devices, reducing the reliance on centralized data centers. Another key technology is virtualization, which optimizes resource utilization, enabling multiple virtual environments to run efficiently on shared hardware. Software-defined networking (SDN) emerges as a pivotal technology that brings flexibility and scalability to smart city networks, allowing for dynamic network management and resource allocation. Artificial intelligence (AI) is another approach for managing smart cities by enabling predictive analytics, automation, and smart decision making based on vast amounts of data. Lastly, the blockchain is introduced as a promising approach for smart cities to achieve the required security. The review concludes by identifying potential research directions to address the challenges and complexities brought about by integrating these key enabling technologies.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"581 ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138983125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid evolution of the Internet of Everything (IoE) has significantly enhanced global connectivity and multimedia content sharing, while simultaneously escalating the unauthorized distribution of multimedia content and posing risks to intellectual property rights. In 2022 alone, about 130 billion accesses to potentially non-compliant websites were recorded, underscoring the challenge for industries reliant on copyright-protected assets. Amid prevailing uncertainties and the need for technical, AI-integrated solutions, this study makes two pivotal contributions. First, it establishes a novel taxonomy for safeguarding against and identifying IoE-based content infringements. Second, it proposes an innovative architecture combining IoE components with automated sensors to compile a dataset reflective of potential copyright breaches. This dataset is analyzed with an advanced Natural Language Processing (NLP) algorithm based on Bidirectional Encoder Representations from Transformers (BERT), further fine-tuned with a dense neural network (DNN), achieving a remarkable 98.71% accuracy in pinpointing websites that violate copyright.
{"title":"Methodological Approach for Identifying Websites with Infringing Content via Text Transformers and Dense Neural Networks","authors":"Aldo Hernandez-Suarez, G. Sánchez-Pérez, L. K. Toscano-Medina, Hector Perez-Meana, J. Portillo-Portillo, J. Olivares-Mercado","doi":"10.3390/fi15120397","DOIUrl":"https://doi.org/10.3390/fi15120397","url":null,"abstract":"The rapid evolution of the Internet of Everything (IoE) has significantly enhanced global connectivity and multimedia content sharing, simultaneously escalating the unauthorized distribution of multimedia content, posing risks to intellectual property rights. In 2022 alone, about 130 billion accesses to potentially non-compliant websites were recorded, underscoring the challenges for industries reliant on copyright-protected assets. Amidst prevailing uncertainties and the need for technical and AI-integrated solutions, this study introduces two pivotal contributions. First, it establishes a novel taxonomy aimed at safeguarding and identifying IoE-based content infringements. Second, it proposes an innovative architecture combining IoE components with automated sensors to compile a dataset reflective of potential copyright breaches. This dataset is analyzed using a Bidirectional Encoder Representations from Transformers-based advanced Natural Language Processing (NLP) algorithm, further fine-tuned by a dense neural network (DNN), achieving a remarkable 98.71% accuracy in pinpointing websites that violate copyright.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"579 ","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138983126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Existing research on dependent task offloading and resource allocation assumes that edge servers provide computational and communication resources free of charge. This paper proposes a two-stage resource allocation method to address this issue. In the first stage, users incentivize edge servers to provide resources. We formulate the incentive problem as a multivariate Stackelberg game that takes both computational and communication resources into account, and we analyze the uniqueness of the Stackelberg equilibrium under information sharing. Considering participants' privacy, we extend the research to scenarios without information sharing, modeling the multivariable game as a partially observable Markov decision process (POMDP). To obtain the optimal incentive decision in this scenario, we design a reinforcement learning algorithm based on the learning game. In the second stage, we propose a greedy-based deep reinforcement learning algorithm aimed at minimizing task execution time by optimizing resource and task allocation strategies. Simulation results demonstrate that the algorithm designed for non-information-sharing scenarios effectively approximates the theoretical Stackelberg equilibrium and outperforms three benchmark methods. After resources and sub-tasks are allocated by the greedy-based deep reinforcement learning algorithm, the execution delay of the dependent task is significantly lower than with local processing.
{"title":"A Learning Game-Based Approach to Task-Dependent Edge Resource Allocation","authors":"Zuopeng Li, Hengshuai Ju, Zepeng Ren","doi":"10.3390/fi15120395","DOIUrl":"https://doi.org/10.3390/fi15120395","url":null,"abstract":"The existing research on dependent task offloading and resource allocation assumes that edge servers can provide computational and communication resources free of charge. This paper proposes a two-stage resource allocation method to address this issue. In the first stage, users incentivize edge servers to provide resources. We formulate the incentive problem in this stage as a multivariate Stackelberg game, which takes into account both computational and communication resources. In addition, we also analyze the uniqueness of the Stackelberg equilibrium under information sharing conditions. Considering the privacy issues of the participants, the research is extended to scenarios without information sharing, where the multivariable game problem is modeled as a partially observable Markov decision process (POMDP). In order to obtain the optimal incentive decision in this scenario, a reinforcement learning algorithm based on the learning game is designed. In the second stage, we propose a greedy-based deep reinforcement learning algorithm that is aimed at minimizing task execution time by optimizing resource and task allocation strategies. Finally, the simulation results demonstrate that the algorithm designed for non-information sharing scenarios can effectively approximate the theoretical Stackelberg equilibrium, and its performance is found to be better than that of the other three benchmark methods. After the allocation of resources and sub-tasks by the greedy-based deep reinforcement learning algorithm, the execution delay of the dependent task is significantly lower than that in local processing.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"67 6","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138590883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Researchers are exploring methods that exploit digital twins as all-purpose abstractions for sophisticated modelling and simulation, bringing elements of the real world into the virtual realm. Digital twins are essential elements of the digital transformation of society, benefiting manufacturing, smart cities, healthcare, and, in general, systems that include humans in the loop. As the metaverse concept continues to evolve, the line separating the virtual and the real will progressively fade. Considering the metaverse's goal of emulating our social reality, it becomes essential to examine the aspects that characterise real-world interaction practices and to explicitly model both physical and social contexts. While the unfolding metaverse may reshape these practices in ways distinct from their real-world counterparts, our position is that it is essential to incorporate social theories into the modelling of digital twins within the metaverse. In this work, we present our perspective by introducing a digital practice model inspired by the theory of social practice, and we illustrate the model through the scenario of a virtual grocery shop designed to help older adults reduce their social isolation.
{"title":"Envisioning Digital Practices in the Metaverse: A Methodological Perspective","authors":"Luca Sabatucci, A. Augello, Giuseppe Caggianese, Luigi Gallo","doi":"10.3390/fi15120394","DOIUrl":"https://doi.org/10.3390/fi15120394","url":null,"abstract":"Researchers are exploring methods that exploit digital twins as all-purpose abstractions for sophisticated modelling and simulation, bringing elements of the real world into the virtual realm. Digital twins are essential elements of the digital transformation of society, which mostly benefit manufacturing, smart cities, healthcare contexts, and in general systems that include humans in the loop. As the metaverse concept continues to evolve, the line separating the virtual and the real will progressively fade away. Considering the metaverse’s goal to emulate our social reality, it becomes essential to examine the aspects that characterise real-world interaction practices and explicitly model both physical and social contexts. While the unfolding metaverse may reshape these practices in distinct ways from their real-world counterparts, our position is that it is essential to incorporate social theories into the modelling processes of digital twins within the metaverse. In this work, we discuss our perspective by introducing a digital practice model inspired by the theory of social practice. We illustrate this model by exploiting the scenario of a virtual grocery shop designed to help older adults reduce their social isolation.","PeriodicalId":37982,"journal":{"name":"Future Internet","volume":"15 9","pages":""},"PeriodicalIF":3.4,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138594289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}