The Internet of Things (IoT) has revolutionized digital ecosystems by interconnecting billions of devices across industries, enabling enhanced automation, real-time monitoring, and data-driven decision-making. However, this expansion has introduced significant security and privacy challenges owing to the heterogeneity of IoT devices, their resource constraints, and the decentralized nature of their architectures. Large Language Models (LLMs) have recently shown promise in improving cybersecurity by enabling automated threat intelligence, anomaly detection, malware classification, and privacy-aware security enforcement. This systematic review therefore examines research published between 2015 and 2025 at the intersection of LLMs, IoT security, and privacy. We evaluate state-of-the-art LLM-based security frameworks, highlighting their effectiveness, limitations, and impact on IoT cybersecurity. In addition, the review identifies key research gaps and challenges, providing insight into the scalability, efficiency, and adaptability of LLM-driven security solutions. This work aims to advance AI-driven IoT security frameworks and support the development of resilient, privacy-preserving cybersecurity architectures.
"The role of Large Language Models in IoT security: A systematic review of advances, challenges, and opportunities," by Saeid Jamshidi, Negar Shahabi, Amin Nikanjam, Kawser Wazed Nafi, Foutse Khomh, Carol Fung. Internet of Things, vol. 34, Article 101735. DOI: 10.1016/j.iot.2025.101735
Pub Date: 2025-11-01 | Epub Date: 2025-09-11 | DOI: 10.1016/j.iot.2025.101740
Julio D. Arjona, José A. Barriga, Fernando Díaz Cantero, Jose M. Chaves-González, Pedro J. Clemente
The Internet of Things (IoT) is rapidly transforming the modern world by connecting billions of devices that generate a continuous flow of data. Its rapid expansion has broadened applications from home automation to nearly every industry. Selecting an appropriate IoT architecture is increasingly challenging due to the complexity of components, interactions, and diverse application requirements. Ensuring that an IoT architecture meets performance, reliability, and resource constraints before deployment is essential for avoiding inefficiencies and bottlenecks. Emulating IoT architectures during the design phase allows architects to test alternative designs and assess system performance under various conditions, helping ensure optimal resource utilization and responsiveness. Several simulation approaches, particularly those based on Model-Driven Development (MDD), have been proposed to model and analyse IoT environments. These methodologies provide high-level abstractions of complex IoT architectures, enabling systematic experimentation and evaluation. This work introduces a novel methodology and tools for simulating and refining IoT architectures using an evolutionary approach. They allow architects to assess and improve performance based on configurable parameters such as latency and CPU usage. Through iterative architecture modifications and testing, the proposed MDD-based approach optimizes system design, ensuring that the final architecture is fine-tuned to deliver the best possible performance for a given application. The proposal is validated through a smart-parking case study.
"Optimizing IoT architectures by using model driven approach and evolution strategy," by Julio D. Arjona, José A. Barriga, Fernando Díaz Cantero, Jose M. Chaves-González, Pedro J. Clemente. Internet of Things, vol. 34, Article 101740. DOI: 10.1016/j.iot.2025.101740
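The paper's evolutionary refinement loop is described only at a high level; its essence, mutate a candidate architecture configuration and keep the mutant only when the simulated metric improves, matches a (1+1) evolution strategy. The sketch below uses a stand-in latency function and invented parameter names, not the authors' MDD simulator:

```python
import random

def simulate_latency(cfg):
    # Stand-in objective: pretend latency is minimized when
    # buffer_size is near 256 and workers is near 8 (purely illustrative).
    return (cfg["buffer_size"] - 256) ** 2 + 10 * (cfg["workers"] - 8) ** 2

def mutate(cfg, rng):
    # Perturb each configurable parameter by a small random step.
    child = dict(cfg)
    child["buffer_size"] = max(1, cfg["buffer_size"] + rng.randint(-32, 32))
    child["workers"] = max(1, cfg["workers"] + rng.randint(-2, 2))
    return child

def one_plus_one_es(initial, iterations=500, seed=42):
    rng = random.Random(seed)
    best, best_cost = initial, simulate_latency(initial)
    for _ in range(iterations):
        child = mutate(best, rng)
        cost = simulate_latency(child)
        if cost <= best_cost:  # keep the child only if it is no worse
            best, best_cost = child, cost
    return best, best_cost

cfg, cost = one_plus_one_es({"buffer_size": 64, "workers": 1})
```

Replacing `simulate_latency` with a call into an actual architecture emulator turns the same loop into the design-phase tuning the abstract describes.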
Pub Date: 2025-11-01 | Epub Date: 2025-10-20 | DOI: 10.1016/j.iot.2025.101800
Ali Alssaiari, Maher Alharby, Qasim Jan, Shahid Hussain, Sana Ullah
Urbanisation and digital transformation have led to the development of smart city applications that rely on the efficiency of interconnected Internet of Things devices, which are often resource-constrained. This situation presents challenges in both energy efficiency and cybersecurity. Although current AI-based solutions enhance cybersecurity, they may consume significant resources, potentially worsening energy efficiency. To address these challenges, advanced mechanisms are needed that balance resource utilisation and energy consumption while maintaining cybersecurity. This paper introduces an approach that integrates Deep Learning with the Black Hole Algorithm (BHA) to optimise energy use without compromising security within the smart city ecosystem. Our methodology employs Long Short-Term Memory networks to capture IoT energy-consumption patterns and incorporates contextual markers for effective anomaly detection. Simultaneously, the BHA serves as a metaheuristic optimisation technique for finding optimal control decisions. This dual strategy aims to reduce anomalies in IoT networks while improving energy efficiency, resulting in enhanced smart city applications. The effectiveness of the approach is demonstrated on an IoT-based smart city dataset, achieving anomaly detection with 99.60 % accuracy, 99.53 % precision, 99.40 % recall, and a 99.80 % F-measure. In addition, energy-efficiency values of 66.67 %, 71.43 %, 73.33 %, 77.78 %, and 63.64 % were achieved in comparison with state-of-the-art methods in smart city applications.
"Balancing anomaly detection and energy efficiency in smart city IoT networks using hybrid deep learning and black hole algorithm," by Ali Alssaiari, Maher Alharby, Qasim Jan, Shahid Hussain, Sana Ullah. Internet of Things, vol. 34, Article 101800. DOI: 10.1016/j.iot.2025.101800
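The Black Hole Algorithm mentioned above is a published metaheuristic: candidate solutions ("stars") gravitate toward the current best ("the black hole"), and any star that crosses the event horizon is swallowed and re-spawned at random. A minimal single-objective sketch, with a toy sphere function standing in for the paper's control-decision cost:

```python
import random

def black_hole_search(objective, dim, bounds, n_stars=20, iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    stars = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_stars)]
    fitness = [objective(s) for s in stars]
    bh = min(range(n_stars), key=lambda i: fitness[i])  # best star = black hole
    for _ in range(iters):
        for i in range(n_stars):
            if i == bh:
                continue
            # Each star moves a random fraction of the way toward the black hole.
            stars[i] = [x + rng.random() * (b - x)
                        for x, b in zip(stars[i], stars[bh])]
            fitness[i] = objective(stars[i])
            if fitness[i] < fitness[bh]:
                bh = i  # a star that outperforms the black hole becomes it
        # Stars inside the event horizon are swallowed and re-spawned at random.
        horizon = fitness[bh] / (sum(fitness) + 1e-12)
        for i in range(n_stars):
            if i == bh:
                continue
            dist = sum((a - b) ** 2 for a, b in zip(stars[i], stars[bh])) ** 0.5
            if dist < horizon:
                stars[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fitness[i] = objective(stars[i])
                if fitness[i] < fitness[bh]:
                    bh = i
    return stars[bh], fitness[bh]

# Toy objective: minimize the 3-D sphere function on [-5, 5]^3.
best, val = black_hole_search(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```

In the paper's setting the objective would score candidate control decisions by their energy and security cost rather than this illustrative sphere function.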
Pub Date: 2025-11-01 | Epub Date: 2025-09-25 | DOI: 10.1016/j.iot.2025.101778
Avni Rustemi, Fisnik Dalipi
The integration of the Internet of Things (IoT), blockchain technology (BT), and Artificial Intelligence (AI) is transforming agriculture into a smart, data-driven system designed to enhance productivity, transparency, and automation. Population growth and limited resources make these technologies increasingly critical, especially in regions with scarce water, nutrients, or fertile soil. IoT provides real-time monitoring and physical data collection through sensors and edge devices; BT ensures data security, traceability, and transparency across supply chains; and AI enables predictive analytics and automated decision-making, reducing direct farmer intervention. This systematic literature review focuses on IoT implementations in the agriculture ecosystem that aim to increase agricultural productivity and efficiency. Furthermore, it analyzes the interplay of IoT, AI, and BT in agriculture, with emphasis on measurable impacts, the security of communication protocols, socio-technical implications, and automation and decision-making, among other aspects. Despite their promise, integration faces notable barriers such as data privacy, interoperability, real-time processing, and implementation costs. Using the PRISMA framework, 35 studies were selected from an initial pool of 977 articles published between 2019 and 2025. A rigorous quality assessment extracted insights on integration strategies, technical limitations, and practical applications. The review highlights opportunities and challenges in adopting IoT, AI, and BT for sustainable smart agriculture. It concludes with recommendations for researchers, policymakers, technology developers, and practitioners to address current gaps, strengthen security and interoperability, and guide future advancements toward resilient and efficient agricultural systems.
"Synergizing IoT, AI, and blockchain for smart agriculture: Challenges, opportunities, and future directions," by Avni Rustemi, Fisnik Dalipi. Internet of Things, vol. 34, Article 101778. DOI: 10.1016/j.iot.2025.101778
Pub Date: 2025-11-01 | Epub Date: 2025-09-26 | DOI: 10.1016/j.iot.2025.101766
Erika Rosas, Benjamín Arratia, Ángel Martín Furones, Javier Prades, Pietro Manzoni, José M. Cecilia
Accurate water level monitoring in remote and harsh environments is critical for managing water resources, assessing climate impacts, and anticipating flood risks. Traditional in situ sensors often fail in these contexts due to corrosion, biofouling, or limited access for maintenance. Global Navigation Satellite System Interferometric Reflectometry (GNSS-IR) offers a passive, low-cost alternative by extracting water level information from multipath reflections of GNSS signals. However, using multi-constellation GNSS-IR for near real-time monitoring is challenging due to its high computational and communication demands, especially in low-power, low-connectivity areas.
This paper presents a novel edge computing-based GNSS-IR system designed for deployment in harsh environments. The system, validated in the highly saline La Mata–Torrevieja Natural Park (Spain), integrates a low-cost GNSS receiver and a modular gateway that executes the GNSS-IR processing locally. To efficiently transmit results over long distances, it uses the AlLoRa protocol, an advanced LPWAN solution optimized for high-throughput, low-power communication. By eliminating the need for raw data transmission and enabling local analytics, the system reduces bandwidth, enhances responsiveness, and supports continuous operation in constrained conditions. Experimental validation demonstrates the system’s effectiveness in achieving near real-time water level estimation with minimal infrastructure.
"Edge-enabled GNSS-IR for efficient water level monitoring in harsh environments," by Erika Rosas, Benjamín Arratia, Ángel Martín Furones, Javier Prades, Pietro Manzoni, José M. Cecilia. Internet of Things, vol. 34, Article 101766. DOI: 10.1016/j.iot.2025.101766
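For context on the GNSS-IR processing the gateway runs locally: standard GNSS-IR theory says the detrended SNR of a reflected signal oscillates with frequency 2h/λ in the sin(elevation) domain, so the reflector height h (and hence the water level) falls out of a periodogram peak. A minimal pure-Python sketch on synthetic, noise-free data; the sampling, height grid, and satellite geometry below are illustrative, not the paper's pipeline:

```python
import math

WAVELENGTH_L1 = 0.1903  # GPS L1 carrier wavelength in metres

def estimate_reflector_height(sin_e, snr, h_min=0.5, h_max=10.0, step=0.02):
    """Brute-force periodogram over candidate reflector heights.

    The detrended SNR varies as cos(4*pi*h/lambda * sin(e)), i.e. with
    frequency 2h/lambda in the sin(elevation) domain, so the candidate
    height whose sinusoid correlates best with the data wins.
    """
    best_h, best_power = h_min, -1.0
    h = h_min
    while h <= h_max:
        f = 2.0 * h / WAVELENGTH_L1
        c = sum(s * math.cos(2 * math.pi * f * x) for x, s in zip(sin_e, snr))
        q = sum(s * math.sin(2 * math.pi * f * x) for x, s in zip(sin_e, snr))
        power = c * c + q * q
        if power > best_power:
            best_h, best_power = h, power
        h += step
    return best_h

# Synthetic check: a 3.2 m reflector height should be recovered.
true_h = 3.2
sin_e = [0.05 + i * 0.001 for i in range(400)]  # rising satellite arc
snr = [math.cos(4 * math.pi * true_h / WAVELENGTH_L1 * x) for x in sin_e]
estimate = estimate_reflector_height(sin_e, snr)
```

Real deployments additionally detrend the raw SNR, handle irregular sampling (typically with a Lomb-Scargle periodogram), and average over many satellite arcs.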
Pub Date: 2025-11-01 | Epub Date: 2025-09-25 | DOI: 10.1016/j.iot.2025.101739
Patrick Sapel, Anna Garoufali, Christian Hopmann
A fundamental aspect of Industry 4.0 is interoperable asset-to-asset communication, essential for creating cross-company “lab of labs”. Such collaboration enables seamless data exchange across companies, streamlining manual processes like evaluating the capability of assets for specific manufacturing processes. While foundational technologies for asset interoperability exist, their integration and application in industrial contexts remain limited. Our research explores the integration of ontologies, which structure domain knowledge, and Asset Administration Shells (AAS), which represent assets in a standardized manner, to facilitate industrial interoperability. We have developed an architecture using an ontology-based graph database populated with AAS data, allowing automatic linking of AAS instances to corresponding class nodes. To demonstrate practical value, we have implemented this architecture using standardized software and tools, applying it to assess technical capabilities for a customer request in injection molding. Results confirm the potential for asset-to-asset communication in industry via graph databases, with benefits in flexible and scalable data management. However, limitations include unaddressed data safety and security concerns, as well as the need for updated database entries when AAS instances change. Additionally, challenges in scaling to integrate other domain ontologies should be tackled in future research. This work lays a foundation for advancing interoperable, cross-company data-sharing ecosystems.
"Leveraging ontologies and Asset Administration Shells for decision-support: A case study on production planning within the injection molding domain," by Patrick Sapel, Anna Garoufali, Christian Hopmann. Internet of Things, vol. 34, Article 101739. DOI: 10.1016/j.iot.2025.101739
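The automatic linking of AAS instances to ontology class nodes can be illustrated with a toy in-memory triple store. The field names `idShort` and `semanticId` come from the AAS metamodel, but the ontology IRIs, the clamping-force property, and the capability query are hypothetical stand-ins for the paper's injection-molding case, not its actual schema:

```python
class TripleStore:
    """A toy in-memory RDF-style graph of (subject, predicate, object) triples."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def objects(self, s, p):
        return {o for s_, p_, o in self.triples if s_ == s and p_ == p}

    def subjects(self, p, o):
        return {s for s, p_, o_ in self.triples if p_ == p and o_ == o}

def ingest_aas(store, aas):
    """Insert an AAS instance and auto-link it to its ontology class node."""
    asset = aas["idShort"]
    store.add(asset, "rdf:type", aas["semanticId"])  # the automatic class link
    for prop, value in aas["properties"].items():
        store.add(asset, prop, value)

store = TripleStore()
ingest_aas(store, {"idShort": "IM-Machine-01",
                   "semanticId": "onto:InjectionMoldingMachine",
                   "properties": {"clampingForceKN": 1500}})
ingest_aas(store, {"idShort": "IM-Machine-02",
                   "semanticId": "onto:InjectionMoldingMachine",
                   "properties": {"clampingForceKN": 800}})

# Capability check for a customer request needing >= 1000 kN clamping force.
capable = [a for a in store.subjects("rdf:type", "onto:InjectionMoldingMachine")
           if any(v >= 1000 for v in store.objects(a, "clampingForceKN"))]
```

In the paper's architecture the same pattern runs against a real graph database populated from AAS submodels, with the ontology supplying the class hierarchy.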
The rise of self-driving and connected vehicles is reshaping modern transportation, combining advanced communication with intelligent decision-making to revolutionize road safety and traffic flow. However, the open and dynamic nature of vehicular ad hoc networks (VANETs) exposes them to significant security threats, including false data injection and message tampering, which can disrupt trust and cooperation among nodes. In this work, we propose a novel cooperative misbehavior detection mechanism that integrates a Pre-Bayesian Q-learning framework with a majority game model to effectively identify and isolate malicious nodes. Our approach introduces dynamic coalition formation to exclude nodes with low trust values, and incorporates message classification by source to assess the reliability of information from roadside units (RSUs), same-manufacturer vehicles, and different-manufacturer vehicles. Iterative belief updates dynamically adjust trust levels among nodes, while ex-post validation ensures stable and consistent decision-making. Extensive simulations demonstrate that our model achieves high accuracy in distinguishing benign and malicious nodes, even in scenarios with up to 45 % adversarial influence. The results confirm that combining Q-learning with dynamic coalitions and message classification significantly enhances resilience, reliability, and consensus in VANETs under adversarial conditions. This framework provides a scalable and adaptable solution for securing connected autonomous systems and strengthening trust in real-world intelligent transportation networks.
"Trust-aware and game-theoretic cooperative detection of misbehavior in connected vehicles," by Adil Attiaoui, Mouna Elmachkour, Abdellatif Kobbane, Marwane Ayaida, Hamidou Tembine. Internet of Things, vol. 34, Article 101799. DOI: 10.1016/j.iot.2025.101799
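The trust mechanics sketched in the abstract (accept/reject decisions, ex-post validation, coalition formation over trust values) can be illustrated with a toy Q-learner. This is a deliberate simplification, not the paper's Pre-Bayesian game model; the node names, reward scheme, and noise rate are invented:

```python
import random

class TrustQLearner:
    """Per-neighbor Q-values over {accept, reject}; trust = Q(accept) - Q(reject)."""

    def __init__(self, alpha=0.2, epsilon=0.1, seed=7):
        self.q = {}  # (neighbor, action) -> estimated value
        self.alpha, self.epsilon = alpha, epsilon
        self.rng = random.Random(seed)

    def act(self, neighbor):
        if self.rng.random() < self.epsilon:  # occasional exploration
            return self.rng.choice(["accept", "reject"])
        return max(["accept", "reject"],
                   key=lambda a: self.q.get((neighbor, a), 0.0))

    def learn(self, neighbor, action, message_true):
        # Ex-post validation: +1 for accepting a true message or rejecting
        # a false one, -1 otherwise.
        correct = (action == "accept") == message_true
        reward = 1.0 if correct else -1.0
        key = (neighbor, action)
        self.q[key] = self.q.get(key, 0.0) + self.alpha * (reward - self.q.get(key, 0.0))

    def coalition(self, neighbors, threshold=0.0):
        # Dynamic coalition: keep only neighbors whose learned trust clears the bar.
        return [n for n in neighbors
                if self.q.get((n, "accept"), 0.0) - self.q.get((n, "reject"), 0.0) > threshold]

learner = TrustQLearner()
env = random.Random(3)
for _ in range(300):
    for node, honest in [("rsu-1", True), ("veh-mal", False)]:
        truthful = honest if env.random() < 0.9 else not honest  # 10% channel noise
        action = learner.act(node)
        learner.learn(node, action, truthful)
members = learner.coalition(["rsu-1", "veh-mal"])
```

Even with 10% observation noise, the honest RSU's trust converges positive and the malicious vehicle is excluded from the coalition, mirroring the isolation behavior the abstract describes at a much simpler level.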
This paper investigates the feasibility of using LoRaWAN as the communication protocol for a Spectrum Sensing Provider (SSP) in Cognitive Radio (CR) networks. We evaluate LoRaWAN's capability to deliver reliable spectrum-detection services by analyzing how key protocol parameters, such as duty-cycle restrictions, gateway capacity, and network interference, affect delivery of the sensing outcome in Cooperative Spectrum Sensing (CSS) scenarios. Additionally, we propose a novel cost function for selecting CSS groups that optimizes the trade-off between energy consumption and channel availability, along with a greedy scheduling algorithm to enhance sensing timeliness. Numerical analysis shows that our cost function may improve spectral and energy efficiency by 50% compared to classical SNR-based approaches, while the greedy algorithm effectively balances the SSP's response to service requests. Our findings highlight that, despite LoRaWAN constraints, increasing the number of users and detected channels significantly enhances SSP performance, enabling it to meet diverse spectrum-sensing demands more efficiently.
"LoRa-SPaaS: Spectrum sensing as a service using LoRaWAN: Resources management and practical considerations," by Abbass Nasser, Hussein Al Haj Hassan, Alaaeddine Ramadan, Chamseddine Zaki, Nada Sarkis, Jad Abou Chaaya, Ali Mansour. Internet of Things, vol. 34, Article 101750. DOI: 10.1016/j.iot.2025.101750
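A concrete example of why duty-cycle restrictions constrain SSP timeliness: the standard LoRa time-on-air formula from the Semtech SX127x datasheet bounds how often a device may legally transmit a sensing report. The 20-byte payload and 1% duty cycle below are illustrative choices, not values from the paper:

```python
import math

def lora_time_on_air(payload_bytes, sf=9, bw_hz=125_000, cr=1,
                     preamble_symbols=8, explicit_header=True, crc=True):
    """Packet time-on-air in seconds, per the Semtech SX127x datasheet.

    cr=1 denotes coding rate 4/5; de is the low-data-rate optimization bit.
    """
    t_sym = (2 ** sf) / bw_hz
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_symbols = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_symbols + 4.25 + payload_symbols) * t_sym

def min_report_interval(payload_bytes, duty_cycle=0.01, **kw):
    """Smallest legal gap between reports under an EU868-style duty cycle."""
    toa = lora_time_on_air(payload_bytes, **kw)
    return toa * (1.0 / duty_cycle - 1.0)

toa = lora_time_on_air(20, sf=9)       # a 20-byte sensing report at SF9
gap = min_report_interval(20, sf=9)    # enforced silence before the next one
```

At SF9/125 kHz a 20-byte report occupies roughly 185 ms of air, so a 1% duty cycle forces silence of over 18 s between reports, which is exactly the kind of constraint the proposed scheduling must work around.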
Pub Date: 2025-11-01, Epub Date: 2025-09-13, DOI: 10.1016/j.iot.2025.101731
Stefan Pedratscher , Zahra Najafabadi Samani , Juan Aznar Poveda , Thomas Fahringer , Marlon Etheredge , Abolfazl Younesi , Juan Jose Durillo Barrionuevo , Peter Thoman
With the growing volume of data generated by IoT devices and user-driven services, stream processing has become essential for handling continuous, real-time data. However, fluctuating workloads and the dynamic nature of data streams make it difficult to maintain consistent performance over time, requiring adaptive resource allocation and frequent configuration tuning. Running multiple data stream processing pipelines on shared resources further exacerbates the problem by increasing contention, leading to higher end-to-end latency and reduced performance stability. Most existing approaches focus on tuning individual configuration parameters in isolation and overlook interactions between concurrently running data pipelines. To address these limitations, we present STREAMLINE, a dynamic multi-layer auto-tuning framework designed for stream processing environments. STREAMLINE uses transformers to predict future workloads and an evolutionary algorithm to automatically tune configuration parameters. It also includes a resource-efficient scheduler that efficiently assigns operators to resources across a compute cluster. Our dynamic update mechanism minimizes downtime and preserves state during configuration parameter and scheduling changes. We evaluate STREAMLINE on the Grid’5000 testbed using real-time IoT and streaming benchmarks. Results show that STREAMLINE outperforms state-of-the-art methods, improving throughput, end-to-end latency, and CPU utilization by up to 4×, 10×, and 9×, respectively, while reducing costs by up to 10×.
{"title":"STREAMLINE: Dynamic and Resource-Efficient Auto-Tuning of Stream Processing Data Pipeline Ensembles","authors":"Stefan Pedratscher , Zahra Najafabadi Samani , Juan Aznar Poveda , Thomas Fahringer , Marlon Etheredge , Abolfazl Younesi , Juan Jose Durillo Barrionuevo , Peter Thoman","doi":"10.1016/j.iot.2025.101731","DOIUrl":"10.1016/j.iot.2025.101731","url":null,"abstract":"<div><div>With the growing volume of data generated by IoT devices and user-driven services, stream processing has become essential for handling continuous, real-time data. However, fluctuating workloads and the dynamic nature of data streams make it difficult to maintain consistent performance over time, requiring adaptive resource allocation and frequent configuration tuning. Running multiple data stream processing pipelines on shared resources further exacerbates the problem by increasing contention, leading to higher end-to-end latency and reduced performance stability. Most existing approaches focus on tuning individual configuration parameters in isolation and overlook interactions between concurrently running data pipelines. To address these limitations, we present STREAMLINE, a dynamic multi-layer auto-tuning framework designed for stream processing environments. STREAMLINE uses transformers to predict future workloads and an evolutionary algorithm to automatically tune configuration parameters. It also includes a resource-efficient scheduler that efficiently assigns operators to resources across a compute cluster. Our dynamic update mechanism minimizes downtime and preserves state during configuration parameter and scheduling changes. We evaluate STREAMLINE on the Grid’5000 testbed using real-time IoT and streaming benchmarks. 
Results show that STREAMLINE outperforms state-of-the-art methods, improving throughput, end-to-end latency, and CPU utilization by up to 4<span><math><mo>×</mo></math></span>, 10<span><math><mo>×</mo></math></span>, and 9<span><math><mo>×</mo></math></span>, respectively, while reducing costs by up to 10<span><math><mo>×</mo></math></span>.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"34 ","pages":"Article 101731"},"PeriodicalIF":7.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
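STREAMLINE's abstract mentions an evolutionary algorithm that automatically tunes configuration parameters; the concrete operators are not given in this listing, so the following is only a generic (1+1) evolutionary-tuning sketch over integer parameters, with a stand-in latency model (`mock_latency`, `parallelism`, `buffer_kb` are invented names for illustration).

```python
import random

def tune(eval_latency, bounds, iters=300, seed=0):
    """Minimal (1+1) evolutionary tuner: mutate one parameter per step and
    keep the child if its score is no worse. `bounds` maps parameter name
    to (lo, hi); `eval_latency` scores a configuration (lower is better)."""
    rng = random.Random(seed)
    cfg = {k: rng.randint(lo, hi) for k, (lo, hi) in bounds.items()}
    best = eval_latency(cfg)
    for _ in range(iters):
        child = dict(cfg)
        k = rng.choice(list(bounds))  # mutate a single parameter
        lo, hi = bounds[k]
        step = max(1, (hi - lo) // 8)
        child[k] = min(hi, max(lo, child[k] + rng.choice([-1, 1]) * step))
        score = eval_latency(child)
        if score <= best:
            cfg, best = child, score
    return cfg, best

# Stand-in latency model: both under- and over-provisioning hurt
# (contention on shared resources), mimicking the pipeline-ensemble setting.
def mock_latency(cfg):
    return abs(cfg["parallelism"] - 8) + abs(cfg["buffer_kb"] - 64) / 16

bounds = {"parallelism": (1, 32), "buffer_kb": (16, 512)}
```

A real tuner would evaluate configurations against the live pipeline (or a workload forecast, as STREAMLINE's transformer predictor does) rather than a closed-form model.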
Pub Date: 2025-11-01, Epub Date: 2025-10-31, DOI: 10.1016/j.iot.2025.101818
A. Villafranca, Maria-Dolores Cano
This paper delivers three core innovations for Internet of Things (IoT) intrusion detection in sustainable agriculture: (1) a unified preprocessing pipeline integrating StandardScaler, undersampling, SMOTE, Tomek Links, and 10-fold cross-validation, (2) a lightweight, dataset-agnostic DNN architecture (256–128–64–Softmax) achieving ≥97 % accuracy without per-dataset tuning, and (3) a curated benchmark of 18 IoT-IDS datasets including the Farm-Flow greenhouse trace with full metadata. Our model achieved 99.14 % average accuracy across 18 datasets, including 99.25 % precision on BoT-IoT, 99.99 % on CICIDS2017, and perfect 100 % scores on N-BaIoT, Car-Hacking, and CIC-IoT2022, demonstrating robust intrusion detection while maintaining only ∼1.2 M parameters for resource-constrained deployment. Experimental results demonstrate that our Deep Neural Network (DNN) model, through automatic hierarchical feature extraction, outperforms specialized architectures in heterogeneous scenarios while reducing reliance on manual feature engineering. Although Machine Learning (ML)-based methods and distributed approaches offer advantages in privacy and local processing, they face computational constraints and synchronization challenges that limit scalability. These findings confirm the effectiveness and adaptability of the proposed model, establishing it as a reliable and scalable solution for enhancing IoT network security in real-world deployments. Modern greenhouses, dairy farms, and cold-chain facilities, where cyber-attacks threaten water and energy efficiency gains, benefit from this edge-deployable approach that restores security and trustworthiness to smart-agriculture IoT networks.
{"title":"A lightweight edge-DL intrusion detection system for IoT sustainable smart-agriculture","authors":"A. Villafranca, Maria-Dolores Cano","doi":"10.1016/j.iot.2025.101818","DOIUrl":"10.1016/j.iot.2025.101818","url":null,"abstract":"<div><div>This paper delivers three core innovations for Internet of Things (IoT) intrusion detection in sustainable agriculture: (1) a unified preprocessing pipeline integrating StandardScaler, undersampling, SMOTE, Tomek Links, and 10-fold cross-validation, (2) a lightweight, dataset-agnostic DNN architecture (256–128–64–Softmax) achieving ≥97 % accuracy without per-dataset tuning, and (3) a curated benchmark of 18 IoT-IDS datasets including the Farm-Flow greenhouse trace with full metadata. Our model achieved 99.14 % average accuracy across 18 datasets, including 99.25 % precision on BoT-IoT, 99.99 % on CICIDS2017, and perfect 100 % scores on N-BaIoT, Car-Hacking, and CIC-IoT2022, demonstrating robust intrusion detection while maintaining only ∼1.2 M parameters for resource-constrained deployment. Experimental results demonstrate that our Deep Neural Network (DNN) model, through automatic hierarchical feature extraction, outperforms specialized architectures in heterogeneous scenarios while reducing reliance on manual feature engineering. Although Machine Learning (ML)-based methods and distributed approaches offer advantages in privacy and local processing, they face computational constraints and synchronization challenges that limit scalability. These findings confirm the effectiveness and adaptability of the proposed model, establishing it as a reliable and scalable solution for enhancing IoT network security in real-world deployments. 
Modern greenhouses, dairy farms, and cold-chain facilities, where cyber-attacks threaten water and energy efficiency gains, benefit from this edge-deployable approach that restores security and trustworthiness to smart-agriculture IoT networks.</div></div>","PeriodicalId":29968,"journal":{"name":"Internet of Things","volume":"34 ","pages":"Article 101818"},"PeriodicalIF":7.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145465303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
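The preprocessing pipeline above combines undersampling, SMOTE, and Tomek Links to balance the IoT-IDS classes; the paper presumably uses imbalanced-learn's implementations. As a dependency-free illustration of SMOTE's core rule only (synthetic minority points interpolated toward a random one of the k nearest minority neighbors), one might sketch:

```python
import math
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Sketch of SMOTE's interpolation rule: synthetic = x + u * (neighbor - x)
    for u ~ U[0, 1] and a random one of x's k nearest minority neighbors.
    `minority` is a list of equal-length feature tuples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbors)
        u = rng.random()
        synthetic.append(tuple(xi + u * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic
```

Each synthetic point lies on a segment between two real minority samples, which is why SMOTE densifies the minority class without duplicating rows; Tomek Links removal (not shown) then deletes majority points that form cross-class nearest-neighbor pairs, cleaning the class boundary before the 256-128-64-Softmax DNN is trained.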