The use of data centers is ubiquitous, as they support multiple technologies across domains for storing, processing, and disseminating data. IoT applications utilize both cloud and edge data centers, depending on the nature of the workload. Owing to the stringent latency requirements of IoT applications, workloads are run on hardware accelerators such as FPGAs and GPUs for faster execution. The introduction of such hardware, alongside existing variations in the hardware and software configurations of the machines in the data center, increases the heterogeneity of the infrastructure. Optimal job performance requires that task placement constraints be satisfied. This is accomplished through constraint-aware scheduling, in which tasks are scheduled on worker nodes with appropriate machine configurations. Placement constraints limit the number of suitable resources available to run a task, leading to queuing delays. As federated schedulers have gained prominence for their speed and scalability, we assess the performance of two such schedulers, Megha and Pigeon, in a constraint-aware context. We extend our previous work on Megha by comparing its performance with PigeonC, a constraint-aware version of the state-of-the-art federated scheduler Pigeon. Our experiments with synthetic and real-world cluster traces show that Megha reduces the 99th percentile of job response-time delays by a factor of 10 compared to PigeonC. We also describe enhancements made to Megha's architecture to improve its scheduling efficiency.
{"title":"Constraint-Aware Federated Scheduling for Data Center Workloads","authors":"Meghana Thiyyakat, Subramaniam Kalambur, Dinkar Sitaram","doi":"10.3390/iot4040023","DOIUrl":"https://doi.org/10.3390/iot4040023","url":null,"abstract":"The use of data centers is ubiquitous, as they support multiple technologies across domains for storing, processing, and disseminating data. IoT applications utilize both cloud data centers and edge data centers based on the nature of the workload. Due to the stringent latency requirements of IoT applications, the workloads are run on hardware accelerators such as FPGAs and GPUs for faster execution. The introduction of such hardware alongside existing variations in the hardware and software configurations of the machines in the data center, increases the heterogeneity of the infrastructure. Optimal job performance necessitates the satisfaction of task placement constraints. This is accomplished through constraint-aware scheduling, where tasks are scheduled on worker nodes with appropriate machine configurations. The presence of placement constraints limits the number of suitable resources available to run a task, leading to queuing delays. As federated schedulers have gained prominence for their speed and scalability, we assess the performance of two such schedulers, Megha and Pigeon, within a constraint-aware context. We extend our previous work on Megha by comparing its performance with a constraint-aware version of the state-of-the-art federated scheduler Pigeon, PigeonC. The results of our experiments with synthetic and real-world cluster traces show that Megha reduces the 99th percentile of job response time delays by a factor of 10 when compared to PigeonC. We also describe enhancements made to Megha’s architecture to improve its scheduling efficiency.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"343 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135392330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bowling is a target sport that is popular across all age groups, among both professional and amateur players. Delivering an accurate and consistent throw down the lane requires proper motion technique. Accordingly, this research presents a novel IoT Cloud-based system for providing real-time monitoring and coaching services to bowling athletes. The system includes two inertial measurement unit (IMU) sensors for capturing motion data, a mobile application, and a Cloud server for processing the data. First, the quality of each phase of a throw is assessed using a Dynamic Time Warping (DTW)-based algorithm. Second, an on-device technique is proposed to identify common bowling errors. Finally, an SVM classification model is employed to assess the skill level of the bowler. We recruited nine right-handed bowlers to perform 50 throws while wearing the two sensors and using the proposed system. The results of our experiments suggest that the proposed system can effectively and efficiently assess the quality of a throw, detect common bowling errors, and classify the skill level of the bowler.
{"title":"A Novel Internet of Things-Based System for Ten-Pin Bowling","authors":"Ilias Zosimadis, Ioannis Stamelos","doi":"10.3390/iot4040022","DOIUrl":"https://doi.org/10.3390/iot4040022","url":null,"abstract":"Bowling is a target sport that is popular among all age groups with professionals and amateur players. Delivering an accurate and consistent bowling throw into the lane requires the incorporation of motion techniques. Consequently, this research presents a novel IoT Cloud-based system for providing real-time monitoring and coaching services to bowling athletes. The system includes two inertial measurement units (IMUs) sensors for capturing motion data, a mobile application, and a Cloud server for processing the data. First, the quality of each phase of a throw is assessed using a Dynamic Time Warping (DTW)-based algorithm. Second, an on-device-level technique is proposed to identify common bowling errors. Finally, an SVM classification model is employed for assessing the skill level of bowler athletes. We recruited nine right-handed bowlers to perform 50 throws wearing the two sensors and using the proposed system. The results of our experiments suggest that the proposed system can effectively and efficiently assess the quality of the throw, detect common bowling errors, and classify the skill level of the bowler.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135810368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The falling cost of IoT cameras, the advancement of AI-based computer vision algorithms, and powerful hardware accelerators for deep learning have enabled the widespread deployment of surveillance cameras with the ability to automatically analyze streaming video feeds to detect events of interest. While streaming video analytics is currently largely performed in the cloud, edge computing has emerged as a pivotal component due to its advantages of low latency, reduced bandwidth, and enhanced privacy. However, a distinct gap persists between state-of-the-art computer vision algorithms and the successful practical implementation of edge-based streaming video analytics systems. This paper presents a comprehensive review of more than 30 research papers published over the last six years on IoT edge streaming video analytics (IE-SVA) systems. The papers are analyzed across 17 distinct dimensions. Unlike prior reviews, we examine each system holistically, identifying its strengths and weaknesses across diverse implementations. Our findings suggest that certain critical topics necessary for the practical realization of IE-SVA systems are not sufficiently addressed in current research. Based on these observations, we propose research trajectories across short-, medium-, and long-term horizons. Additionally, we explore trending topics in other computing areas that can significantly impact the evolution of IE-SVA systems.
{"title":"Internet-of-Things Edge Computing Systems for Streaming Video Analytics: Trails Behind and the Paths Ahead","authors":"Arun A. Ravindran","doi":"10.3390/iot4040021","DOIUrl":"https://doi.org/10.3390/iot4040021","url":null,"abstract":"The falling cost of IoT cameras, the advancement of AI-based computer vision algorithms, and powerful hardware accelerators for deep learning have enabled the widespread deployment of surveillance cameras with the ability to automatically analyze streaming video feeds to detect events of interest. While streaming video analytics is currently largely performed in the cloud, edge computing has emerged as a pivotal component due to its advantages of low latency, reduced bandwidth, and enhanced privacy. However, a distinct gap persists between state-of-the-art computer vision algorithms and the successful practical implementation of edge-based streaming video analytics systems. This paper presents a comprehensive review of more than 30 research papers published over the last 6 years on IoT edge streaming video analytics (IE-SVA) systems. The papers are analyzed across 17 distinct dimensions. Unlike prior reviews, we examine each system holistically, identifying their strengths and weaknesses in diverse implementations. Our findings suggest that certain critical topics necessary for the practical realization of IE-SVA systems are not sufficiently addressed in current research. Based on these observations, we propose research trajectories across short-, medium-, and long-term horizons. Additionally, we explore trending topics in other computing areas that can significantly impact the evolution of IE-SVA systems.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"19 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135266195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secret sharing schemes are widely used to protect data by breaking a secret into pieces and distributing them among the members of a party. In this paper, our objective is to produce a repairable ramp scheme that allows a lost share to be recovered with the help of a collection of other members. Repairable Threshold Schemes (RTSs) can be used in cloud storage and General Data Protection Regulation (GDPR) protocols. Secure and energy-efficient data transfer in sensor-based IoT systems can be built on ramp-type schemes, and similar schemes can be used to protect personal privacy and reinforce the security of electronic identification (eID) cards. Desmedt et al. introduced the concept of frameproofness in 2021, which motivated us to further improve our construction with respect to this framework. We take a graph-theoretic approach to the design, which allows a well-rounded, accessible presentation of the idea and a clear statement of our results. We also highlight the importance of secret sharing schemes for IoT applications, as they distribute the secret among several devices. Secret sharing schemes offer better security in lightweight IoT settings than symmetric-key encryption or AE schemes because they do not disclose the entire secret to a single device, but rather distribute it among several devices.
{"title":"IoT-Applicable Generalized Frameproof Combinatorial Designs","authors":"Bimal Kumar Roy, Anandarup Roy","doi":"10.3390/iot4030020","DOIUrl":"https://doi.org/10.3390/iot4030020","url":null,"abstract":"Secret sharing schemes are widely used to protect data by breaking the secret into pieces and sharing them amongst various members of a party. In this paper, our objective is to produce a repairable ramp scheme that allows for the retrieval of a share through a collection of members in the event of its loss. Repairable Threshold Schemes (RTSs) can be used in cloud storage and General Data Protection Regulation (GDPR) protocols. Secure and energy-efficient data transfer in sensor-based IoTs is built using ramp-type schemes. Protecting personal privacy and reinforcing the security of electronic identification (eID) cards can be achieved using similar schemes. Desmedt et al. introduced the concept of frameproofness in 2021, which motivated us to further improve our construction with respect to this framework. We introduce a graph theoretic approach to the design for a well-rounded and easy presentation of the idea and clarity of our results. We also highlight the importance of secret sharing schemes for IoT applications, as they distribute the secret amongst several devices. Secret sharing schemes offer superior security in lightweight IoT compared to symmetric key encryption or AE schemes because they do not disclose the entire secret to a single device, but rather distribute it among several devices.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136236419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The application of the Internet of Things is gaining momentum as advances in artificial intelligence rapidly deepen its integration. This has caused continuous shifts in the Internet of Things paradigm, with increasing levels of complexity. Consequently, researchers, practitioners, and governments face evolving challenges that make it more difficult to adapt. This is especially true in the education sector, which is the focus of this article. The overall purpose of this study is to explore the application of IoT and artificial intelligence in education and, more specifically, in learning. Our methodology follows four research questions. First, we report the results of a systematic literature review on the Internet of Intelligence of Things (IoIT) in education. Second, we develop a corresponding conceptual model. Third, we conduct an exploratory pilot survey of educators from around the world to gain insight into their knowledge and use of the Internet of Things in the classroom and their readiness to integrate the IoIT. Finally, we present the application of the IoITE conceptual model in teaching and learning through four use cases. Our review of publications shows that research on the IoITE is scarce, and even more so when considering its application to learning. Analysis of the survey results finds that educators generally lack the readiness to innovate with the Internet of Things in learning. The use cases highlight the possibilities of the IoITE and its potential to explore and exploit. Challenges are identified and discussed.
{"title":"Challenges and Opportunities in the Internet of Intelligence of Things in Higher Education—Towards Bridging Theory and Practice","authors":"Raafat George Saadé, Jun Zhang, Xiaoyong Wang, Hao Liu, Hong Guan","doi":"10.3390/iot4030019","DOIUrl":"https://doi.org/10.3390/iot4030019","url":null,"abstract":"The application of the Internet of Things is increasing in momentum as advances in artificial intelligence exponentially increase its integration. This has caused continuous shifts in the Internet of Things paradigm with increasing levels of complexity. Consequently, researchers, practitioners, and governments continue facing evolving challenges, making it more difficult to adapt. This is especially true in the education sector, which is the focus of this article. The overall purpose of this study is to explore the application of IoT and artificial intelligence in education and, more specifically, learning. Our methodology follows four research questions. We first report the results of a systematic literature review on the Internet of Intelligence of Things (IoIT) in education. Secondly, we develop a corresponding conceptual model, followed thirdly by an exploratory pilot survey conducted on a group of educators from around the world to get insights on their knowledge and use of the Internet of Things in their classroom, thereby providing a better understanding of issues, such as knowledge, use, and their readiness to integrate IoIT. We finally present the application of the IoITE conceptual model in teaching and learning through four use cases. Our review of publications shows that research in the IoITE is scarce. This is even more so if we consider its application to learning. Analysis of the survey results finds that educators, in general, are lacking in their readiness to innovate with the Internet of Things in learning. Use cases highlight IoITE possibilities and its potential to explore and exploit. Challenges are identified and discussed.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134913571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Things (IoT) and the metaverse are two rapidly evolving technologies that have the potential to shape the future of our digital world. The IoT refers to the network of physical devices, vehicles, buildings, and other objects that are connected to the internet and capable of collecting and sharing data. The metaverse, on the other hand, is a virtual world where users can interact with each other and with digital objects in real time. In this paper, we explore the intersection of the IoT and the metaverse and the opportunities and challenges that arise from their convergence. We examine how IoT devices can be integrated into the metaverse to create new and immersive experiences for users, and we analyse potential use cases and applications of this technology in industries such as healthcare, education, and entertainment. Additionally, we discuss the privacy, security, and ethical concerns that arise from the use of IoT devices in the metaverse. The survey is conducted through a combination of a literature review and a case study analysis. This review provides insights into the potential impact of the IoT and the metaverse on society and informs the development of future technologies in this field.
{"title":"Exploring the Confluence of IoT and Metaverse: Future Opportunities and Challenges","authors":"Rameez Asif, Syed Raheel Hassan","doi":"10.3390/iot4030018","DOIUrl":"https://doi.org/10.3390/iot4030018","url":null,"abstract":"The Internet of Things (IoT) and the metaverse are two rapidly evolving technologies that have the potential to shape the future of our digital world. IoT refers to the network of physical devices, vehicles, buildings, and other objects that are connected to the internet and capable of collecting and sharing data. The metaverse, on the other hand, is a virtual world where users can interact with each other and digital objects in real time. In this research paper, we aim to explore the intersection of the IoT and metaverse and the opportunities and challenges that arise from their convergence. We will examine how IoT devices can be integrated into the metaverse to create new and immersive experiences for users. We will also analyse the potential use cases and applications of this technology in various industries such as healthcare, education, and entertainment. Additionally, we will discuss the privacy, security, and ethical concerns that arise from the use of IoT devices in the metaverse. A survey is conducted through a combination of a literature review and a case study analysis. This review will provide insights into the potential impact of IoT and metaverse on society and inform the development of future technologies in this field.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135886218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the world becomes increasingly urbanized, the development of smart cities and the deployment of IoT applications will play an essential role in addressing urban challenges and shaping sustainable, resilient urban environments. However, there are also challenges to overcome, including privacy and security concerns and interoperability issues. Addressing these challenges requires collaboration between governments, industry stakeholders, and citizens to ensure the responsible and equitable implementation of IoT technologies in smart cities. The IoT offers a vast array of possibilities for smart city applications, enabling the integration of various devices, sensors, and networks to collect and analyze data in real time. These applications span different sectors, including transportation, energy management, waste management, public safety, and healthcare. By leveraging IoT technologies, cities can optimize their infrastructure, enhance resource allocation, and improve the quality of life of their citizens. In this paper, eight global smart city models are proposed to guide the development and implementation of IoT applications in smart cities. These models provide frameworks and standards that city planners and stakeholders can use to design and deploy IoT solutions effectively. We provide a detailed evaluation of these models based on nine smart city evaluation metrics. The challenges of implementing smart cities are discussed, and recommendations for overcoming them are provided.
{"title":"Global Models of Smart Cities and Potential IoT Applications: A Review","authors":"A. Hassebo, M. Tealab","doi":"10.3390/iot4030017","DOIUrl":"https://doi.org/10.3390/iot4030017","url":null,"abstract":"As the world becomes increasingly urbanized, the development of smart cities and the deployment of IoT applications will play an essential role in addressing urban challenges and shaping sustainable and resilient urban environments. However, there are also challenges to overcome, including privacy and security concerns, and interoperability issues. Addressing these challenges requires collaboration between governments, industry stakeholders, and citizens to ensure the responsible and equitable implementation of IoT technologies in smart cities. The IoT offers a vast array of possibilities for smart city applications, enabling the integration of various devices, sensors, and networks to collect and analyze data in real time. These applications span across different sectors, including transportation, energy management, waste management, public safety, healthcare, and more. By leveraging IoT technologies, cities can optimize their infrastructure, enhance resource allocation, and improve the quality of life for their citizens. In this paper, eight smart city global models have been proposed to guide the development and implementation of IoT applications in smart cities. These models provide frameworks and standards for city planners and stakeholders to design and deploy IoT solutions effectively. We provide a detailed evaluation of these models based on nine smart city evaluation metrics. The challenges to implement smart cities have been mentioned, and recommendations have been stated to overcome these challenges.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78952545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The intrusion detection system (IDS) is a promising technology for ensuring security against cyber-attacks in Internet of Things networks. In a conventional IDS, anomaly detection and feature extraction are performed by two different models. In this paper, we propose a new integrated model based on a deep autoencoder (AE) for anomaly detection and feature extraction. First, the AE is trained on normal network traffic and later used to detect anomalies. Then, the trained AE model is employed again to extract useful low-dimensional features from anomalous data without the need for a separate feature extraction training stage, which is required by other methods such as principal component analysis (PCA) and linear discriminant analysis (LDA). After that, the extracted features are used by a machine learning (ML) or deep learning (DL) classifier to determine the type of attack (multi-classification). The performance of the proposed unified approach was evaluated on the real IoT datasets N-BaIoT and MQTTset, which contain normal and malicious network traffic. The proposed AE was compared with other popular anomaly detection techniques, such as the one-class support vector machine (OC-SVM) and isolation forest (iForest), in terms of performance metrics (accuracy, precision, recall, and F1-score) and execution time. The AE was found to identify attacks better than OC-SVM and iForest, with fast detection times. The proposed feature extraction method aims to reduce computational complexity while preserving the performance of the multi-classifier models as much as possible compared to their counterparts. We tested the model with different ML/DL classifiers, such as decision tree, random forest, deep neural network (DNN), convolutional neural network (CNN), and a hybrid CNN with long short-term memory (LSTM). The experimental results showed the capability of the proposed model to simultaneously detect anomalous events and reduce the dimensionality of the data.
{"title":"Deep Autoencoder-Based Integrated Model for Anomaly Detection and Efficient Feature Extraction in IoT Networks","authors":"K. Alaghbari, Heng-Siong Lim, M. Saad, Yik Seng Yong","doi":"10.3390/iot4030016","DOIUrl":"https://doi.org/10.3390/iot4030016","url":null,"abstract":"The intrusion detection system (IDS) is a promising technology for ensuring security against cyber-attacks in internet-of-things networks. In conventional IDS, anomaly detection and feature extraction are performed by two different models. In this paper, we propose a new integrated model based on deep autoencoder (AE) for anomaly detection and feature extraction. Firstly, AE is trained based on normal network traffic and used later to detect anomalies. Then, the trained AE model is employed again to extract useful low-dimensional features for anomalous data without the need for a feature extraction training stage, which is required by other methods such as principal components analysis (PCA) and linear discriminant analysis (LDA). After that, the extracted features are used by a machine learning (ML) or deep learning (DL) classifier to determine the type of attack (multi-classification). The performance of the proposed unified approach was evaluated on real IoT datasets called N-BaIoT and MQTTset, which contain normal and malicious network traffics. The proposed AE was compared with other popular anomaly detection techniques such as one-class support vector machine (OC-SVM) and isolation forest (iForest), in terms of performance metrics (accuracy, precision, recall, and F1-score), and execution time. AE was found to identify attacks better than OC-SVM and iForest with fast detection time. The proposed feature extraction method aims to reduce the computation complexity while maintaining the performance metrics of the multi-classifier models as much as possible compared to their counterparts. We tested the model with different ML/DL classifiers such as decision tree, random forest, deep neural network (DNN), conventional neural network (CNN), and hybrid CNN with long short-term memory (LSTM). The experiment results showed the capability of the proposed model to simultaneously detect anomalous events and reduce the dimensionality of the data.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75511592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the high flexibility and low deployment cost of UAVs, UAV-assisted data collection has become widespread in Internet of Things (IoT) systems. Meanwhile, the age of information (AoI) has been adopted as a key metric to evaluate the quality of the collected data. Most of the literature focuses on minimizing the age of all information. However, minimizing the overall AoI may lead to high costs and massive energy consumption, and not all types of data need to be updated at a high frequency. In this paper, we consider both the diversity of tasks in terms of their data update periods and the AoI of the collected sensing information. An efficient data collection method is proposed to maximize the system utility while ensuring the freshness of the collected information relative to the respective update periods. This problem is NP-hard. By decomposing it, we optimize the upload strategy of the sensor nodes at each time slot as well as the hovering positions and flight speeds of the UAVs. Simulation results show that our method ensures the relative freshness of all information and, with a single UAV, reduces the time-averaged AoI by 96.5%, 44%, 90.4%, and 26% compared to EMA, AOA, DROA, and DRL-eFresh, respectively.
{"title":"Efficient Sensing Data Collection with Diverse Age of Information in UAV-Assisted System","authors":"Yanhua Pei, Fen Hou, Guoying Zhang, Bin Lin","doi":"10.3390/iot4030015","DOIUrl":"https://doi.org/10.3390/iot4030015","url":null,"abstract":"With the high flexibility and low cost of the deployment of UAVs, the application of UAV-assisted data collection has become widespread in the Internet of Things (IoT) systems. Meanwhile, the age of information (AoI) has been adopted as a key metric to evaluate the quality of the collected data. Most of the literature generally focuses on minimizing the age of all information. However, minimizing the overall AoI may lead to high costs and massive energy consumption. In addition, not all types of data need to be updated highly frequently. In this paper, we consider both the diversity of different tasks in terms of the data update period and the AoI of the collected sensing information. An efficient data collection method is proposed to maximize the system utility while ensuring the freshness of the collected information relative to their respective update periods. This problem is NP-hard. With the decomposition, we optimize the upload strategy of sensor nodes at each time slot, as well as the hovering positions and flight speeds of UAVs. Simulation results show that our method ensures the relative freshness of all information and reduces the time-averaged AoI by 96.5%, 44%, 90.4%, and 26% when the number of UAVs is 1 compared to the corresponding EMA, AOA, DROA, and DRL-eFresh, respectively.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81414476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an in-depth, contextualized tutorial on the Agricultural IoT (Agri-IoT), covering fundamental concepts, an assessment of routing architectures and protocols, and performance optimization techniques, via a systematic survey and synthesis of the related literature. The negative impacts of climate change and a growing global population on food security, together with the threat of unemployment, have motivated the adoption of wireless sensor network (WSN)-based Agri-IoT as an indispensable underlying technology in precision agriculture and greenhouses to improve food production capacity and quality. However, most related Agri-IoT testbed solutions have failed to meet their performance expectations due to the lack of an in-depth, contextualized reference tutorial that provides a holistic overview of communication technologies, routing architectures, and performance optimization modalities based on users' expectations. Thus, although IoT applications are founded on a common idea, each use case (e.g., Agri-IoT) varies in its specific performance and user expectations as well as its technological, architectural, and deployment requirements. Likewise, the agricultural setting is a unique and hostile environment in which conventional IoT technologies do not apply, hence the need for this tutorial. Consequently, this tutorial makes the following contributions: (1) a systematic overview of the fundamental concepts, technologies, and architectural standards of WSN-based Agri-IoT; (2) an evaluation of the technical design requirements of a robust, location-independent, and affordable Agri-IoT; (3) a comprehensive survey of benchmark fault-tolerance techniques, communication standards, routing and medium access control (MAC) protocols, and WSN-based Agri-IoT testbed solutions; and (4) an in-depth case study on how to design a self-healing, energy-efficient, affordable, adaptive, stable, autonomous, cluster-based WSN-specific Agri-IoT from a proposed taxonomy of multi-objective optimization (MOO) metrics that can guarantee optimized network performance. Furthermore, this tutorial establishes new taxonomies of faults, architectural layers, and MOO metrics for cluster-based Agri-IoT (CA-IoT) networks, together with a three-tier objective framework with remedial measures for designing an efficient supervisory protocol for cluster-based Agri-IoT networks.
{"title":"A Tutorial on Agricultural IoT: Fundamental Concepts, Architectures, Routing, and Optimization","authors":"E. Effah, Ousmane Thiaré, A. Wyglinski","doi":"10.3390/iot4030014","DOIUrl":"https://doi.org/10.3390/iot4030014","url":null,"abstract":"This paper presents an in-depth contextualized tutorial on Agricultural IoT (Agri-IoT), covering the fundamental concepts, assessment of routing architectures and protocols, and performance optimization techniques via a systematic survey and synthesis of the related literature. The negative impacts of climate change and the increasing global population on food security and unemployment threats have motivated the adoption of the wireless sensor network (WSN)-based Agri-IoT as an indispensable underlying technology in precision agriculture and greenhouses to improve food production capacities and quality. However, most related Agri-IoT testbed solutions have failed to achieve their performance expectations due to the lack of an in-depth and contextualized reference tutorial that provides a holistic overview of communication technologies, routing architectures, and performance optimization modalities based on users’ expectations. Thus, although IoT applications are founded on a common idea, each use case (e.g., Agri-IoT) varies based on the specific performance and user expectations as well as technological, architectural, and deployment requirements. Likewise, the agricultural setting is a unique and hostile area where conventional IoT technologies do not apply, hence the need for this tutorial. Consequently, this tutorial addresses these via the following contributions: (1) a systematic overview of the fundamental concepts, technologies, and architectural standards of WSN-based Agri-IoT, (2) an evaluation of the technical design requirements of a robust, location-independent, and affordable Agri-IoT, (3) a comprehensive survey of the benchmarking fault-tolerance techniques, communication standards, routing and medium access control (MAC) protocols, and WSN-based Agri-IoT testbed solutions, and (4) an in-depth case study on how to design a self-healing, energy-efficient, affordable, adaptive, stable, autonomous, and cluster-based WSN-specific Agri-IoT from a proposed taxonomy of multi-objective optimization (MOO) metrics that can guarantee an optimized network performance. Furthermore, this tutorial established new taxonomies of faults, architectural layers, and MOO metrics for cluster-based Agri-IoT (CA-IoT) networks and a three-tier objective framework with remedial measures for designing an efficient associated supervisory protocol for cluster-based Agri-IoT networks.","PeriodicalId":6745,"journal":{"name":"2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT)","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82138491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}