This article presents an artificial intelligence (AI) framework to support Internet-of-Everything (IoE) applications over sixth-generation (6G) wireless networks. An integrated IoE-edge intelligence ecosystem is designed to address three problems: placing virtual machines (VMs) according to their popularity, optimizing computation offloading, and improving system reliability by predicting compute-node faults. The main objective is to improve performance by minimizing the worst end-to-end (e2e) delay and the percentage of requests in outage, and by enhancing reliability. The article focuses on the following main contributions: (i) a channel-aware federated learning (FL) approach to forecast the popularity of the VMs requested by IoE devices; (ii) an AI-based channel-condition forecasting module that benefits the FL process; (iii) a VM placement scheme based on VM popularity, together with an efficient task-allocation technique based on a modified version of auction theory (AT) and a suitable matching game; and (iv) improved system reliability through an echo state network (ESN) that runs in the background on each computation node to predict failures and trigger task migration in advance. Numerical results validate the effectiveness of the proposed strategy for IoE applications over 6G networks.
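As a rough illustration of the channel-aware FL idea in this abstract, the sketch below runs one federated round in which each device takes a local gradient step on a linear VM-popularity model and the server averages the updates, weighting clients by a channel-quality score. Everything here (the linear model, the weighting rule, all names) is an assumption for illustration, not the paper's actual algorithm.

```python
import numpy as np

def federated_round(global_w, client_data, channel_q, lr=0.1):
    """One simplified FedAvg round: each client fits a linear popularity
    model locally; the server averages updates weighted by channel quality
    (hypothetical weighting, not the paper's scheme)."""
    updates, weights = [], []
    for (X, y), q in zip(client_data, channel_q):
        w = global_w.copy()
        # one local gradient step on mean squared error
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        updates.append(w)
        weights.append(q)  # channel-aware client weight (assumption)
    weights = np.array(weights, dtype=float)
    weights /= weights.sum()
    return sum(wt * u for wt, u in zip(weights, updates))
```

A client with a poor forecast channel quality contributes less to the aggregated model, which is one plausible way a channel-condition forecasting module could feed into FL aggregation.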
"A Channel-aware FL Approach for Virtual Machine Placement in 6G Edge Intelligent Ecosystems," ACM Transactions on Internet of Things, pp. 1-20, 2023-02-17. DOI: 10.1145/3584705.
Adeola Bannis, Shijia Pan, Carlos Ruiz, John Shen, Hae Young Noh, Pei Zhang
IoT (Internet of Things) devices, such as network-enabled wearables, are carried by an increasing number of people throughout daily life. Information from multiple devices can be aggregated to gain insights into a person's behavior or status. For example, an elderly care facility could monitor patients for falls by combining fitness bracelet data with video of the entire class. For this aggregated data to be useful to each person, we need a multi-modality association between a device's physical ID (i.e., its location, the user holding it, its visual appearance) and its virtual ID (e.g., IP address or available services). Existing approaches to multi-modality association often require intentional interaction or a direct line of sight to the device, which is infeasible for a large number of users or when the device is obscured by clothing. We present IDIoT, a calibration-free passive sensing approach that fuses motion sensor information with camera footage of an area to estimate the body location of motion sensors carried by a user. We characterize results against three baselines to show how IDIoT's fusion methodology outperforms earlier IMU-vision fusion algorithms. From this characterization, we determine that IDIoT is more robust to errors, such as missing frames or miscalibration, that frequently occur in IMU-vision matching systems.
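A toy stand-in for the kind of IMU-vision association IDIoT performs: match each IMU trace to the camera track whose acceleration profile it correlates with best. The correlation-based matcher and all names are assumptions for illustration; the paper's actual fusion pipeline is more sophisticated.

```python
import numpy as np

def associate(imu_traces, cam_tracks, fps=30.0):
    """Match each IMU trace (acceleration magnitude over time) to the
    camera track whose motion it best correlates with. Hypothetical
    sketch, not IDIoT's algorithm."""
    def accel_mag(track):
        # second derivative of 2-D position -> acceleration magnitude
        acc = np.diff(track, n=2, axis=0) * fps * fps
        return np.linalg.norm(acc, axis=1)
    assignment = {}
    for dev_id, imu in imu_traces.items():
        scores = {}
        for person, track in cam_tracks.items():
            a = accel_mag(track)
            n = min(len(a), len(imu))
            scores[person] = np.corrcoef(imu[:n], a[:n])[0, 1]
        # assign the device to the best-correlated tracked person
        assignment[dev_id] = max(scores, key=scores.get)
    return assignment
```

Because the matching is purely statistical, it needs no calibration or line of sight to the sensor itself, which mirrors the passive, calibration-free property claimed above.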
"IDIoT: Multimodal Framework for Ubiquitous Identification and Assignment of Human-carried Wearable Devices," ACM Transactions on Internet of Things, pp. 1-25, 2023-01-12. DOI: 10.1145/3579832.
Roman Trüb, Reto Da Forno, Andreas Biri, J. Beutel, L. Thiele
In many real-world wireless IoT networks, the application dictates the locations of the nodes, and the link characteristics are therefore inhomogeneous. Furthermore, in many scenarios nodes can reach the Internet-attached gateway only via multiple hops. If an energy-efficient short-range modulation scheme is used, nodes that are reachable only via high-path-loss links cannot communicate. Using a more energy-demanding long-range modulation allows connecting more nodes but is inefficient for nodes that are easily reachable via low-path-loss links. Combining multiple modulations is challenging, as low-power radios usually support only a single modulation at a time. In this article, we present the Long-Short-Range (LSR) protocol, which supports low-power multi-hop communication using multiple modulations and is suited to networks with inhomogeneous link characteristics. To reduce the inherent redundancy of long-range modulations, we present a method to determine the connectivity graph of the network during regular data communication without adding significant overhead. In simulations, we show that LSR significantly reduces power consumption in many scenarios compared to a state-of-the-art multi-hop communication protocol using a single long-range modulation. We demonstrate the applicability of LSR with an implementation on real hardware and a testbed with long-range links.
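The per-link trade-off LSR navigates can be sketched as picking the lowest-energy modulation whose link budget still covers a link's measured path loss. The modulation names and numbers below are illustrative assumptions, not values from the paper.

```python
def pick_modulation(path_loss_db, modulations):
    """Choose the cheapest (lowest-energy) modulation whose link budget
    still covers the measured path loss; None if the node is unreachable.
    Illustrative sketch, not the LSR protocol logic."""
    feasible = [m for m in modulations if m["budget_db"] >= path_loss_db]
    if not feasible:
        return None  # node unreachable with any available modulation
    return min(feasible, key=lambda m: m["energy_mj_per_byte"])

# Hypothetical modulation table (budgets and energy costs are made up)
MODULATIONS = [
    {"name": "FSK-short-range", "budget_db": 110, "energy_mj_per_byte": 0.02},
    {"name": "LoRa-SF7",        "budget_db": 130, "energy_mj_per_byte": 0.15},
    {"name": "LoRa-SF11",       "budget_db": 145, "energy_mj_per_byte": 1.20},
]
```

Nodes on low-path-loss links get the efficient short-range modulation, while distant nodes remain reachable via the costlier long-range one, which is the core intuition behind mixing modulations in one network.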
"LSR: Energy-Efficient Multi-Modulation Communication for Inhomogeneous Wireless IoT Networks," ACM Transactions on Internet of Things, pp. 1-36, 2023-01-10. DOI: 10.1145/3579366.
Farooq Dar, M. Liyanage, Marko Radeta, Zhigang Yin, Agustin Zuniga, Sokol Kosta, S. Tarkoma, P. Nurmi, Huber Flores
Underwater environments are emerging as a new frontier for data science thanks to an increase in deployments of underwater sensor technology. The challenges of operating computing equipment underwater, combined with a lack of high-speed communication technology covering most aquatic areas, mean that there is a significant delay between the collection and the analysis of data. This in turn limits the scale and complexity of the applications that can operate on these data. In this article, we develop underwater fog computing support using low-cost micro-clouds and demonstrate how they can deliver cost-effective support for data-heavy underwater applications. We develop a proof-of-concept micro-cloud prototype and use it to perform extensive benchmarks that evaluate the suitability of underwater micro-clouds for diverse underwater data science scenarios. We conduct rigorous tests in both controlled and field deployments, in river and sea waters. We also address technical challenges in enabling underwater fogs, evaluating the performance of different communication interfaces and demonstrating how accelerometers can be used to estimate the likelihood of communication failures and to determine which communication interface to use. Our work offers a cost-effective way to increase the scale and complexity of underwater data science applications and demonstrates how off-the-shelf devices can be adopted for this purpose.
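The accelerometer-driven interface choice mentioned above might look like the following sketch, where high motion variance (e.g., rough water) triggers a fall-back to a more robust link. The threshold, the interface names, and the variance-based rule are all assumptions for illustration, not the paper's method.

```python
import statistics

def choose_interface(accel_samples, threshold=0.5):
    """Pick a communication interface from recent accelerometer samples:
    high variance suggests conditions likely to disrupt the faster link,
    so fall back to the more robust one. Hypothetical rule and names."""
    var = statistics.pvariance(accel_samples)
    return "acoustic" if var > threshold else "wifi"
```

The appeal of such a heuristic is that it uses a sensor the platform already carries to anticipate failures before a transfer is attempted, rather than discovering them through timeouts.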
"Upscaling Fog Computing in Oceans for Underwater Pervasive Data Science Using Low-Cost Micro-Clouds," ACM Transactions on Internet of Things, pp. 1-29, 2022-12-08. DOI: 10.1145/3575801.
IoT systems based on Digital Twins (DTs), virtual copies of physical objects and systems, can be very effective in enabling data-driven services and promoting better control and decisions, in particular through distributed approaches in which cloud and edge computing cooperate effectively. In this context, digital twins deployed at the edge represent a new strategic element for designing a new wave of distributed cyber-physical applications. Existing approaches generally focus on fragmented, domain-specific, monolithic solutions and are mainly associated with model-driven, simulative, or descriptive visions. The idea of extending the role of DTs to support last-mile digitalization and interoperability through a set of general-purpose, well-defined properties and capabilities is still under-investigated. In this paper, we present the novel Edge Digital Twins (EDT) architectural model and its implementation, which enables the lightweight replication of physical devices and provides an efficient digital abstraction layer supporting the autonomous and standard collaboration of things and services. We model the core capabilities with respect to the state of the art, and we present the software architecture and a prototype implementation. Extensive experimental analysis shows the obtained performance in multiple IoT application contexts and compares it with that of state-of-the-art approaches.
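A minimal sketch of the digital-twin abstraction this kind of model provides: an edge-side object that mirrors device state and tracks synchronization freshness. The class name, methods, and freshness rule are illustrative assumptions, not the EDT API.

```python
import time

class EdgeDigitalTwin:
    """Minimal digital-twin sketch: mirrors a physical device's state for
    edge applications and tracks how fresh the mirrored state is.
    Hypothetical interface, not the paper's implementation."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.state = {}
        self.last_sync = None

    def on_device_update(self, telemetry: dict):
        # called whenever the physical device pushes new telemetry
        self.state.update(telemetry)
        self.last_sync = time.monotonic()

    def is_fresh(self, max_age_s=5.0):
        # applications can check synchronization before trusting the state
        return (self.last_sync is not None
                and time.monotonic() - self.last_sync <= max_age_s)
```

Applications then read `state` through the twin instead of querying the device directly, which is the digital abstraction layer the abstract describes.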
Marco Picone, M. Mamei, F. Zambonelli. "A Flexible and Modular Architecture for Edge Digital Twin: Implementation and Evaluation," ACM Transactions on Internet of Things, pp. 1-32, 2022-12-06. DOI: 10.1145/3573206.
Klervie Toczé, Ali J. Fahs, G. Pierre, S. Nadjm-Tehrani
Deciding where to handle services and tasks, and provisioning an adequate amount of computing resources for this handling, is a central challenge of edge computing systems. Moreover, latency-sensitive services constrain the type and location of the edge devices that can provide the needed resources. When available resources are scarce, some resource allocation requests may be denied. In this work, we propose the VioLinn system to tackle the joint problems of task placement, service placement, and edge device provisioning. Latency-sensitive services are handled through proximity-aware algorithms that keep tasks close to the end user. Moreover, the concept of a spare edge device is introduced to handle sudden load variations in time and space without continuous overprovisioning. Several spare-device selection algorithms are proposed, with different cost/performance tradeoffs. Evaluations are performed both in a Kubernetes-based testbed and in simulations, and show the benefit of using spare devices for handling localized load spikes with higher quality of service (QoS) and lower computing resource usage. The study of the different algorithms shows that this increase in QoS can be achieved with different tradeoffs between cost and performance.
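The placement problem described above can be caricatured as a greedy proximity-aware placement that prefers regular edge devices and falls back to spares when the regular ones are saturated. All field names and the distance-as-latency proxy are assumptions for illustration, not VioLinn's algorithms.

```python
def place_task(task_location, devices, max_latency_ms):
    """Greedy proximity-aware placement: pick the closest regular device
    with free capacity; fall back to a spare device if regular ones are
    full; deny the request if nothing qualifies. Illustrative sketch."""
    def latency(dev):
        # Euclidean distance as a crude latency proxy (assumption)
        dx = dev["x"] - task_location[0]
        dy = dev["y"] - task_location[1]
        return (dx * dx + dy * dy) ** 0.5
    candidates = sorted(
        (d for d in devices if latency(d) <= max_latency_ms),
        key=lambda d: (d["spare"], latency(d)))  # regular first, then spare
    for dev in candidates:
        if dev["load"] < dev["capacity"]:
            dev["load"] += 1
            return dev["name"]
    return None  # request denied: resources are scarce
```

The spare devices absorb localized load spikes without being counted into the steady-state provisioning, which is the intuition behind avoiding continuous overprovisioning.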
"VioLinn: Proximity-aware Edge Placement with Dynamic and Elastic Resource Provisioning," ACM Transactions on Internet of Things, pp. 1-31, 2022-12-05. DOI: 10.1145/3573125.
The Device-as-a-Service (DaaS) Internet of Things (IoT) business model enables distributed IoT devices to sell collected data to other devices, paving the way for machine-to-machine (M2M) economy applications. Cryptocurrencies are widely used by IoT devices to handle settlement and payment in the M2M economy. However, the cryptocurrency market, which lacks effective supervision, has fluctuated wildly in the past few years. These fluctuations are breeding grounds for arbitrage in IoT data trading. A practical cryptocurrency market supervision framework is therefore imperative to ensure that IoT data trading completes safely and fairly. The difficulty lies in combining unlabeled daily trading data with supervision strategies to punish abnormal users who disrupt the IoT data trading market. In this article, we propose a closed-loop hybrid supervision framework based on unsupervised anomaly detection to solve this problem. Its core is a set of multi-modal unsupervised anomaly detection methods on trading prices that identify malicious users. We then design a dedicated three-level control strategy to defend against various abnormal behaviors, according to the detection results. Furthermore, to guarantee the reliability of this framework, we evaluate the detection rate, accuracy, precision, and time consumption of the single-modal and multi-modal detection methods and of the baseline algorithm Adaptive KDE [19]. Finally, we establish an effective prototype supervision framework. Extensive evaluations show that our framework greatly reduces IoT data trading risks and losses.
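One simple unsupervised detector of the kind such a framework could build on is a rolling z-score over trading prices. The window size and threshold below are assumptions, and the paper combines several modalities rather than a single price signal.

```python
import statistics

def flag_anomalies(prices, window=5, z_thresh=3.0):
    """Rolling z-score detector: flag a price that deviates strongly
    from the preceding window. One basic unsupervised method, used here
    only to illustrate the idea of anomaly detection on trading prices."""
    flags = []
    for i, p in enumerate(prices):
        hist = prices[max(0, i - window):i]
        if len(hist) < 2:
            flags.append(False)  # not enough history to judge
            continue
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist)
        flags.append(sd > 0 and abs(p - mu) > z_thresh * sd)
    return flags
```

Because the method needs no labels, it fits the setting described above, where daily trading data arrive unlabeled and abnormal users must still be identified.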
Liushun Zhao, Deke Guo, Junjie Xie, Lailong Luo, Yulong Shen. "A Closed-loop Hybrid Supervision Framework of Cryptocurrency Transactions for Data Trading in IoT," ACM Transactions on Internet of Things, pp. 1-26, 2022-12-05. DOI: 10.1145/3568171.
Mart Lubbers, P. Koopman, Adrian Ramsingh, Jeremy Singer, P. Trinder
Internet of Things (IoT) software is notoriously complex, conventionally comprising multiple tiers. Traditionally, an IoT developer must use multiple programming languages and ensure that the components interoperate correctly. A novel alternative is to use a single tierless language whose compiler generates the code for each component and ensures their correct interoperation. We report a systematic comparative evaluation of two tierless language technologies for IoT stacks: one for resource-rich sensor nodes (Clean with iTask) and one for resource-constrained sensor nodes (Clean with iTask and mTask). The evaluation is based on four implementations of a typical smart campus application: two tierless and two tiered, Python-based. (1) We show that tierless languages have the potential to significantly reduce the development effort for IoT systems, requiring 70% less code than the tiered implementations. Careful analysis attributes this code reduction to reduced interoperation (e.g., two embedded domain-specific languages and one paradigm versus seven languages and two paradigms), automatically generated distributed communication, and powerful IoT programming abstractions. (2) We show that tierless languages have the potential to significantly improve the reliability of IoT systems, describing how Clean iTask/mTask maintains type safety, provides higher-order failure management, and improves maintainability. (3) We report the first comparison of a tierless IoT codebase for resource-rich sensor nodes with one for resource-constrained sensor nodes: the comparison shows that they have similar code size (within 7%) and functional structure. (4) We present the first comparison of two tierless IoT languages, one for resource-rich sensor nodes and the other for resource-constrained sensor nodes.
"Could Tierless Languages Reduce IoT Development Grief?," ACM Transactions on Internet of Things, pp. 1-35, 2022-11-30. DOI: 10.1145/3572901.
Wouter Moedt, R. Bernsteiner, M. Hall, Ann L. Fruhling
Worldwide spending on Internet of Things (IoT) applications is forecast to surpass $1 trillion by 2022. To stay competitive in this growing technological industry segment, lowering costs while increasing productivity and shortening time-to-market will become increasingly important. Adopting Agile Software Development practices for IoT projects may provide this competitive advantage, as it enables organizations to respond to change while being dynamic and innovative. Applying a mixed-methods approach, we surveyed and interviewed agile IoT practitioners around the world and from diverse industries. Our study recommends that Agile Software Development team makeup, practices, and methods be tailored to the specific industry, culture, people, and IT application of an organization. People play an important role in the success of agile projects; therefore, our research focuses on identifying the critical attributes of agile teams to maximize success. Our study identified five critical agile practices (Collective Code Ownership, Continuous Integration, Single Team, Dedicated Customer, and Sprint Planning) and found that both technical and soft skills are essential for successful IoT development.
"Enhancing IoT Project Success through Agile Best Practices", ACM Transactions on Internet of Things, pp. 1-31, 2022-10-27. doi:10.1145/3568170
Absar-Ul-Haque Ahmar, Emekcan Aras, T. D. Nguyen, Sam Michiels, W. Joosen, D. Hughes
Low-power wide-area networks enable large-scale deployments of low-power wireless devices. LoRaWAN is a long-range wireless technology that has emerged as a low-power, low data rate solution for Internet of Things applications. Although LoRaWAN provides a low-power and cost-efficient networking solution, recent literature shows that it performs poorly in terms of reliability and security in dense deployments due to the uncoordinated (ALOHA-based) nature of its MAC (medium access control) protocol. Furthermore, LoRaWAN is not robust against selective jamming attacks. This article proposes CRAM: a time-synchronized cryptographic frequency-hopping MAC protocol designed for the LoRa physical layer. CRAM reduces contention by exploiting the available frequency space fairly and maximizes the entropy of the channel-hopping algorithm. We develop a large physical testbed and a simulator to thoroughly evaluate the proposed protocol. Our evaluations show that, compared to the standard LoRaWAN protocol, CRAM significantly improves reliability and scalability, increases channel utilization, and makes selective jamming difficult to perform.
"Design of a Robust MAC Protocol for LoRa", ACM Transactions on Internet of Things, pp. 1-25, 2022-09-16. doi:10.1145/3557048
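The CRAM abstract hinges on cryptographic channel hopping: if the channel for each time slot is derived from a shared secret, a jammer without the key cannot predict where the next transmission will land. The sketch below illustrates that general idea only; it is not CRAM's published algorithm, and the key size, slot encoding, and channel count (8 uplink channels, as in common EU868 LoRaWAN plans) are illustrative assumptions.

```python
import hashlib
import hmac

NUM_CHANNELS = 8  # illustrative: common EU868 LoRaWAN uplink channel count


def hop_channel(network_key: bytes, device_id: bytes, slot: int) -> int:
    """Derive a pseudo-random channel index for a given time slot.

    HMAC-SHA256 over (device_id, slot) keyed with the shared network key
    yields a value that is unpredictable without the key but identical
    for sender and receiver, so both hop to the same channel each slot.
    """
    msg = device_id + slot.to_bytes(8, "big")
    digest = hmac.new(network_key, msg, hashlib.sha256).digest()
    # Reduce the first 4 digest bytes to the channel space.
    return int.from_bytes(digest[:4], "big") % NUM_CHANNELS
```

Because HMAC output is close to uniform, channels are used with near-equal frequency over many slots, which is the fairness/entropy property the abstract attributes to CRAM's hopping scheme.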