Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9965063
JSEVAsync: An Asynchronous Event-based Framework to Energy Saving on IoT Devices
Fernando L. Oliveira, J. Mattos
Internet of Things (IoT) devices are typically constrained in terms of processing, memory, and energy consumption. Energy consumption is a critical aspect of these devices: it is heavily impacted by how programs are written, and the impact becomes even more evident in interpreted languages, which naturally demand more resources. Embedded software development commonly uses Time-Triggered (TT) and Event-Triggered (ET) architectures to design embedded projects. However, the TT strategy can consume more energy due to polling; in contrast, the ET approach can be energy-efficient but cannot deal with multiple events. This paper introduces JSEVAsync, a framework that helps developers design JavaScript applications for IoT devices by combining the best parts of the TT and ET architectures. The approach uses JavaScript's non-blocking concept as a development interface to structure algorithms into asynchronous events. To validate it, we compare C- and JavaScript-based applications and analyze the results from an energy consumption perspective. We found that code written with JSEVAsync can be up to 21% more energy efficient than the traditional method. Moreover, JavaScript can improve design-time aspects such as readability, maintainability, and code reuse.
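The TT-versus-ET trade-off the abstract describes can be illustrated with a small sketch (in Python's asyncio rather than JavaScript, and not JSEVAsync's actual API): the time-triggered loop wakes up on a fixed period whether or not anything happened, while the event-triggered loop sleeps until an event arrives, which is the non-blocking structure the framework borrows from JavaScript's event loop.

```python
import asyncio

# Hypothetical sensor: in TT style we poll it on a fixed period whether or
# not anything changed, burning wake-ups (and energy).
def read_sensor():
    return 42

async def time_triggered(samples, period_s=0.01):
    readings = []
    for _ in range(samples):
        readings.append(read_sensor())  # poll unconditionally
        await asyncio.sleep(period_s)
    return readings

# ET style: the handler sleeps until an event fires and runs as a
# non-blocking asynchronous callback.
async def event_triggered(event_queue):
    readings = []
    while True:
        item = await event_queue.get()  # sleep until an event arrives
        if item is None:                # sentinel: no more events
            break
        readings.append(item)
    return readings

async def main():
    tt = await time_triggered(3, period_s=0)
    q = asyncio.Queue()
    for v in (1, 2, 3):
        q.put_nowait(v)
    q.put_nowait(None)
    et = await event_triggered(q)
    return tt, et

tt, et = asyncio.run(main())
print(tt, et)  # [42, 42, 42] [1, 2, 3]
```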
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9965097
Load Balancing Based on Multimedia Task Division for Reactive WSNs: Case Study for Pest Management
Weslen Souza, L. Brisolara, P. Ferreira
Wireless Sensor Networks (WSNs) have been widely used for monitoring and data collection. Technological advances have allowed the integration of multimedia devices into these networks, giving rise to new applications. The nodes that make up these networks are commonly deployed in outdoor environments where they are powered exclusively by batteries, so the network lifetime depends on the charge of those batteries. To maximize the useful life of the network as a whole, load balancing techniques aim to promote more homogeneous energy consumption across all nodes, preventing a few nodes from working excessively and dying prematurely. Most studies in the literature on reactive WSNs only address networks with overlapping sensing, deciding which of the nodes that detected the same event should process it. This work addresses networks without sensing overlap and proposes a load balancing strategy based on task division: an event is split into several smaller multimedia processing subtasks, which are distributed to neighboring nodes able to perform the processing. Through experiments, we show improvements of around 29% achieved by the proposed approach compared to a WSN that does not use any load balancing technique.
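The task-division idea can be sketched as follows (an illustrative round-robin assignment, not the paper's algorithm; names like `divide_task` and the block/neighbor labels are invented): an event's multimedia processing is split into blocks and spread over capable neighbors, so no single node drains its battery doing the whole job.

```python
# Hedged sketch: split one event's processing into subtasks and assign them
# round-robin to the neighboring nodes able to perform the processing.
def divide_task(frame_blocks, neighbors):
    """Assign each block of work to a neighbor, round-robin."""
    assignment = {n: [] for n in neighbors}
    for i, block in enumerate(frame_blocks):
        node = neighbors[i % len(neighbors)]
        assignment[node].append(block)
    return assignment

# A frame split into 6 blocks, 3 capable neighbors:
blocks = ["b0", "b1", "b2", "b3", "b4", "b5"]
plan = divide_task(blocks, ["n1", "n2", "n3"])
print(plan)  # {'n1': ['b0', 'b3'], 'n2': ['b1', 'b4'], 'n3': ['b2', 'b5']}
```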
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9964834
Integrating Autonomous Vehicle Simulation Tools using SmartData
José Luís Conradi Hoffmann, Leonardo Passig Horstmann, A. A. Fröhlich
This work proposes a SmartData-based middleware to integrate autonomous systems simulators with external tools. The interface models the data used by a simulator and creates an intermediary layer between the simulator and external tools by defining the inputs and outputs as SmartData. A message bus is used for communication between SmartData following their interest relations. Messages are exchanged following a specific protocol, such as CAN, TSTP, or EtherCAT; however, the presented architecture is protocol-agnostic. The interface eases the integration of an autonomous system simulation with other simulators (e.g., network simulators), cloud services, fault injection mechanisms, Digital Twins, and hardware-in-the-loop scenarios. Moreover, it allows transparent, runtime component replacement and time synchronization, modularization of system components, and the addition of security aspects to the simulation. After presenting the proposed interface, we present a case-study application with an autonomous vehicle simulation using CARLA and measure the end-to-end delay and overhead incurred in the simulation.
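A minimal publish/subscribe bus in the spirit of the interest relations described above (the `Bus` class and the topic names are illustrative, not the paper's SmartData API): an external tool registers interest in a data unit, and samples the simulator publishes on that unit are delivered over the bus while uninterested traffic is ignored.

```python
# Illustrative message bus with interest-based delivery.
class Bus:
    def __init__(self):
        self.interests = {}  # data unit -> list of subscriber callbacks

    def subscribe(self, unit, callback):
        """Register an external tool's interest in a data unit."""
        self.interests.setdefault(unit, []).append(callback)

    def publish(self, unit, value):
        """Deliver a produced sample to every interested subscriber."""
        for cb in self.interests.get(unit, []):
            cb(value)

received = []
bus = Bus()
bus.subscribe("vehicle.speed", received.append)  # tool's declared interest
bus.publish("vehicle.speed", 13.9)               # simulator output: delivered
bus.publish("vehicle.rpm", 2100)                 # no subscriber: dropped
print(received)  # [13.9]
```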
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9964584
On the Effect of Heterogeneous Robot Fleets on Smart Warehouses' Order Time, Energy, and Operating Costs
George S. Oliveira, J. T. Carvalho, P. Plentz
Smart warehouses use robots for pick-up and delivery tasks, often within the Robot as a Service business model, where robot costs are calculated by the tasks performed or by a monthly lease. Warehouses therefore do not have to deal with the technical risks of maintaining an operational robot fleet. However, attention must be paid to the fleet's energy costs, order fulfillment, and synchronization with the warehouse's logistics sector. This work presents the effect of different robot types in fleets operating in a large simulated smart warehouse, using a previously designed state-of-the-art algorithm. Results show that investing in heterogeneous fleets does not, by itself, yield better performance than a particular robot type, and that the effects of the number of robots on order fulfillment time, energy consumption, and operating costs are directly related to the algorithm used.
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9964726
Feature Selection in Machine Learning for Knocking Noise Detection
Maria Eduarda Rosa da Silva, G. Gracioli, G. Araújo
The search for effective methods to accurately detect faults in cyber-physical systems grows constantly. Usually, the considerable amount of data generated by sensors is the source of any data-based analysis. In this context, applying Machine Learning algorithms to identify faults has gained popularity and acceptance due to their high performance and low cost compared to other techniques. To improve the performance of such anomaly detection algorithms and achieve greater accuracy in failure identification, strategies such as selecting the features that best describe the failure can be applied. For this, feature selection is performed to identify the significant features in a dataset. In this paper, we present a comparison of six feature selection algorithms used to select the best features for detecting the knocking noise fault in automotive combustion engines. By collecting and using data from an engine electronic control unit (ECU), we show that feature selection can reduce the number of features used by a failure classifier by 55% (from 9 to 5) while improving detection precision by 2%.
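A minimal filter-style selector (not one of the paper's six algorithms; function names and the toy data are invented) shows the mechanism: rank each feature by its absolute correlation with the label and keep the top k, the same shape of operation the authors use to shrink 9 features to 5.

```python
# Minimal univariate feature selection: score features by |Pearson r| with
# the label, keep the k best.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_k_best(rows, labels, k):
    n_feats = len(rows[0])
    scores = [abs(pearson([r[j] for r in rows], labels)) for j in range(n_feats)]
    ranked = sorted(range(n_feats), key=lambda j: scores[j], reverse=True)
    return sorted(ranked[:k])  # indices of the k most label-correlated features

# Toy data: feature 0 tracks the label, feature 2 is anti-correlated,
# feature 1 is noise.
X = [[1, 5, 9], [2, 3, 7], [3, 6, 5], [4, 2, 3]]
y = [1, 2, 3, 4]
print(select_k_best(X, y, 2))  # [0, 2]
```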
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9964592
An Adaptive TDMA Approach for Improving Reliability and Performance in WBAN under Heterogeneous Traffic and Interference
Jorge F. Herculano, Willians de P. Pereira, A. S. Sá
Medium Access Control (MAC) sublayer approaches based on Time Division Multiple Access (TDMA) have been proposed to improve the reliability and efficiency of Wireless Body Area Networks (WBANs). These approaches deal inadequately with device heterogeneity and the dynamics of message traffic in WBANs. We propose a TDMA-based MAC protocol with an adaptive policy: each device receives time slots for transmission according to its bandwidth and the interference on its communication channel. Simulation results show that our approach offers significant reliability and performance gains compared to the IEEE 802.15.4, IEEE 802.15.6, and DSBS (Dynamic Scheduling Based on Sleeping Slots) protocols. The protocol decreased message loss while maintaining low power consumption and low latency in the simulated experiment scenarios.
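One way to picture bandwidth-aware slot assignment (a generic proportional scheme, not the paper's adaptive policy; device names and demands are invented) is to divide a superframe's slots among devices in proportion to their relative bandwidth needs.

```python
# Hedged sketch: split a TDMA superframe's slots proportionally to each
# device's relative bandwidth demand.
def assign_slots(demands, total_slots):
    total = sum(demands.values())
    slots = {dev: (req * total_slots) // total for dev, req in demands.items()}
    # Hand slots lost to integer division to the most demanding devices.
    leftover = total_slots - sum(slots.values())
    for dev in sorted(demands, key=demands.get, reverse=True)[:leftover]:
        slots[dev] += 1
    return slots

demands = {"ecg": 6, "spo2": 2, "temp": 1}  # relative bandwidth needs
print(assign_slots(demands, 10))  # {'ecg': 7, 'spo2': 2, 'temp': 1}
```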
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9964520
Assessment and Optimization of 1D CNN Model for Human Activity Recognition
Rafael Schild Reusch, L. Juracy, Fernando Gehm Moraes
Artificial Intelligence (AI) solves complex tasks such as human activity and speech recognition. Accuracy-driven AI models have introduced new challenges related to their applicability in resource-scarce systems. In Human Activity Recognition (HAR), the state of the art proposes complex multi-layer LSTM networks, and the literature states that LSTM networks are suitable for handling time-series data, a key feature for HAR. Most works in the literature seek the best possible accuracy, with few evaluating the overall computational cost of running the inference phase. In HAR, low-power IoT devices such as wearable sensors are widely used for data gathering, but little effort is made to deploy AI technology on these devices. Most studies suggest approaches using edge devices or cloud computing architectures, where the end device's task is to gather data and send it to the edge/cloud; most voice assistants, such as Amazon's Alexa and Google Assistant, use this architecture. In real-life applications, mainly in the healthcare industry, relying only on edge/cloud devices is not acceptable, since these devices are not always available or reachable. The objective of this work is to evaluate the accuracy of convolutional networks with a simpler architecture, using 1D convolution, for HAR. The motivation for using simpler network architectures is the possibility of embedding them in power- and memory-constrained devices.
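The core operation of such a network, 1D convolution over a sensor window, is cheap enough to state in a few lines (a pure-Python sketch with an illustrative kernel, not the paper's model): a small kernel slides along the signal, and an edge-detecting kernel lights up where the signal changes, e.g. at a posture change in accelerometer data.

```python
# Minimal 1D convolution over a time-series window (valid padding).
def conv1d(signal, kernel, stride=1):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

# An edge-detecting kernel on a step-shaped signal: the output peaks
# exactly where the step occurs.
signal = [0, 0, 0, 1, 1, 1]
print(conv1d(signal, [-1, 1]))  # [0, 0, 1, 0, 0]
```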
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9965178
Comparison of Different Adaptable Cache Bypassing Approaches
Mariana Carmin, L. A. Ensina, M. Alves
Most modern microprocessors have a deep cache hierarchy to hide the latency of accessing main memory. As the number of cores increases, the shared Last-Level Cache (LLC) also grows, consuming a large portion of the chip's total power and area. For applications with poor temporal and spatial locality, the same cache hierarchy can represent an extra latency barrier. Therefore, sophisticated solutions should ensure optimal resource utilization to mitigate cache problems. In this scenario, an adaptive cache mechanism can benefit such applications, improving overall system performance and decreasing energy consumption. When multiple programs are running, adapting each application's use of the LLC avoids cache conflicts and cache pollution, increasing system performance. In this paper, we assess two approaches, based on regression and classification models, to adapt the use of the LLC at run time, both using hardware counters. Analyzing the efficiency and overhead of each model on SPEC CPU 2006 and 2017, we observe better performance from the classification model based on the Random Forest algorithm for both single- and multi-program workloads.
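The classification idea can be sketched with a toy stand-in for the Random Forest (the counter names, thresholds, and stump rules below are all invented for illustration): per-application hardware-counter features feed a small ensemble whose majority vote decides whether the application should bypass the LLC or use it.

```python
# Hedged sketch: decision-stump ensemble voting "bypass" vs. "cache" from
# hardware-counter features. A real Random Forest learns these rules; the
# thresholds here are illustrative.
STUMPS = [
    lambda f: f["llc_miss_ratio"] > 0.8,    # mostly misses -> bypass
    lambda f: f["reuse_distance"] > 10_000, # data not reused soon -> bypass
    lambda f: f["mpki"] > 30,               # high misses per kilo-instruction
]

def should_bypass(features):
    votes = sum(stump(features) for stump in STUMPS)
    return votes >= 2  # majority vote of the ensemble

# A streaming workload pollutes the LLC; a cache-friendly one benefits from it.
streaming = {"llc_miss_ratio": 0.95, "reuse_distance": 50_000, "mpki": 45}
cache_friendly = {"llc_miss_ratio": 0.10, "reuse_distance": 200, "mpki": 2}
print(should_bypass(streaming), should_bypass(cache_friendly))  # True False
```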
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9964915
Designing a Multiple-User Wearable Edge AI System towards Human Activity Recognition
M. C. Silva, A. G. Bianchi, R. A. R. Oliveira, S. Ribeiro
Human Activity Recognition (HAR) using artificial intelligence has a broad range of applications, reaching disciplines and areas such as home activity monitoring, sports, traffic, and healthcare. Using Edge Computing to enhance HAR is a recent but promising research front. In this work, we propose an architecture for an Edge AI system based on wearable devices and validate aspects such as the algorithm and its functioning on an edge computing system. The developed system is capable of recognizing 18 different activities with 94% global average precision. Furthermore, it is suitable for use in both mobile edge computing and cloudlet settings.
Pub Date: 2022-11-21. DOI: 10.1109/SBESC56799.2022.9965153
Distributed Learning using Consensus on Edge AI
Samuel Amico Fidelis, Márcio Castro, Frank Siqueira
Moving machine learning services such as inference and training from the cloud layer to the edge layer is a complex task, but one that is necessary to guarantee the quality of service of many Internet of Things (IoT) applications. However, running machine learning models on the lighter (limited) hardware used in edge computing is an obstacle to applying powerful models with better accuracy. In this context, distributed machine learning techniques aim to mitigate such limitations, with federated learning, model compression, and model ensembles among the existing alternatives. The present work proposes a new distributed machine learning technique focused on inference, which improves the accuracy of the models' final response, within the limitations of hardware commonly used in edge computing, through a consensus algorithm.
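The consensus step can be illustrated in miniature (a plain majority vote, offered as a generic stand-in rather than the paper's algorithm; the node predictions are invented): each constrained edge model classifies the same sample, and the fleet's answer is the label most nodes agree on, which can beat any single small model.

```python
from collections import Counter

# Hedged sketch: majority-vote consensus over per-node predictions.
def consensus(predictions):
    """Return the label predicted by the majority of edge nodes."""
    label, _count = Counter(predictions).most_common(1)[0]
    return label

# Three edge nodes classify the same sample; one disagrees, the
# consensus still recovers the majority answer.
votes = ["cat", "cat", "dog"]
print(consensus(votes))  # cat
```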