Exploring alternatives of Complex Event Processing execution engines in demanding cases
Styliani Kyrama, A. Gounaris
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577734

Complex Event Processing (CEP) is a mature technology providing particularly efficient solutions for pattern detection in streaming settings. Nevertheless, even the most advanced CEP engines struggle in cases where the number of pattern matches grows exponentially, e.g., when queries involve Kleene operators to detect trends. In this work, we present an overview of state-of-the-art CEP engines used for pattern detection, focusing also on systems that discover demanding event trends. The main contribution lies in the comparison of existing CEP engine alternatives and in the proposal of a novel hash-endowed, automata-based, lazy hybrid execution engine, called SASEXT, that undertakes the processing of pattern queries involving Kleene patterns. Our proposal is orders of magnitude faster than existing solutions.
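The exponential blow-up the abstract refers to can be made concrete with a toy sketch. Under a simplified skip-till-any-match semantics (an assumption for illustration, not SASEXT's actual algorithm), a Kleene+ pattern matches every non-empty subsequence of qualifying events, so n qualifying events yield 2^n - 1 matches:

```python
from itertools import combinations

def kleene_matches(stream, predicate):
    """Enumerate all non-empty subsequences (by index) whose events all
    satisfy `predicate` -- the match set of a Kleene+ pattern under a
    simplified skip-till-any-match semantics."""
    idx = [i for i, e in enumerate(stream) if predicate(e)]
    out = []
    for r in range(1, len(idx) + 1):
        out.extend(combinations(idx, r))
    return out

# With n qualifying events there are 2**n - 1 matches -- the growth
# that trend-detecting CEP engines must tame.
```

Enumerating matches over even a short stream shows the doubling per event, which is why lazy evaluation and match sharing matter at scale.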
DEDACS: Decentralized and dynamic access control for smart contracts in a policy-based manner
Kristof Jannes, Vincent Reniers, Wouter Lenaerts, B. Lagaisse, W. Joosen
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577676

Distributed Ledger Technologies (DLTs), or blockchains, have been steadily emerging and driving innovation over the past decade for use cases ranging from financial networks to notarization and trustworthy execution via smart contracts. DLTs are enticing due to their properties of decentralization, non-repudiation, and auditability (transparency). These properties hold high potential for access control systems, which can be implemented on-chain and executed with full transparency and without infringement. While it remains uncertain which use cases will ultimately prove viable, many, such as financial transactions, can benefit from integrating restrictions via access control on the blockchain. In addition, smart contracts may in the future present security risks that are currently unknown. As a solution, access control policies can provide flexibility in the execution flow when adopted by smart contracts. In this paper, we present our DEDACS architecture, which provides decentralized and dynamic access control for smart contracts in a policy-based manner. Our access control is expressive, as it features policies, and dynamic, as the environment or users can be changed, or alternative policies can be assigned to smart contracts. DEDACS preserves the desired properties of decentralization and transparency while aiming to keep the costs involved as low as possible. We evaluated DEDACS in the context of a Uniswap token-exchange platform, measuring (i) the overhead introduced at deployment time and (ii) the operational overhead. DEDACS introduces a relative overhead of 52% on average at deployment time, and an operational overhead between 52% and 80% depending on the chosen policy and its complexity.
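The overhead figures reported above are plain relative costs. A minimal helper shows the computation; the gas numbers below are hypothetical, chosen only to reproduce the ~52% average, and are not taken from the paper:

```python
def relative_overhead(baseline_cost, augmented_cost):
    """Relative overhead of a policy-guarded contract vs. the unmodified
    baseline, as a fraction of the baseline cost."""
    return (augmented_cost - baseline_cost) / baseline_cost

# Hypothetical example: an operation costing 100_000 gas without access
# control and 152_000 gas with it corresponds to a 52% overhead.
```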
Generic Privacy Preserving Private Permissioned Blockchains
Frederic A. Hayek, Mirko Koscina, P. Lafourcade, Charles Olivier-Anclin
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577735

Private permissioned blockchains are becoming gradually more sought-after. Such systems are reachable only by authorized users, yet tend to be completely transparent to whoever interacts with the blockchain. In this paper, we mitigate the latter. Authorized users can now stay unlinked to the transactions they propose in the blockchain while being authenticated before being allowed to interact. As a first contribution, we developed a consensus algorithm for private permissioned blockchains based on Hyperledger Fabric and the Practical Byzantine Fault Tolerance consensus. Building on this blockchain, five additional variations achieving various client-wise privacy-preserving levels are proposed. These different protocols allow for different use cases and levels of privacy control, and sometimes its revocation by an authority. All our protocols guarantee the unlinkability of transactions to their issuers, achieving anonymity or pseudonymity. Miners can also inherit some of the above privacy-preserving settings. Naturally, we maintain liveness and safety of the system and its data.
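The unlinkability property described above can be illustrated with a generic per-transaction pseudonym construction. This is emphatically not the paper's protocol, just a common hash-based sketch: without the secret key material, two pseudonyms from the same user cannot be linked, while an authority holding that material could recompute them and revoke anonymity, as the paper's revocation variants discuss.

```python
import hashlib
import os

def transaction_pseudonym(secret_key, nonce=None):
    """Derive a one-time pseudonym for a transaction (generic sketch,
    not the paper's construction). A fresh nonce makes pseudonyms of
    the same user unlinkable to outside observers; knowledge of
    secret_key + nonce allows recomputation, i.e. revocation."""
    if nonce is None:
        nonce = os.urandom(16)  # fresh randomness per transaction
    digest = hashlib.sha256(secret_key + nonce).hexdigest()
    return nonce, digest
```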
Real-life Performance of Fairness Interventions - Introducing A New Benchmarking Dataset for Fair ML
Daphne Lenders, T. Calders
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577634

Some researchers evaluate their fair Machine Learning (ML) algorithms by simulating data with a fair and a biased version of its labels. The fair labels reflect what labels individuals deserve, while the biased labels reflect labels obtained through a biased decision process. Given such data, fair algorithms are evaluated by measuring how well they can predict the fair labels after being trained on the biased ones. The main problem with these approaches is that they are based on simulated data, which is unlikely to capture the full complexity and noise of real-life decision problems. In this paper, we show how we created a new, more realistic dataset with both fair and biased labels. For this purpose, we started with an existing dataset containing information about high school students and whether they passed an exam or not. Through a human experiment, in which participants estimated school performance given descriptions of these students, we collected a biased version of these labels. We show how this new dataset can be used to evaluate fair ML algorithms, and how some fairness interventions that perform well under traditional evaluation schemes do not necessarily perform well with respect to the unbiased labels in our dataset, leading to new insights into the performance of debiasing techniques.
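The evaluation scheme described, training on biased labels and scoring against fair ones, boils down to comparing one set of predictions against two label sets. A toy illustration (hypothetical labels, not the paper's dataset):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the given label set."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# Toy illustration: a model that reproduces the biased labels perfectly
# can still score poorly against the fair labels -- the gap this kind
# of benchmark is designed to expose.
biased = [1, 0, 0, 0]
fair   = [1, 1, 1, 0]
preds  = list(biased)  # hypothetical model that learned the bias
```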
A Synthetic Dataset Generation for the Uveitis Pathology Based on MedWGAN Model
Heithem Sliman, I. Megdiche, Sami Yangui, Aida Drira, Ines Drira, E. Lamine
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577648

Artificial Intelligence (AI) has undergone considerable development in recent years in the field of medicine, in particular in diagnostic decision support. However, the development of such algorithms depends on the presence of a sufficiently large amount of data to provide reliable results. Unfortunately, in medicine it is not always possible to provide so much data on all pathologies. This problem is particularly acute for rare diseases. In this paper we focus on uveitis, a rare ophthalmological disease that is the third leading cause of blindness worldwide. This pathology is difficult to diagnose because of the disparity in prevalence of its etiologies. In order to provide physicians with a diagnostic aid system, a representative dataset reflecting the epidemiological profiles that have long been studied in this domain is needed. This work proposes a methodological framework for the generation of an open-source dataset based on the crossing of several epidemiological profiles, using data augmentation techniques. The generated synthetic data have been qualitatively validated by specialist physicians in ophthalmology. Our results are very promising and constitute a first building block to promote AI research on the uveitis disease.
Unsupervised Forecasting and Anomaly Detection of ADLs in single-resident elderly smart homes
Zahraa Khais Shahid, S. Saguna, C. Åhlund
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577822

As the aging population increases, predictive health applications for the elderly can provide opportunities for more independent living, increase cost efficiency, and improve the quality of health services for senior citizens. Human activity recognition within IoT-based smart homes can enable detection of early health risks related to mild cognitive impairment by providing proactive measurements and interventions to both the elderly and supporting healthcare givers. In this paper, we develop and evaluate a method to forecast activities of daily living (ADL) and detect anomalous behaviour using motion sensor data from smart homes. We build a predictive multivariate long short-term memory (LSTM) model for forecasting activities and evaluate it using data from six real-world smart homes. Further, we use the Mahalanobis distance between predictions and actual values to identify anomalies in user behavior. Across all datasets used for forecasting both duration of stay and level of activity from activeness/stillness duration features, the maximum NMAE was about 6%; the results also show that the LSTM performs comparably when predicting the immediate next activity and the seven following activities.
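The anomaly criterion above compares predicted and actual values via the Mahalanobis distance. A univariate special case (where the Mahalanobis distance reduces to a z-score) gives the flavor; the paper's setting is multivariate and the threshold of 3 below is a conventional assumption, not a value from the paper:

```python
import statistics

def mahalanobis_1d(value, history):
    """One-dimensional Mahalanobis distance of `value` from `history`,
    i.e. the absolute z-score -- a simplified, univariate version of
    the multivariate criterion used in the paper."""
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return abs(value - mu) / sigma

def is_anomalous(value, history, threshold=3.0):
    """Flag a residual as anomalous when it lies more than `threshold`
    standard deviations from the historical mean."""
    return mahalanobis_1d(value, history) > threshold
```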
POI types characterization based on geographic feature embeddings
Salatiel Dantas Silva, C. E. Campelo, Maxwell Guimarães De Oliveira
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577659

Representing Points of Interest (POI) types, such as restaurants and shopping malls, is crucial to develop computational mechanisms that may assist in tasks such as urban planning and POI recommendation. POI co-occurrences in different spatial regions have been used to represent POI types as high-dimensional vectors. However, such representations do not consider the geographic features (e.g. streets, buildings, rivers, parks) in the vicinity of POIs, which may help characterize such types. In this context, we propose Geographic Context to Vector (GeoContext2Vec), an approach that relies on geographic features in the POIs' vicinity to generate embedding-based POI type representations. We carried out an experiment to evaluate GeoContext2Vec against a state-of-the-art POI type representation that does not consider geographic features. The promising results show that the geographic information provided by GeoContext2Vec outperforms the state of the art and demonstrates the relevance of surrounding geographic features for representing POI types more precisely.
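The co-occurrence-based baseline that GeoContext2Vec is compared against can be sketched as counting, per POI type, how often each other type appears in the same region. This is an illustrative reconstruction of the classic representation, not the authors' exact pipeline:

```python
from collections import Counter

def cooccurrence_vectors(regions):
    """Build one co-occurrence vector per POI type from region-wise
    POI-type lists: entry j of type t's vector counts the regions in
    which t appears together with type j (illustrative sketch)."""
    types = sorted({t for region in regions for t in region})
    counts = {t: Counter() for t in types}
    for region in regions:
        present = set(region)
        for t in present:
            for other in present - {t}:
                counts[t][other] += 1
    return {t: [counts[t][o] for o in types] for t in types}
```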
Parallel construction of RNA databases for extensive lncRNA-RNA interaction prediction
Iñaki Amatria-Barral, J. González-Domínguez, J. Touriño
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577772

Long non-coding RNA sequences (lncRNAs) have completely changed how scientists approach genetics. While some believe that many lncRNAs are the result of spurious transcription, recent evidence suggests that thousands of them exist, have functions, and regulate key biological processes. For the experimental characterization of lncRNAs, many tools that try to predict their interactions with other RNAs have been developed. Some of the fastest and most accurate tools, however, require a slow database construction step prior to the identification of interaction partners for each lncRNA. This paper presents a novel and efficient parallel database construction procedure. Benchmarking results on a 16-node multicore cluster show that our parallel algorithm can build databases up to 318 times faster than other tools using just 256 CPU cores. All the code developed in this work is available to download at GitHub under the MIT License (https://github.com/UDC-GAC/pRIblast).
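The core idea of parallel database construction, partitioning the input sequences and indexing each partition concurrently, can be sketched on a single node as follows. This is a thread-based illustration with a hypothetical `build_index` callback; pRIblast itself targets multicore clusters and its actual indexing and partitioning strategy is more involved:

```python
from concurrent.futures import ThreadPoolExecutor

def build_database_parallel(sequences, build_index, workers=4):
    """Split `sequences` into `workers` chunks, index each chunk
    concurrently, and merge the partial indexes. `build_index` is a
    hypothetical callback mapping a chunk to a dict-like index."""
    chunks = [sequences[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(build_index, chunks)
    merged = {}
    for part in partials:
        merged.update(part)
    return merged
```

The round-robin slicing keeps chunk sizes balanced even when the input is sorted by length, a common concern when sequence lengths are skewed.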
Security Verification Software Platform of Data-efficient Image Transformer Based on Fast Gradient Sign Method
In-pyo Hong, Gyu-ho Choi, Pan-koo Kim, Chang Choi
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577731

Recently, research using knowledge distillation in artificial intelligence (AI) has been actively conducted. In particular, the data-efficient image transformer (DeiT) is a representative transformer model using knowledge distillation for image classification. However, DeiT's safety against patch-level adversarial attacks had not been verified, and existing DeiT research did not establish its security robustness against adversarial attacks. To expose this vulnerability, we conducted an attack using the fast gradient sign method (FGSM) targeting the knowledge-distillation-based DeiT model. In our experiments, DeiT achieved an accuracy of 93.99% on normal data (CIFAR-10). In contrast, when verified on abnormal FGSM data (adversarial examples), the accuracy dropped by 83.49 percentage points, to 10.50%. By analyzing the vulnerability pattern of these adversarial attacks, we confirmed that FGSM achieves a successful attack by exploiting DeiT's weights, and we verified that DeiT has security limitations for practical application.
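FGSM itself is a one-line perturbation: x_adv = x + ε · sign(∇_x L). A framework-free sketch on a toy linear classifier (a stand-in for DeiT's backpropagated input gradient, not the paper's setup):

```python
def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: shift every input coordinate by
    epsilon in the direction that increases the loss."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

def input_gradient(w, y):
    """Gradient of the toy margin loss L = -y * (w . x) with respect
    to x, namely -y * w (stands in for a network's input gradient)."""
    return [-y * wi for wi in w]
```

Because only the sign of the gradient is used, the perturbation has fixed per-pixel magnitude ε, which is what makes FGSM both cheap and hard to see.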
Software Aging in a Real-Time Object Detection System on an Edge Server
Kengo Watanabe, F. Machida, E. Andrade, R. Pietrantuono, Domenico Cotroneo
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577717

Real-time object detection systems are rapidly being adopted in edge computing systems for IoT applications. Since the computational resources on edge devices are often limited, continuous real-time object detection may suffer from degradation of performance and reliability due to software aging. To provide reliable IoT applications, it is crucial to understand how software aging can manifest in object detection systems in resource-constrained environments. In this paper, we investigate the software aging issue in a real-time object detection system using YOLOv5 running on a Raspberry Pi-based edge server. By performing statistical analysis on the measurement data, we detected a suspicious software-aging trend in memory usage induced by real-time object detection workloads. We also observe that the system monitoring process halts due to a shortage of free storage space caused by YOLOv5's resource dissipation. In our system, the monitoring process fails after 24.11, 44.56, and 115.36 hours on average when the input image size is set to 160px, 320px, and 640px, respectively. Our experimental results can be used to plan countermeasures such as software rejuvenation and task offloading.
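Trend detection of the kind used to flag aging in resource metrics is often done with the Mann-Kendall statistic, a common choice in software-aging studies; the paper does not name its exact test, so the sketch below is an assumption about methodology, not a reproduction of it:

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: concordant minus discordant pairs.
    A large positive S over, e.g., sampled memory usage indicates a
    monotonically increasing trend, a classic aging symptom."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s
```

In practice S is normalized and compared against a significance threshold; here the raw statistic suffices to show the idea.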