Title: Interplanetary File System in Logistic Networks: a Review
Authors: Vittorio Capocasale, S. Musso, G. Perboli
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00268
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: Logistics 4.0 is a revolution based on information sharing and digitalization. It therefore generates huge amounts of data in short periods, and the resulting data-bloating problem must be addressed. One possible solution is the InterPlanetary File System (IPFS), which guarantees data replication and availability while limiting the storage of overlapping data. This study is the first literature review on IPFS focused on its application to the logistics sector. Its main findings are: the topic is gaining interest, but the solutions proposed in the literature are still in their early stages; IPFS is always coupled with blockchain technology, and all of the authors used similar strategies to integrate the two; and the authors identified many advantages of IPFS but did not analyze the related disadvantages in depth.
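The deduplication property the abstract attributes to IPFS comes from content addressing: a block is named by the hash of its bytes, so identical blocks are stored once. A minimal sketch of that idea in plain Python (not the actual IPFS chunking or CID format):

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: identical chunks share one entry."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # content identifier = hash of bytes
        self.blocks[cid] = data                 # re-storing identical data is a no-op
        return cid

    def get(self, cid: str) -> bytes:
        return self.blocks[cid]

store = ContentStore()
a = store.put(b"shipment #1 manifest")
b = store.put(b"shipment #1 manifest")   # duplicate data from another party
assert a == b and len(store.blocks) == 1  # overlapping data stored only once
```

Two logistics partners uploading the same document thus consume storage for a single copy, which is the property the reviewed papers exploit when pairing IPFS with a blockchain that stores only the hash.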
Title: Dynamic Cross-sectional Regime Identification for Financial Market Prediction
Authors: Rongbo Chen, Kunpeng Xun, Jean-Marc Patenaude, Shengrui Wang
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00049
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: We investigate dynamic cross-sectional regime identification for financial market prediction. A financial market can be viewed as an ecosystem regulated by regimes that may switch at different time points. In most existing regime-based prediction models, regimes can only switch, according to a static transition probability matrix, among a fixed set of regimes identified on training data, because these models lack a mechanism for identifying new regimes on test data. This limits their effectiveness, since financial markets are time-evolving and may fall into a new regime at any future time. Moreover, most of them handle only a single time series and cannot deal with multiple time series. These shortcomings prompted us to devise a dynamic cross-sectional regime identification model for time series prediction. The new model is defined on a multi-time-series system with time-varying transition probabilities and can identify new cross-sectional regimes dynamically from the time-evolving financial market. Experimental results on real-world financial datasets illustrate the promising performance and suitability of our model.
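The contrast the abstract draws is between a static transition matrix and time-varying transition probabilities. The sketch below illustrates only the general shape of the latter; the paper's actual parameterization is not given in the abstract, so the drifting-logit form here is an assumption for illustration:

```python
import math

def transition_matrix(t, n_regimes=3):
    """Toy time-varying regime transition matrix: logits drift with time t,
    and each row is renormalized so it remains a probability distribution."""
    rows = []
    for i in range(n_regimes):
        logits = [math.sin(0.1 * t + i - j) for j in range(n_regimes)]
        exps = [math.exp(v) for v in logits]
        z = sum(exps)
        rows.append([v / z for v in exps])
    return rows

# unlike a static matrix, the switching probabilities differ across time steps
P5, P50 = transition_matrix(5), transition_matrix(50)
assert P5 != P50
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P5)  # rows stay stochastic
```

A static model would return the same matrix for every `t`; making it a function of time is what lets regime-switching behavior itself evolve.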
Title: An Extention of Lazy Abstraction and Refinement for Program Verification
Authors: Haowei Liang, Chunyan Hou, Jinsong Wang, Chen Chen
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00278
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: Predicate abstraction has been shown to be a powerful technique for verifying imperative programs, as it copes well with the state-space explosion problem. Among such techniques, lazy abstraction with interpolation-based refinement, also called the IMPACT approach, has gained increasing popularity in recent years. However, despite its high efficiency, IMPACT fails on certain kinds of programs because the interpolants produced by the interpolant solver make the verification diverge. Based on the features of some of these programs, we extend the IMPACT method to make it applicable to them. In addition to its basic operations, two further operations are introduced into the IMPACT refinement to guide it toward reasonable interpolants that help the verification process converge. Experiments on the SV-COMP 2020 benchmark show the potential of the extended approach.
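Abstraction-refinement verification of the kind the abstract builds on alternates abstract reachability with refinement whenever an abstract counterexample turns out to be spurious. The toy loop below mimics that shape with a deliberately crude abstraction (value mod k) and refinement (increase k) standing in for the predicate sets that IMPACT derives from Craig interpolants:

```python
# CEGAR-style loop sketch. Program under "verification": x starts at `start`
# and repeatedly does x := x + step; property: x never equals `bad`.
def cegar(start=0, bad=7, step=2, max_k=10, bound=50):
    for k in range(1, max_k + 1):
        # abstract reachability under the "value mod k" abstraction
        seen = {start % k}
        frontier = [start % k]
        while frontier:
            a = frontier.pop()
            b = (a + step) % k
            if b not in seen:
                seen.add(b)
                frontier.append(b)
        if bad % k not in seen:
            return ("safe", k)        # abstraction proves the bad state unreachable
        # abstract counterexample found: replay it concretely up to a bound
        x = start
        for _ in range(bound):
            if x == bad:
                return ("unsafe", k)  # genuine counterexample
            x = x + step
        # counterexample was spurious -> refine the abstraction and retry
    return ("unknown", max_k)

# x goes 0, 2, 4, ... so it can never equal 7; k=1 is too coarse, k=2 suffices
assert cegar() == ("safe", 2)
```

Divergence in real interpolation-based tools corresponds to refinements that never stabilize; the paper's two extra refinement operations aim at steering the solver toward interpolants that do.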
Title: An architectural framework for enabling secure decentralized P2P messaging using DIDComm and Bluetooth Low Energy
Authors: Alexander Heireth Enge, Abylay Satybaldy, M. Nowostawski
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00251
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: Self-sovereign identity (SSI) is an emerging concept that shifts control of identity to the person or entity to whom it belongs, without the need to rely on any centralized administrative authority. Within the SSI model, a digital identity wallet enables a user to establish relationships and interact with third parties in a secure and trusted manner. However, operations such as messaging and credential exchange usually require internet access. In some situations this is not possible, and entities should be able to communicate in an offline setting, independently of any external infrastructure. The objective of this paper is to design a proof of concept that allows secure, trustworthy, and privacy-preserving decentralized peer-to-peer communication without any external networking infrastructure. To this end, we investigate a particular case involving DIDComm and Bluetooth Low Energy. We identify requirements for the architecture and propose an architectural framework that allows two entities to communicate securely. To show the concept's feasibility, we evaluate the existing technologies that could be used in the proposed architecture. Our findings indicate that this approach can enable a wide range of interesting use cases and can be integrated into existing digital identity wallet solutions, extending offline messaging capabilities in a secure and decentralized manner beyond current models that often rely on internet connectivity.
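Offline DIDComm-style messaging ultimately reduces to exchanging authenticated envelopes over whatever transport is available (here, BLE). As a stand-in for the real DIDComm encryption layer, the sketch below authenticates a JSON envelope with an HMAC over a pre-shared key; the key, field names, and message type are all hypothetical, chosen only to illustrate the envelope idea:

```python
import hashlib
import hmac
import json

KEY = b"pre-shared-demo-key"  # stand-in for keys agreed via the peers' DID documents

def seal(payload: dict) -> dict:
    """Wrap a payload in an envelope carrying an integrity tag."""
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def open_envelope(env: dict) -> dict:
    """Verify the tag before trusting anything in the body."""
    expected = hmac.new(KEY, env["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, env["tag"]):
        raise ValueError("authentication failed")
    return json.loads(env["body"])

env = seal({"type": "ping", "from": "did:example:alice"})
assert open_envelope(env)["type"] == "ping"
```

Real DIDComm v2 additionally encrypts the body and derives keys from the DIDs themselves; the point here is only that nothing in the envelope's verification requires internet connectivity, which is what makes the offline BLE transport viable.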
Title: Improving the Derivation of Sound Security Metrics
Authors: George Yee
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00287
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: We continue to tackle the problem of poorly defined security metrics by building on and improving our previous work on designing sound security metrics. We reformulate the previous method into a set of conditions that are clearer and more widely applicable for deriving sound security metrics. We also modify and enhance some concepts that led to an unforeseen weakness in the previous method, subsequently found by users, thereby eliminating this weakness from the conditions. We present examples showing how the conditions can be used to obtain sound security metrics. To demonstrate the conditions' versatility, we apply them to show that an aggregate security metric made up of sound security metrics is also sound. This is useful where an aggregate measure may be preferred in order to more easily understand the security of a system.
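The aggregation claim has a simple concrete analogue: if each sub-metric is a number in [0, 1], a convex (weighted) combination stays in [0, 1]. This is only a toy stand-in, since the paper's actual soundness conditions are not reproduced in the abstract:

```python
def aggregate(metrics, weights):
    """Convex combination of sub-metrics in [0, 1]; weights must sum to 1.
    If each input is in range, the aggregate provably is too."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must form a convex combination"
    assert all(0.0 <= m <= 1.0 for m in metrics), "sub-metrics must be in [0, 1]"
    return sum(m * w for m, w in zip(metrics, weights))

# e.g. patch coverage 0.9, password-policy compliance 0.4, audit score 0.7
score = aggregate([0.9, 0.4, 0.7], [0.5, 0.3, 0.2])
assert 0.0 <= score <= 1.0
```

The metric names above are invented examples; the code only demonstrates the closure property (a well-formed combination of in-range metrics is in-range), which is the flavor of argument the paper applies to soundness.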
Title: EtherGIS: A Vulnerability Detection Framework for Ethereum Smart Contracts Based on Graph Learning Features
Authors: Qingren Zeng, Jiahao He, Gansen Zhao, Shuangyin Li, Jingji Yang, Hua Tang, Haoyu Luo
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00277
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: The financial nature of Ethereum means that smart contract attacks frequently cause tremendous economic loss, so methods for effectively detecting vulnerabilities in contracts are imperative. Existing approaches to contract security analysis rely heavily on rigid rules defined by experts, which is labor-intensive and non-scalable, and little work has considered combining expert-defined security patterns with deep learning. This paper proposes EtherGIS, a vulnerability detection framework that uses graph neural networks (GNNs) and expert knowledge to extract graph features from smart contract control flow graphs (CFGs). To obtain multi-dimensional contract information and reinforce attention to vulnerability-related graph features, sensitive EVM instruction corpora are constructed by analyzing the EVM's underlying logic and diverse vulnerability-triggering mechanisms. The characteristics of the nodes and edges in a CFG are first determined according to the corpora, generating the corresponding attribute graph. A GNN then aggregates the attribute and structure information of the whole graph, bridging the semantic gap between low-level graph features and high-level contract features. The resulting graph representation is finally fed into a graph classification model for vulnerability detection. Furthermore, automated machine learning (AutoML) is adopted to automate the entire deep learning pipeline. Data was collected from Ethereum to build a dataset covering six vulnerabilities for evaluation. Experimental results demonstrate that EtherGIS detects vulnerabilities in Ethereum smart contracts effectively, outperforming existing work in accuracy, precision, recall, and F1-score.
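The pipeline step the abstract describes, aggregating node attributes over a CFG and pooling them into one graph-level vector for classification, can be sketched with plain lists. The CFG, the 2-d node features, and the "self plus mean of successors" update rule below are all made-up illustrations, not the paper's architecture:

```python
# Toy CFG: node -> list of successor basic blocks
cfg_edges = {0: [1, 2], 1: [3], 2: [3], 3: []}
# Toy 2-d node features (a real system would derive these from EVM instruction corpora)
feat = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.5, 0.5]}

def aggregate(edges, feat):
    """One round of message passing: each node adds the mean of its successors."""
    new = {}
    for n, f in feat.items():
        msgs = [feat[m] for m in edges[n]] or [[0.0, 0.0]]
        mean = [sum(col) / len(msgs) for col in zip(*msgs)]
        new[n] = [a + b for a, b in zip(f, mean)]
    return new

def pool(feat):
    """Graph-level readout: mean over all node vectors."""
    vecs = list(feat.values())
    return [sum(col) / len(vecs) for col in zip(*vecs)]

graph_vec = pool(aggregate(cfg_edges, feat))  # fixed-size input for a classifier
assert len(graph_vec) == 2
```

The pooled vector is what a downstream classifier (in the paper, one selected via AutoML) consumes, regardless of how many basic blocks the contract's CFG contains.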
Title: Detection of floating objects in liquids
Authors: Anna Sabatini, E. Nicolai, L. Vollero
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00265
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: The identification of floating particles in liquids, in order to characterize their purity and quality, is a topic of growing interest given the increasing attention paid to product quality control and the rising tide of pollution in primary goods such as drinking water. The spread of microplastics in water and food is one of today's main concerns, chiefly because of its effects on the health of the people who consume these goods. Monitoring large volumes of water is a key challenge driving the development of non-invasive, non-destructive, high-precision techniques. Among the most interesting methods, optical systems are a solution of great interest given their negligible, if any, impact on the monitored products and their ability to continuously analyze the compound of interest. A high-quality optical recording system must then be complemented with a highly reliable and fast detection system so that large volumes can be monitored in a relatively short time. In this scenario, the paper makes three main contributions: (i) it defines and models a detection system with controllable reliability, (ii) it presents an online detection algorithm, and (iii) it tests the suitability of the proposed system for integration into existing monitoring devices.
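At its simplest, online detection of the kind described means scanning a stream of samples and reporting each maximal above-threshold run as one candidate particle. The sketch below is a hypothetical 1-D simplification; the paper's system works on optical recordings with controllable reliability, which this does not model:

```python
def detect_runs(stream, threshold):
    """Return (start_index, length) of each maximal run of samples above threshold."""
    events, start = [], None
    for i, v in enumerate(stream):
        if v > threshold and start is None:
            start = i                                # run begins
        elif v <= threshold and start is not None:
            events.append((start, i - start))        # run ends
            start = None
    if start is not None:                            # run still open at end of stream
        events.append((start, len(stream) - start))
    return events

# two bright blobs pass the sensor: one two samples wide, one one sample wide
signal = [0.1, 0.2, 0.9, 0.8, 0.1, 0.7, 0.1]
assert detect_runs(signal, 0.5) == [(2, 2), (5, 1)]
```

Because the scan is single-pass and keeps constant state, it runs online at acquisition speed, which is the property needed to monitor large volumes quickly.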
Title: MRehab: Mutlimodal data acquisition and modeling framework for assessing stroke and cardiac rehabilitation exercises
Authors: Md Abdullah Khan, H. Shahriar
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00086
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: Post-stroke rehabilitation in in-home settings is always stressful due to the unaccustomed environment, irregular sleep, and the demands of rehabilitation exercises. The intensity and difficulty of the exercises are inherently complex problems for patients to manage daily, yet physical rehabilitation is essential for all stroke patients to recover. An automated in-home rehabilitation system with feedback support for both patient and therapist could therefore assist post-stroke patients in managing and assessing exercise daily to recover faster. This work proposes a data acquisition and analysis framework named “MRehab” that helps collect multimodal sensor signals while patients perform both voluntary and non-voluntary (prescribed) exercises. MRehab assesses the exercise and physiological states of the patients through signal processing and multiple machine learning models. The framework monitors repetition, patient fatigue, and exercise quality, and recommends exercise frequency and intensity.
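One of the monitored quantities, repetition count, can be illustrated with simple peak counting over a motion-sensor trace. The trace and threshold below are synthetic, and the paper's actual signal-processing pipeline is certainly more elaborate:

```python
def count_repetitions(signal, threshold):
    """Count local maxima above a threshold -- a toy exercise-repetition counter."""
    reps = 0
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        if cur > threshold and cur > prev and cur >= nxt:
            reps += 1
    return reps

# synthetic accelerometer-like trace with three exercise repetitions
trace = [0, 1, 3, 1, 0, 1, 4, 1, 0, 2, 5, 2, 0]
assert count_repetitions(trace, 2) == 3
```

The threshold suppresses small tremors between repetitions; in a real system it would be calibrated per patient and per exercise rather than fixed.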
Title: What is a (Digital) Identity Wallet? A Systematic Literature Review
Authors: Blaž Podgorelec, Lukas Alber, Thomas Zefferer
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00131
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: Identity management is crucial for any electronic service that needs to authenticate its users. Different identity-management models have been introduced and rolled out on a large scale during the past decades. Key distinguishing criteria of these models are the storage location of users' identity data and the degree of involvement of central entities such as identity providers, which can potentially track user behavior. Growing privacy awareness has led to a renaissance of user-centric identity-management models during the past few years. In this context, the concept of wallets applied to the digital identity domain has recently attracted particular attention, putting users into direct control of their identity data. Various approaches and solutions relying on this concept have been introduced recently. However, no generally accepted definitions of the concept “digital identity wallet” and of its related features and implementations exist so far, leading to considerable confusion in this domain. This paper addresses this issue by providing a systematic literature review on wallets applied to the digital identity domain to identify, analyze, and compare existing definitions, features, and capabilities of such solutions. By means of two research questions, this paper thereby contributes to a better understanding of identity wallets and the various recent developments in this domain.
Title: Failure modes and failure mitigation in GPGPUs: a reference model and its application
Authors: Francesco Terrosi, A. Ceccarelli, A. Bondavalli
Pub Date: 2022-06-01 | DOI: 10.1109/COMPSAC54236.2022.00018
Published in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC)

Abstract: General-purpose GPUs (GPGPUs) are highly susceptible to both transient and permanent faults. This is a serious concern for their safe and reliable use in many domains, from autonomous driving to high-performance computing. The research and industrial communities have responded vigorously to this issue by analyzing the impact of failures and devising failure mitigation strategies, leading to the definition of several failure modes and mitigation approaches. Unfortunately, these are often based on different foundations, and it is not easy to position them in a consistent view. This work elaborates a GPGPU failure model, identifying relations between GPGPU failure modes and components, and then analyzes the mitigations proposed in the literature. By proposing a unified view of failures and mitigations, the resulting model (i) positions each piece of research on the subject, (ii) makes current gaps easy to identify, and (iii) sets the basis for further research on GPGPU failures.