An empirical analysis of LADA diabetes case, control and variable importance
A. Miller, John Panneerselvam, Lu Liu. doi:10.1145/3492323.3495632
Latent Autoimmune Diabetes in Adults (LADA) is a condition that is rarely recognised as a complex disease in its own right and remains under-researched. Although overshadowed by Type 1 and Type 2 diabetes, LADA is the second most prevalent form of diabetes after Type 2. This paper investigates conventional (clinical and socio-demographic) risk factors, including age, gender, BMI (Body Mass Index), cholesterol, waist size and family history, to determine their respective predictive power for the classification of LADA. These conventional factors are analysed and modelled using a set of supervised machine-learning algorithms: Support Vector Machines with a Radial Basis Function kernel (SVM), Random Forest (RF), K-Nearest Neighbour (KNN), Monotone Multi-Layer Perceptron Neural Network (MONMLP), Neuralnet (NN) and the Naïve Bayes (NB) classifier. The results demonstrate that the Neuralnet classifier delivers the strongest predictive performance, achieving a classification accuracy of 85.51%, sensitivity of 84.09% and specificity of 86.93%, alongside a precision of 86.93%, a recall of 84.53% and an F1 score of 85.71%, thereby outperforming the other studied models. Further analysis of variable importance determined that waist size is the most significant variable for LADA classification under the Neuralnet classifier, with 100% relative importance.
{"title":"An empirical analysis of LADA diabetes case, control and variable importance","authors":"A. Miller, John Panneerselvam, Lu Liu","doi":"10.1145/3492323.3495632","DOIUrl":"https://doi.org/10.1145/3492323.3495632","url":null,"abstract":"Latent Autoimmune Diabetes in Adults (LADA) is a condition, which is rarely recognised as a complex disease within its own right and remains under researched. Completely over-shadowed by Type 1 and Type 2 diabetes, LADA is the second most prevalent genre of diabetes after Type 2. This paper investigates conventional (clinical and socio-demographic) risk factors including Age, Gender, BMI (Body Mass Index), Cholesterol, Waist Size and Family History, with the motivation of determining their respective significant predictive power in the classification of LADA Diabetes. Such conventional factors are analysed and modelled using a set of supervised machine-learning algorithms including Support Vector Machines with Radial Basis Function Kernel (SVM), Random Forest (RF), K-Nearest Neighbour (KNN), Monotone Multi-Layer Perceptron Neural Network (MONMLP), Neural-net (NN) and Naïve Bayes (NB) Classifier, with the objective of correctly classifying LADA diabetes. Results elucidated from the analysis demonstrate that the predictive capacity of the learning models is significantly enhanced with the utilisation of Neuralnet classifier, achieving a classification accuracy of 85.51%, sensitivity of 84.09%, and specificity of 86.93%, alongside a precision of 86.93%, a recall of 84.53% and an F1 score of 85.71%, thereby outperforming the other studied individual models. Further analysis on the variable importance determined that the conventional variable Waist Size is the most significant variable when using the Neuralnet classifier with a 100% importance for LADA diabetes classification.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114674672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-balancing architectures based on liquid functions across computing continuums
Josef Spillner. doi:10.1145/3492323.3495589
Scalable application development is strongly influenced by two major trends: serverless computing and continuum computing. These trends have so far had little intersection, as most application architectures, even when following a microservices or function-based approach, are built around rather monolithic Function-as-a-Service engines that do not span continuums. Functions are thus separated code-wise but not infrastructure-wise, as they continue to run on the single platform they were deployed to. Moreover, developing and deploying distributed applications remains non-trivial and is a hurdle for enhancing the capabilities of mobile and sensing domains. To overcome this limitation, the concept of self-balancing architectures is introduced, in which liquid functions traverse cloud and edge/fog platforms in a continuum as needed, governed by the abstract notion of pressure relief valves based on resource capacities, function execution durations and optimisation preferences. CoRFu, a reference implementation of a continuum-wide distributed Function-as-a-Service engine, is introduced and combined with a dynamic function offloading framework. The implementation is validated with a sensor data inference and regression application.
{"title":"Self-balancing architectures based on liquid functions across computing continuums","authors":"Josef Spillner","doi":"10.1145/3492323.3495589","DOIUrl":"https://doi.org/10.1145/3492323.3495589","url":null,"abstract":"Scalable application development is highly influenced by two major trends - serverless computing and continuum computing. These trends have had little intersection, as most application architectures, even when following a microservices or function-based approach, are built around rather monolithic Function-as-a-Service engines that do not span continuums. Functions are thus separated code-wise but not infrastructure-wise, as they continue to run on the same single platform they have been deployed to. Moreover, developing and deploying distributed applications remains non-trivial and is a hurdle for enhancing the capabilities of mobile and sensing domains. To overcome this limitation, the concept of self-balancing architectures is introduced in which liquid functions traverse cloud and edge/fog platforms in a continuum as needed, represented by the abstract notion of pressure relief valves based on resource capacities, function execution durations and optimisation preferences. With CoRFu, a reference implementation of a continuum-wide distributed Function-as-a-Service engine is introduced and combined with a dynamic function offloading framework. The implementation is validated with a sensor data inference and regression application.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"53 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116842324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OAuth 2.0-based authentication solution for FPGA-enabled cloud computing
Semih Ince, D. Espès, G. Gogniat, Julien Lallet, R. Santoro. doi:10.1145/3492323.3495635
FPGA-enabled cloud computing is becoming increasingly common as cloud providers offer hardware-accelerated solutions. In this context, clients need confidential remote computing: intellectual property and data are being used and communicated, yet current security models require the client to trust the cloud provider blindly by disclosing sensitive information. In addition, the lack of strong authentication and access-control mechanisms for both the client and the provided FPGA in current solutions is a major security drawback. To enhance security and privacy between the client, the cloud provider and the FPGA, an additional entity is introduced: the trusted authority. Its role is to authenticate the client-FPGA pair and isolate it from the cloud provider. With our novel OAuth 2.0-based access delegation solution for FPGA-accelerated clouds, a remote confidential FPGA environment with token-based access can be created for the client. Our solution makes it possible to manage and securely allocate heterogeneous resource pools with enhanced privacy and confidentiality for the client. Our formal analysis shows that the protocol adds very little latency, making it suitable for real-time applications.
{"title":"OAuth 2.0-based authentication solution for FPGA-enabled cloud computing","authors":"Semih Ince, D. Espès, G. Gogniat, Julien Lallet, R. Santoro","doi":"10.1145/3492323.3495635","DOIUrl":"https://doi.org/10.1145/3492323.3495635","url":null,"abstract":"FPGA-enabled cloud computing is getting more and more common as cloud providers offer hardware accelerated solutions. In this context, clients need confidential remote computing. However Intellectual Properties and data are being used and communicated. So current security models require the client to trust the cloud provider blindly by disclosing sensitive information. In addition, the lack of strong authentication and access control mechanisms, for both the client and the provided FPGA in current solutions, is a major security drawback. To enhance security measures and privacy between the client, the cloud provider and the FPGA, an additional entity needs to be introduced: the trusted authority. Its role is to authenticate the client-FPGA pair and isolate them from the cloud provider. With our novel OAuth 2.0-based access delegation solution for FPGA-accelerated clouds, a remote confidential FPGA environment with a token-based access can be created for the client. Our solution allows to manage and securely allocate heterogeneous resource pools with enhanced privacy & confidentiality for the client. Our formal analysis shows that our protocol adds a very small latency which is suitable for real-time application.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123735613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain-based distributed platform for accountable medical data sharing
A. Khan, A. Anjum. doi:10.1145/3492323.3503506
In recent years, blockchain has been widely studied and applied as a solution to various healthcare challenges associated with legacy systems. The availability of a trusted healthcare ecosystem for accountable medical data sharing remains a problem. This paper discusses the potential applications of blockchain in healthcare and proposes a blockchain-based framework to facilitate health data availability and sharing. It identifies the implementation challenges of such a system and discusses their relationship with blockchain's intrinsic design and characteristics. Finally, the paper delineates the future research directions required to realise a blockchain-based platform for accountable medical data management and sharing.
{"title":"Blockchain-based distributed platform for accountable medical data sharing","authors":"A. Khan, A. Anjum","doi":"10.1145/3492323.3503506","DOIUrl":"https://doi.org/10.1145/3492323.3503506","url":null,"abstract":"In the recent years, blockchain has been widely studied and applied as a solution to address various healthcare challenges associated with the legacy systems. Availability of a trusted healthcare ecosystem for accountable medical data sharing still remains a problem. This paper discusses the potential applications of blockchain in healthcare and proposes a blockchain-based framework to facilitate health data availability and sharing. It identifies the implementation challenges of such a system and discusses their relationship with blockchain's intrinsic design and characteristics. In the end, this paper delineates the future research directions required to overcome the challenges in realizing a blockchain-based platform for accountable medical data management and sharing.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"79 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114043639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Client layer becomes bottleneck: workload analysis of an ultra-large-scale cloud storage system
Xiaoyi Sun, Kai Li, Yaodanjun Ren, Jiale Lin, Zhenyu Ren, Shuzhi Feng, Jian Yin, Zhengwei Qi. doi:10.1145/3492323.3495625
Recent years have witnessed the fast development of file and storage systems. Many improvements to these systems are inspired by workload analysis, which reveals the characteristics of I/O behaviour. Although cloud storage systems are becoming increasingly prominent, few real-world, large-scale cloud storage workload studies have been published. Alibaba Cloud is one of the world's largest cloud providers, and we have collected and analysed its workloads over an extended period. We observe that the modern cloud network architecture easily handles the peak load during busy festivals; instead, the client layer is the system bottleneck during peak periods, which calls for further optimisation. We also find that the workload is heavily skewed toward a small percentage of virtual disks, with a distribution that conforms to the 80/20 rule. In summary, the characteristics of such a large-scale cloud storage system in a production environment provide important guidance for future cloud storage system improvements.
{"title":"Client layer becomes bottleneck: workload analysis of an ultra-large-scale cloud storage system","authors":"Xiaoyi Sun, Kai Li, Yaodanjun Ren, Jiale Lin, Zhenyu Ren, Shuzhi Feng, Jian Yin, Zhengwei Qi","doi":"10.1145/3492323.3495625","DOIUrl":"https://doi.org/10.1145/3492323.3495625","url":null,"abstract":"Recent years have witnessed the fast development of file and storage systems. Many improvements of file and storage systems are inspired by Workload analysis, which reveals the characteristics of I/O behavior. Although cloud storage systems are becoming increasingly prominent, few real-world and large-scale cloud storage workload studies are presented. Alibaba Cloud is one of the world's largest cloud providers, and we have collected and analyzed workloads from Alibaba for an extended period. We observe that modern cloud network architecture can easily handle the peak load during busy festivals. However, the client layer is the system bottleneck during the peak period, which calls for further optimization. We also find that the workload is heavily skewed toward a small percentage of virtual disks, and its distribution conforms 80/20 rule. In summary, the characteristics of such a large-scale cloud storage system in production environments are important for future cloud storage system modifications.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131890754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating the capacities of function-as-a-service functions
Anshul Jindal, Mohak Chadha, S. Benedict, M. Gerndt. doi:10.1145/3492323.3495628
Serverless computing is a cloud computing paradigm that allows developers to focus exclusively on business logic while cloud service providers handle resource management. Serverless applications follow this model: the application is decomposed into a set of fine-grained Function-as-a-Service (FaaS) functions. However, the opacity of the underlying system infrastructure and the dependencies between FaaS functions within an application make it challenging to estimate the performance of individual functions. To characterise the user-relevant performance of a FaaS function, we define Function Capacity (FC) as the maximal number of concurrent invocations the function can serve within a given time without violating its Service-Level Objective (SLO). This paper addresses the challenge of quantifying the FC individually for each FaaS function within a serverless application, which is done by sandboxing a FaaS function and building its performance model. To this end, we develop FnCapacitor, an end-to-end automated Function Capacity estimation tool, and demonstrate it on Google Cloud Functions (GCF) and AWS Lambda. FnCapacitor estimates FCs under different deployment configurations (allocated memory and maximum function instances) by conducting time-framed load tests and building models on the acquired performance data using statistical methods (linear, ridge and polynomial regression) as well as Deep Neural Networks (DNNs). Our evaluation on several FaaS functions shows fairly accurate predictions, with accuracy greater than 75% using DNNs for both cloud providers.
{"title":"Estimating the capacities of function-as-a-service functions","authors":"Anshul Jindal, Mohak Chadha, S. Benedict, M. Gerndt","doi":"10.1145/3492323.3495628","DOIUrl":"https://doi.org/10.1145/3492323.3495628","url":null,"abstract":"Serverless computing is a cloud computing paradigm that allows developers to focus exclusively on business logic as cloud service providers manage resource management tasks. Serverless applications follow this model, where the application is decomposed into a set of fine-grained Function-as-a-Service (FaaS) functions. However, the obscurities of the underlying system infrastructure and dependencies between FaaS functions within the application pose a challenge for estimating the performance of FaaS functions. To characterize the performance of a FaaS function that is relevant for the user, we define Function Capacity (FC) as the maximal number of concurrent invocations the function can serve in a time without violating the Service-Level Objective (SLO). The paper addresses the challenge of quantifying the FC individually for each FaaS function within a serverless application. This challenge is addressed by sandboxing a FaaS function and building its performance model. To this end, we develop FnCapacitor - an end-to-end automated Function Capacity estimation tool. We demonstrate the functioning of our tool on Google Cloud Functions (GCF) and AWS Lambda. FnCapacitor estimates the FCs on different deployment configurations (allocated memory & maximum function instances) by conducting time-framed load tests and building various models using statistical: linear, ridge, and polynomial regression, and Deep Neural Network (DNN) methods on the acquired performance data. Our evaluation of different FaaS functions shows relatively accurate predictions with an accuracy greater than 75% using DNN for both cloud providers.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114941346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MDSC: modelling distributed stream processing across the edge-to-cloud continuum
Daniel Balouek-Thomert, Pedro Silva, Kevin Fauvel, Alexandru Costan, Gabriel Antoniu, M. Parashar. doi:10.1145/3492323.3495590
The growth of the Internet of Things is resulting in an explosion of data volumes at the Edge of the Internet. To reduce the costs incurred by data movement and centralized cloud-based processing, it is becoming increasingly important to process and analyze such data closer to the data sources. Exploiting Edge computing capabilities for stream-based processing is, however, challenging: it requires addressing the complex characteristics and constraints imposed by all the resources along the data path, as well as the large set of heterogeneous data processing and management frameworks. Consequently, the community needs tools that can facilitate the modeling of this complexity and can integrate the various components involved. In this work, we introduce MDSC, a hierarchical approach for modeling distributed stream-based applications on Edge-to-Cloud continuum infrastructures. We demonstrate how MDSC can be applied to a concrete, real-life ML-based application, early earthquake warning, to help answer questions such as: when is it worth decentralizing the classification load from the Cloud to the Edge, and how?
{"title":"MDSC: modelling distributed stream processing across the edge-to-cloud continuum","authors":"Daniel Balouek-Thomert, Pedro Silva, Kevin Fauvel, Alexandru Costan, Gabriel Antoniu, M. Parashar","doi":"10.1145/3492323.3495590","DOIUrl":"https://doi.org/10.1145/3492323.3495590","url":null,"abstract":"The growth of the Internet of Things is resulting in an explosion of data volumes at the Edge of the Internet. To reduce costs incurred due to data movement and centralized cloud-based processing, it is becoming increasingly important to process and analyze such data closer to the data sources. Exploiting Edge computing capabilities for stream-based processing is however challenging. It requires addressing the complex characteristics and constraints imposed by all the resources along the data path, as well as the large set of heterogeneous data processing and management frameworks. Consequently, the community needs tools that can facilitate the modeling of this complexity and can integrate the various components involved. In this work, we introduce MDSC, a hierarchical approach for modeling distributed stream-based applications on Edge-to-Cloud continuum infrastructures. We demonstrate how MDSC can be applied to a concrete real-life ML-based application - early earthquake warning - to help answer questions such as: when is it worth decentralizing the classification load from the Cloud to the Edge and how?","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120898093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A short survey on deep learning for skeleton-based action recognition
Wei Wang, Yudong Zhang. doi:10.1145/3492323.3495571
Motion recognition is an essential aspect of computer vision, used in a wide range of fields, and has received much attention as one of the most popular research topics. Traditional motion recognition studies are mainly based on RGB images and videos, but the lighting and viewpoint of RGB data can easily affect model performance. Skeleton sequences, the most common type of coordinate data, avoid these problems. Therefore, a growing body of research combines skeleton sequences with deep learning to solve action recognition problems, with impressive results. In particular, the recent rapid emergence of GCN methods, which fit the characteristics of skeletal data well, offers a promising future for action recognition based on skeleton sequences. In this paper, we first introduce the acquisition of skeletal data and some common datasets, then summarise research in the field of skeleton-sequence-based action recognition, and briefly discuss future directions for this kind of research.
{"title":"A short survey on deep learning for skeleton-based action recognition","authors":"Wei Wang, Yudong Zhang","doi":"10.1145/3492323.3495571","DOIUrl":"https://doi.org/10.1145/3492323.3495571","url":null,"abstract":"Motion recognition is an essential aspect of computer vision used in a wide range of fields and has received much attention as one of the most popular research topics. Traditional motion recognition studies are mainly based on RGB images and videos, but the lighting and viewpoint of RGB data can easily affect the model performance. Skeleton sequences are the most common type of coordinate data and avoid these problems. Therefore, more and more research has been conducted to combine skeleton sequences with deep learning to solve action recognition problems, and awe-inspiring results have been obtained. In particular, the recent rapid emergence of GCN methods, which fit well with the characteristics of skeletal data, offers a promising future for action recognition based on skeletal sequences. In this paper, we first introduce the acquisition of skeletal data and some common datasets, summarise some of the research in the field of skeletal sequence-based action recognition, and briefly discuss the future directions of this kind of research.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132713663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: 1st Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC)","authors":"","doi":"10.1145/3517186","DOIUrl":"https://doi.org/10.1145/3517186","url":null,"abstract":"","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133523110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
General data protection regulation: an individual's perspective
D. Marikyan, S. Papagiannidis, R. Ranjan, O. Rana. doi:10.1145/3492323.3495620
Rapid digitalisation has resulted in a massive exchange of digital data between individuals and organisations, raising the importance of privacy-preserving legal frameworks such as the General Data Protection Regulation (GDPR). Despite the importance of such frameworks, current research lacks evidence about how individuals perceive GDPR compliance. The objective of this study was therefore to explore individuals' attitudes towards GDPR compliance through the lens of Protection Motivation Theory. The study employed a cross-sectional research design and collected 540 valid responses to test a model using structural equation modelling. The analysis showed that perceived threat severity, response efficacy and self-efficacy have positive relationships with attitude towards GDPR compliance; in addition, attitude correlates with emotional empowerment. The findings contribute to the literature on privacy-preserving mechanisms by shedding light on individuals' perceptions of the GDPR, and add to the literature on information systems management by giving insights into the factors that determine the utilisation of privacy-preserving technologies. The evidence also offers implications for policymakers, providing guidelines on how to communicate the benefits of the GDPR to the public.
{"title":"General data protection regulation: an individual's perspective","authors":"D. Marikyan, S. Papagiannidis, R. Ranjan, O. Rana","doi":"10.1145/3492323.3495620","DOIUrl":"https://doi.org/10.1145/3492323.3495620","url":null,"abstract":"Rapid digitalisation has resulted in a massive exchange of digital data between individuals and organisations, accelerating the importance of privacy-preserving legal frameworks, such as the General Data Protection Regulation (GDPR). Despite the importance of the implementation of such a framework, current research lacks evidence about how individuals perceive GDPR compliance. Given that, the objective of this study was to explore individuals' attitudes towards GDPR compliance in line with Protection Motivation Theory. This study employed a cross-sectional research design and collected 540 valid responses to test a model using structural equational modelling. The result of the analysis showed that perceived threat severity, response efficacy and self-efficacy have positive relationships with attitude towards GDPR compliance. In addition, it was found that attitude correlates with emotional empowerment. The findings of this paper contribute to the literature on privacy-preserving mechanisms by shedding light on individuals' perceptions of the GDPR. The evidence also adds to the current body of literature on information systems management by giving insights into the factors that determine the utilisation of privacy-preserving technologies. These pieces of evidence offer implications for policymakers by providing guidelines on how to communicate the benefits of the GDPR to the public.","PeriodicalId":440884,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing Companion","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133658158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}