Most vehicles use the controller area network bus for communication between their components. Attackers who have already penetrated the in-vehicle network often utilize this bus to take control of safety-relevant components of the vehicle. Such targeted attack scenarios are often hard for network intrusion detection systems to detect because the specific payload is usually not contained in their training data sets. In this work, we describe an intrusion detection system that uses decision trees modelled through genetic programming. We evaluate the advantages and disadvantages of this approach compared to artificial neural networks and rule-based approaches. For this, we model and simulate specific targeted attacks as well as several types of intrusions described in the literature. The results show that the genetic programming approach is well suited to identifying intrusions involving complex relationships between sensor values, which we consider important for the classification of specific targeted attacks. However, the system is less efficient at classifying other types of attacks, which are better identified by the alternative methods in our evaluation. Further research could thus consider hybrid approaches.
Florian Fenzl, R. Rieke, Andreas Dominik. "In-vehicle detection of targeted CAN bus attacks." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3465755
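The core idea of classifying CAN traffic with an evolved decision tree can be illustrated with a deliberately small sketch. The signal names, the toy data, and the hill-climbing loop standing in for a full genetic-programming run are all assumptions for illustration, not the authors' implementation.

```python
import random

# Toy labeled dataset of decoded CAN signals (hypothetical fields):
# (speed_kmh, brake_pressure, label) where label 1 = intrusion.
DATA = [
    (30.0, 0.0, 0), (120.0, 0.9, 1), (50.0, 0.1, 0), (110.0, 0.8, 1),
    (20.0, 0.05, 0), (90.0, 0.7, 1), (60.0, 0.2, 0), (100.0, 0.75, 1),
]

# A "tree" is either a leaf label (int) or (feature_index, threshold, left, right).
def predict(tree, row):
    if not isinstance(tree, tuple):
        return tree
    feat, thr, left, right = tree
    return predict(left if row[feat] <= thr else right, row)

def fitness(tree):
    # Fraction of correctly classified frames.
    return sum(predict(tree, r) == r[2] for r in DATA) / len(DATA)

def mutate(tree):
    # Minimal GP-style mutation: perturb a threshold somewhere in the tree.
    if not isinstance(tree, tuple):
        return tree
    feat, thr, left, right = tree
    choice = random.random()
    if choice < 0.5:
        return (feat, thr + random.uniform(-10, 10), left, right)
    elif choice < 0.75:
        return (feat, thr, mutate(left), right)
    return (feat, thr, left, mutate(right))

random.seed(1)
best = (1, 0.5, 0, 1)  # brake_pressure > 0.5 -> flag as intrusion
for _ in range(50):    # hill climbing as a stand-in for full GP evolution
    cand = mutate(best)
    if fitness(cand) >= fitness(best):
        best = cand
print(fitness(best))   # → 1.0
```

A full genetic-programming system would additionally use a population, crossover between trees, and mutation of features and leaves, but the fitness-driven search over tree structures is the same principle.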
Against the background of the novel coronavirus and its far-reaching impact on our everyday life, numerous initiatives around the globe are working on the design and implementation of services related to certificates containing information about the vaccination, testing and/or recovery status of citizens (“Vaccination Certificates”). Due to the distributed and largely independent development under high time pressure, there is a risk that the resulting services for the creation, presentation and verification of these Vaccination Certificates will ultimately not be interoperable. To contribute to the mitigation of this risk, the present paper creates a compact overview of the relevant underlying technologies and an up-to-date survey of the most relevant initiatives around the globe, before elucidating the system requirements for Vaccination Certificate Services and then outlining a technical reference architecture accordingly. This reference architecture, which is based as far as possible on open standards, seeks to integrate all relevant existing and emerging approaches and hence may facilitate well-grounded discussions, the exchange of ideas between the different communities, and the harmonization of specifications and related schema artifacts in this area. The present contribution concludes with an outlook towards future developments, including a long-term perspective on the integration of the Vaccination Certificate Services with electronic health records and data exchange infrastructures supporting the International Patient Summary.
A. Corici, Tina Hühnlein, D. Hühnlein, Olaf Rode. "Towards Interoperable Vaccination Certificate Services." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3470035
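To make the create-present-verify flow concrete, here is a toy sign-then-verify sketch for a certificate payload. It uses a symmetric HMAC purely for illustration; real vaccination-certificate frameworks use asymmetric signatures in formats such as COSE or JWS, and the field names and key below are invented.

```python
import base64
import hashlib
import hmac
import json

def b64(data: bytes) -> str:
    # base64url without padding, as commonly used in token formats.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

SECRET = b"issuer-demo-key"  # a real system uses the issuer's private key

def issue(payload: dict) -> str:
    # Serialize the payload deterministically and attach an integrity tag.
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return b64(body) + "." + b64(tag)

def verify(token: str) -> bool:
    body_b64, tag_b64 = token.split(".")
    pad = lambda s: s + "=" * (-len(s) % 4)
    body = base64.urlsafe_b64decode(pad(body_b64))
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    return hmac.compare_digest(base64.urlsafe_b64decode(pad(tag_b64)), expected)

cert = issue({"name": "Jane Doe", "vaccinated": True, "doses": 2})
assert verify(cert)

# Swapping in a different body invalidates the tag.
forged = b64(json.dumps({"vaccinated": True}).encode()) + "." + cert.split(".")[1]
assert not verify(forged)
```

Interoperability, the paper's concern, then amounts to all parties agreeing on the payload schema, the encoding, and the signature format of such tokens.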
Charilaos C. Zarakovitis, D. Klonidis, Zujany Salazar, A. Prudnikova, Arash Bozorgchenani, Q. Ni, C. Klitis, George Guirgis, A. Cavalli, Nicholas Sgouros, Eftychia Makri, Antonios Lalas, K. Votis, George Amponis, Wissam Mallouli
Security, Trust and Reliability are crucial issues in mobile 5G networks from both hardware and software perspectives. These issues are of significant importance when considering implementations over distributed environments, i.e., corporate Cloud environments over massively virtualized infrastructures as envisioned in the 5G service provision paradigm. The SANCUS solution intends to provide a modular framework integrating different engines that enable next-generation 5G system networks to perform automated and intelligent analysis of their firmware images at massive scale, as well as the validation of applications and services. SANCUS also proposes a proactive risk assessment of network applications and services by means of maximising the overall system resilience in terms of security, privacy and reliability. This paper presents an overview of the SANCUS architecture in its current release as well as the pilot use cases that will be demonstrated at the end of the project and used for validating the concepts.
Charilaos C. Zarakovitis, D. Klonidis, Zujany Salazar, A. Prudnikova, Arash Bozorgchenani, Q. Ni, C. Klitis, George Guirgis, A. Cavalli, Nicholas Sgouros, Eftychia Makri, Antonios Lalas, K. Votis, George Amponis, Wissam Mallouli. "SANCUS: Multi-layers Vulnerability Management Framework for Cloud-native 5G networks." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3470092
Micro- and Small Enterprises (MSEs) and the persons working there (owners/managers, employees) are often neglected in policies and initiatives concerning cybersecurity and data privacy. Communication strategies target IT departments or IT specialists - most MSEs have neither. The Horizon 2020 project GEIGER wants to address this problem by providing a cybersecurity monitoring solution that can be used by IT laypersons. In addition to an easy-to-use software tool focusing on the monitoring of imminent cyber threats, GEIGER develops an Education Ecosystem, which approaches these target groups at different levels: from regular employees, who cannot or don't want to deal extensively with cybersecurity, to designated persons (internal or external), who are made responsible for monitoring the functioning of GEIGER in a company. To take full account of this, the competence levels of individuals and their development are part of the data structure of the GEIGER monitoring. Hence, it also includes automated recommendations to follow certain training sequences included in GEIGER or from other sources. To define the different levels of competence in cybersecurity and their development, to propose adequate learning objectives, and to design pertinent learning materials, GEIGER has elaborated a curriculum. The structure of this curriculum follows the conditions and requirements given by the general situation of security threats and learning scenarios in MSEs. It has three main dimensions: ‘levels’ that reflect competence development within MSE-specific learning environments; ‘pillars’ that reflect the GEIGER-specific topical differentiation between general cybersecurity and handling and communicating GEIGER functions; and object ‘layers’ that reflect specific cybersecurity threats as they appear to the IT-lay target groups in MSEs.
To allow for interoperability of the educational parts of GEIGER, the competences of the GEIGER curriculum are written in the form of xAPI statements, a specific metadata format for learning achievements.
Bernd Remmele, Jessica Peichl. "Structuring a Cybersecurity Curriculum for Non-IT Employees of Micro- and Small Enterprises." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3469198
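An xAPI statement expresses a learning achievement as an actor-verb-object triple. The sketch below builds one for a hypothetical GEIGER training unit: the activity ID and e-mail address are invented for illustration, while the verb URI is a standard ADL vocabulary entry.

```python
import json

# Minimal xAPI statement: an employee completed a (hypothetical) GEIGER
# curriculum unit. Only actor, verb and object are required by the spec.
statement = {
    "actor": {
        "objectType": "Agent",
        "mbox": "mailto:employee@example-mse.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/geiger/curriculum/phishing-basics",
        "definition": {
            "name": {"en-US": "Phishing basics for MSE employees"},
        },
    },
}

assert all(k in statement for k in ("actor", "verb", "object"))
print(json.dumps(statement, indent=2))
```

A Learning Record Store receiving such statements can then track an individual's competence development across the curriculum's levels, pillars and layers.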
Patrick Kochberger, S. Schrittwieser, Stefan Schweighofer, Peter Kieseberg, E. Weippl
Malware authors often rely on code obfuscation to hide the malicious functionality of their software, making detection and analysis more difficult. One of the most advanced techniques for binary obfuscation is virtualization-based obfuscation, which converts the functionality of a program into the bytecode of a randomly generated virtual machine that is embedded into the protected program. To enable the automatic detection and analysis of protected malware, new deobfuscation techniques against virtualization-based obfuscation are constantly being developed and proposed in the literature. In this work, we systematize existing knowledge of automatic deobfuscation of virtualization-protected programs in a novel classification scheme and evaluate where we stand in the arms race between malware authors and code analysts with regard to virtualization-based obfuscation. In addition to a theoretical discussion of different types of deobfuscation methodologies, we present an in-depth practical evaluation that compares state-of-the-art virtualization-based obfuscators with currently available deobfuscation tools. The results clearly indicate the possibility of automatic deobfuscation of virtualization-based obfuscation in specific scenarios. At the same time, however, the results highlight limitations of existing deobfuscation methods. Multiple challenges still lie ahead on the way towards reliable and resilient automatic deobfuscation of virtualization-based obfuscation.
Patrick Kochberger, S. Schrittwieser, Stefan Schweighofer, Peter Kieseberg, E. Weippl. "SoK: Automatic Deobfuscation of Virtualization-protected Applications." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3465772
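The principle behind virtualization-based obfuscation can be shown in miniature: the sketch below translates a trivial program into bytecode for a randomly generated VM, so that a static analyst sees only the interpreter and an opaque opcode table. This is a didactic toy, not representative of the commercial obfuscators evaluated in the paper.

```python
import random

# Randomly generate the VM: assign each operation a random opcode byte,
# so every "protected" binary would ship a different mapping.
random.seed(7)
ops = ["PUSH", "ADD", "RET"]
opcode = dict(zip(random.sample(range(256), len(ops)), ops))

def interpret(bytecode):
    # A tiny stack machine: the only code a reverse engineer sees directly.
    stack, i = [], 0
    while i < len(bytecode):
        op = opcode[bytecode[i]]
        if op == "PUSH":
            stack.append(bytecode[i + 1]); i += 2
        elif op == "ADD":
            b, a = stack.pop(), stack.pop(); stack.append(a + b); i += 1
        elif op == "RET":
            return stack.pop()

# The original program "2 + 3", now expressed only in VM bytecode.
inv = {v: k for k, v in opcode.items()}
program = [inv["PUSH"], 2, inv["PUSH"], 3, inv["ADD"], inv["RET"]]
assert interpret(program) == 5
```

Deobfuscation tools must recover the opcode semantics (here, the `opcode` table) before the original logic becomes analyzable, which is what makes the technique costly to reverse at scale.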
Passwords are a problem in today's digital world. FIDO2, through WebAuthn, brought alternative password-less authentication for web applications and services that is more usable and secure than classic password-based systems. In this work, we give a brief overview of FIDO2, and we present WebDevAuthn, a novel web tool for analysing FIDO2/WebAuthn requests and responses. This tool can help developers understand how FIDO2 works, aid the development process by speeding up debugging through the WebAuthn traffic analyser, and test the security of an application through penetration testing by editing the WebAuthn requests or responses.
A. Grammatopoulos, Ilias Politis, C. Xenakis. "A web tool for analyzing FIDO2/WebAuthn Requests and Responses." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3469209
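A core step in any WebAuthn request/response analyser is decoding the clientDataJSON that accompanies an authenticator response. The sketch below shows that decoding on a synthetic payload; the challenge and origin values are invented, and WebDevAuthn's actual internals may differ.

```python
import base64
import json

def b64url_decode(data: str) -> bytes:
    # base64url without padding, as WebAuthn payloads are often serialized.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

# A hypothetical clientDataJSON, as produced during credential creation.
client_data = {
    "type": "webauthn.create",
    "challenge": "dGVzdC1jaGFsbGVuZ2U",
    "origin": "https://example.org",
}
encoded = base64.urlsafe_b64encode(
    json.dumps(client_data).encode()).rstrip(b"=").decode()

# What an analyser does with captured traffic: decode and inspect.
decoded = json.loads(b64url_decode(encoded))
assert decoded["type"] == "webauthn.create"
print(decoded["origin"])  # → https://example.org
```

Inspecting the decoded `type`, `challenge` and `origin` fields is also what makes tampering experiments possible: a penetration tester can re-encode an edited payload and observe whether the relying party rejects it.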
Panagiotis Bountakas, Konstantinos Koutroumpouchos, C. Xenakis
Phishing is the most common malicious attempt in which attackers, usually via emails, impersonate trusted persons or entities to obtain private information from a victim. Even though phishing email attacks have been a known cybercriminal strategy for decades, their use has expanded over the last couple of years due to the COVID-19 pandemic, in which attackers exploit people’s consternation to lure victims. Therefore, further research is needed in the field of phishing email detection. Recent phishing email detection solutions that extract representational text-based features from the email’s body have proved to be an appropriate strategy to tackle these threats. This paper compares the combined usage of Natural Language Processing (TF-IDF, Word2Vec, and BERT) and Machine Learning (Random Forest, Decision Tree, Logistic Regression, Gradient Boosting Trees, and Naive Bayes) methods for phishing email detection. The evaluation was performed on two datasets, one balanced and one imbalanced, both comprising emails from the well-known Enron corpus and the most recent emails from the Nazario phishing corpus. The best combination on the balanced dataset proved to be Word2Vec with the Random Forest algorithm, while on the imbalanced dataset it was Word2Vec with the Logistic Regression algorithm.
Panagiotis Bountakas, Konstantinos Koutroumpouchos, C. Xenakis. "A Comparison of Natural Language Processing and Machine Learning Methods for Phishing Email Detection." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3469205
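To illustrate the kind of text-based features such NLP methods produce, here is a from-scratch TF-IDF computation on a toy corpus standing in for email bodies. The documents are invented, and the paper's actual pipelines (and its Word2Vec and BERT variants) are not reproduced here.

```python
import math
from collections import Counter

# Toy stand-ins for email bodies (not the Enron/Nazario data).
docs = [
    "urgent verify your account password now",
    "meeting notes attached for review",
    "verify account details to avoid suspension",
    "lunch plans for friday team meeting",
]

def tf_idf(docs):
    # TF = term frequency within a document; IDF = log(N / document
    # frequency), so terms concentrated in few documents score higher.
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

vecs = tf_idf(docs)
# "urgent" appears in 1 of 4 docs, "verify" in 2 of 4, so "urgent"
# carries more weight in the first (phishing-like) message.
assert vecs[0]["urgent"] > vecs[0]["verify"]
```

These sparse weight vectors are what a downstream classifier such as Random Forest or Logistic Regression consumes; dense embeddings like Word2Vec replace this step with learned vectors.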
P. Nowakowski, Piotr Żórawski, Krzysztof Cabaj, W. Mazurczyk
Information hiding in communication networks has recently been gaining increased attention from the security community. This is because such techniques are a double-edged sword that, on the one hand, can be used, e.g., to enhance the privacy of Internet users, while on the other can be utilized by malware developers to enable covert communication features in malicious software. This means that, to understand the risks that data hiding poses, it is of utmost importance to study the inner workings of potential information hiding methods and accompanying mechanisms (e.g., those that provide reliability of such communications), as well as to develop effective and efficient countermeasures. That is why, in this paper, we perform a systematic experimental evaluation of an error detection and correction scheme suitable for complex network data hiding approaches, i.e., distributed network covert channels (DNCCs). The obtained results prove that the proposed solution guarantees reliable secret communication even under severe networking conditions with up to 20% data corruption, while maintaining a stable covert data rate.
P. Nowakowski, Piotr Żórawski, Krzysztof Cabaj, W. Mazurczyk. "Study of the Error Detection and Correction Scheme for Distributed Network Covert Channels." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3470087
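The abstract does not detail the scheme's internals, so as a stand-in, the sketch below shows the classic Hamming(7,4) code: a minimal example of the kind of error detection and correction a covert channel could layer over an unreliable carrier. It is illustrative only, not the authors' mechanism.

```python
# Hamming(7,4): 4 data bits plus 3 parity bits; corrects any single
# flipped bit in the 7-bit codeword.
def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                  # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                  # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                  # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    # Recompute the parities; the syndrome is the 1-based position of
    # the flipped bit (0 means no detectable error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1           # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]    # extract the data bits

word = [1, 0, 1, 1]
cw = encode(word)
cw[3] ^= 1                             # simulate a corrupted carrier bit
assert decode(cw) == word              # the receiver still recovers it
```

A DNCC additionally has to spread such redundancy across multiple carriers and protocols, which is precisely what makes reliability mechanisms for distributed channels worth studying in their own right.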
Fog computing is a decentralized infrastructure layer between the cloud and edge devices that moves computation closer to the edge, allowing good latency and bandwidth even for large-scale Internet of Things deployments. Still, devices using fog services are exposed to the immediate application environment and potentially malicious users, so security, privacy, and trust are critical issues. To provide trust and privacy within fog infrastructures and enable the secure execution of future Internet of Things services, lightweight collective and distributed attestation mechanisms for the bulk attestation of the edge devices and the fog infrastructure can be used, especially those leveraging Direct Anonymous Attestation, an anonymous attestation signature that allows attesting to the state of the host system without violating the specified privacy of the host. As in all cryptographic schemes, the management and protection of keys is of the highest significance. We present key management for a fog architecture in the context of the RAINBOW fog platform and show how the computations of a recently published proof-of-concept implementation of Direct Anonymous Attestation can be distributed in our specific fog environment.
We provide details on an embedded system-level implementation and performance benchmarks for Internet of Things application keys stored with proper hardware-based protection within a Trusted Platform Module.
Raphael Schermann, Ronald Toegl. "Managing Anonymous Keys in a Fog-Computing Platform." Proceedings of the 16th International Conference on Availability, Reliability and Security, 2021. https://doi.org/10.1145/3465481.3470063
Michail Bampatsikos, Ilias Politis, C. Xenakis, S. Thomopoulos
The Internet of Things has a profound effect on everyday life and on critical vertical services, including healthcare, factories of the future, and intelligent transport systems. The highly distributed nature of such networks and the heterogeneity of the devices that constitute them necessitate that their users be able to trust them at all times. A method to determine a device's service trustworthiness is Trust Management (TM), which assigns scores to devices according to their trustworthiness level, based on evaluations from other entities that have interacted with them. Often, Internet of Things devices that have just joined the network have not interacted with any other entity of that network before, so there is no way to determine their trustworthiness. This is referred to as the cold start, or initial trust score, problem. The majority of trust management approaches address this problem by setting an arbitrary initial trust score, while others ignore it altogether. Assigning arbitrary trust scores to devices connected to the network for the first time can disrupt the operation of the entire system, when a high trust score is assigned to a non-trusted malicious device, or lead to unfair policies, when trusted devices are treated as potential intruders, which also deteriorates the performance of the system. This paper proposes a mechanism that combines the blockchain-based BARRETT remote attestation protocol with a set of device properties and communication and operational context parameters in order to accurately determine and assign the initial trust score for each device. Through a set of extensive simulations over different experimental setups, the proposed scheme safely distributes initial trust scores to one thousand devices in less than 6 ms, while minimising the risk of computational denial-of-service attacks thanks to the inherent characteristics of the BARRETT remote attestation protocol.
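The general idea of deriving an initial trust score from an attestation verdict plus weighted device and context properties can be sketched as follows. The weights, property names, and scoring formula here are assumptions for illustration only, not the paper's BARRETT-based mechanism:

```python
# Hypothetical cold-start trust scoring: a failed attestation yields zero
# trust; otherwise the score is a weight-normalised combination of device
# and context properties, each assumed to be pre-normalised to [0, 1].

def initial_trust_score(attestation_ok: bool,
                        properties: dict,
                        weights: dict) -> float:
    """Return an initial trust score in [0, 1] for a newly joined device."""
    if not attestation_ok:
        return 0.0  # failed remote attestation: never trust the device
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 0.0
    # Missing properties count as 0 (least favourable assumption)
    score = sum(w * properties.get(name, 0.0) for name, w in weights.items())
    return score / total_weight

# Illustrative property and weight names (assumptions, not from the paper)
device = {"firmware_up_to_date": 1.0, "secure_boot": 1.0, "open_ports": 0.5}
weights = {"firmware_up_to_date": 2.0, "secure_boot": 3.0, "open_ports": 1.0}
print(initial_trust_score(True, device, weights))  # prints 0.9166666666666666
```

Gating the score on the attestation verdict mirrors the motivation in the abstract: it avoids handing a high initial score to a device whose integrity cannot be verified, while still differentiating among verified devices via their context parameters.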
{"title":"Solving the cold start problem in Trust Management in IoT","authors":"Michail Bampatsikos, Ilias Politis, C. Xenakis, S. Thomopoulos","doi":"10.1145/3465481.3469208","journal":"Proceedings of the 16th International Conference on Availability, Reliability and Security","publicationDate":"2021-08-16"}