2019 12th CMI Conference on Cybersecurity and Privacy (CMI)

"Privacy in the toolbox of freedom"
Mandy Balthasar, Armin Gerl
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962146
A life shaped by digital processes, in a world that is always incomplete due to its complexity, can be mastered only by balancing: an ethical balancing act within the framework of Popper's trilemma of the open society. This is accompanied by another tightrope walk between privacy that is made visible and privacy that is technically implemented. In a social, digitally transformed data culture, privacy is always subjective from the user's point of view, which is why the degree of protection must be individually adaptable. As a key element between users, companies (technologies), and legal frameworks, privacy languages should give users the freedom to control and manage transparency over their data, from consent to processing, within a defined framework, if they so wish, but also to voice criticism in order to change previously defined conditions and thus respond flexibly to technical and moral change. Privacy is to be understood as a mediator of reciprocal sympathy and tolerance between data provider and data recipient, which can be implemented by means of a Privacy Language.
"A Survey on Privacy Policy Languages: Expressiveness Concerning Data Protection Regulations"
Jens Leicht, M. Heisel
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962144
Privacy policies are a widely used way of expressing how service providers handle data. However, the legalese used in these documents hinders many users in understanding the important information about what happens with their data. A privacy policy language with a corresponding easy-to-understand visualization can help users understand these policies. In this survey we compare 18 policy languages that can be used in the context of privacy policies. The focus of this survey lies on compatibility with legislation such as the General Data Protection Regulation of the European Union and on the formalization of such languages.
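The survey above concerns machine-readable policy languages. As an illustration of the basic idea only (this is a toy, not any of the 18 surveyed languages), a formalized policy might map data categories to permitted purposes and retention limits, so that compliance questions become mechanical checks:

```python
# Toy machine-readable privacy policy: category -> permitted purposes + retention.
policy = {
    "email":    {"purposes": {"account", "newsletter"}, "retention_days": 365},
    "location": {"purposes": {"navigation"},            "retention_days": 30},
}

def is_allowed(policy, category, purpose):
    """Check whether a processing purpose is permitted for a data category."""
    entry = policy.get(category)
    return entry is not None and purpose in entry["purposes"]

print(is_allowed(policy, "location", "advertising"))  # False
```

A real policy language would additionally express legal bases, recipients, and user rights; the point here is only that a formal structure makes such checks automatable.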
"Privacy-Preserving Collaborative Data Anonymization with Sensitive Quasi-Identifiers"
Kok-Seng Wong, Nguyen Anh Tu, Dinh-Mao Bui, S. Ooi, M. Kim
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962140
Collaborative anonymization deals with a group of respondents in a distributed environment. Unlike in centralized settings, no respondent is willing to reveal his or her records to any party, due to privacy concerns. This creates a challenge for anonymization, and it requires a level of trust among respondents. In this paper, we study a collaborative anonymization protocol that aims to increase the confidence of respondents during data collection. Unlike existing works, our protocol does not reveal the complete set of quasi-identifiers (QIDs) to the data collector (e.g., an agency) before or after the data anonymization process. Because QIDs can be both sensitive and identifying values, we allow the respondents to hide sensitive QID attributes from other parties. Our protocol ensures that the desired protection level (i.e., k-anonymity) can be verified before the respondents submit their records to the agency. Furthermore, we allow honest respondents to indict a malicious agency if it modifies the intermediate results or does not follow the protocol faithfully.
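The k-anonymity property the protocol verifies can be stated in a few lines: a table is k-anonymous when every combination of quasi-identifier values occurs at least k times. The centralized check below is a minimal sketch of the property itself, not the paper's distributed protocol (which is precisely designed to verify this without revealing the QIDs):

```python
from collections import Counter

def is_k_anonymous(records, qid_indexes, k):
    """True iff every quasi-identifier value combination occurs >= k times."""
    combos = Counter(tuple(r[i] for i in qid_indexes) for r in records)
    return all(count >= k for count in combos.values())

# Toy records: (birth year, ZIP code, diagnosis); QIDs are the first two fields.
records = [
    ("1985", "12345", "flu"),
    ("1985", "12345", "cold"),
    ("1990", "67890", "flu"),
]
print(is_k_anonymous(records, (0, 1), 2))  # False: ("1990", "67890") occurs once
```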
"A Social Economic Analysis of the Impact of GDPR on Security and Privacy Practices"
Roslyn Layton, S. Elaluf-Calderwood
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962288
The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have been presented by many policymakers as fundamental, welfare-enhancing policies. While individuals value privacy, these policies require significant up-front and ongoing investment by firms. For example, an analysis commissioned by the California Department of Justice's Office of the Attorney General estimates a 14:1 cost-to-benefit ratio. No such analysis could be found from EU authorities for the GDPR. Sweeping regulatory regimes can create unintended consequences. This paper offers a brief introduction to the new cybersecurity challenges created by the GDPR and CCPA within firms and in the larger Internet ecosystem. As a result of the regulation, firms face many challenges in complying with costly and complex rules, broad definitions of personally identifiable information (PII), and an increased risk of fines and/or lawsuits for violations, vulnerabilities, and lack of compliance. Since the promulgation of the GDPR, important security side effects have been reported, including the blocking of public information in the WHOIS internet protocol database, identity theft through abuse of the Right of Access provision (Article 15) and other provisions, and the proliferation of network equipment with security and privacy vulnerabilities. The paper also offers a brief overview of the Gordon-Loeb (GL) model used for calculating the optimal investment in cybersecurity [1]. A preliminary data set is offered to examine the difficulty of estimating the cost of cybersecurity investment in light of the GDPR. Notably, the value of the European Union's data economy was estimated to be €300 billion in 2016 [2]. The GL model would suggest that the optimal investment to protect this data would be €13.2 billion. The actual European cyber spend was some €15 billion in 2015 [3], a slightly higher number which covers the EU plus additional European countries, suggesting that the GL model has some applicability. There are limited GL-type models and tools to guide data protection or privacy investments, and given the emergence of new data protection expectations, it is worth investigating how and whether firms can deliver both sets of expenditures and to what degree. The low level of GDPR compliance suggests that a workable equation of data protection is still not clear for most firms.
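The Gordon-Loeb figure quoted above follows from the model's well-known rule of thumb: the optimal security investment never exceeds 1/e (about 36.8%) of the expected loss from a breach. The sketch below uses an illustrative expected-loss figure chosen so the bound lands near the paper's €13.2 billion; it is not a reconstruction of the authors' actual parameters:

```python
import math

def gl_upper_bound(expected_loss):
    """Gordon-Loeb rule of thumb: the optimal security investment never
    exceeds 1/e (~36.8%) of the expected loss v * L."""
    return expected_loss / math.e

# Illustrative only: suppose breaches threaten an expected loss of EUR 36 billion.
print(round(gl_upper_bound(36e9) / 1e9, 1))  # upper bound in EUR billions: 13.2
```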
"Privacy Analysis of Format-Preserving Data-Masking Techniques"
Zaruhi Aslanyan, M. Boesgaard
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962143
With the growing number of regulations and concerns regarding data privacy, there is an increasing need to protect Personally Identifiable Information (PII). A widely used approach to protecting PII is to apply data-masking techniques that remove or hide the identities of the individuals referred to in the data under investigation. A particular class of data-masking techniques aims at preserving the format of the source data, so as to allow using encoded data wherever the corresponding source is expected, thereby minimising the application changes needed to perform tasks such as statistical analysis or testing. Various encoding techniques are used to protect data privacy while preserving the format, including Format-Preserving Encryption (FPE) and masking out. Even though convenient, preserving the format of data might lead to re-identification attacks. In this paper, we discuss the vulnerabilities of data-masking techniques that preserve the format of data and analyse their security and privacy properties. We investigate two industrial datasets and quantify the potential data privacy leakage that could arise from using inappropriate data-masking techniques.
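A naive masking-out transformation of the kind analysed above is easy to sketch: it preserves length and separators, which keeps downstream applications working but also preserves structure an attacker can exploit for re-identification. This toy example (masking out, not FPE, and not the paper's industrial setup) hides all but the trailing digits:

```python
def mask_preserving_format(value, keep_last=4):
    """Replace all but the last `keep_last` digits with 'X', keeping
    separators and overall length intact (naive masking-out, not FPE)."""
    digit_positions = [i for i, ch in enumerate(value) if ch.isdigit()]
    to_mask = set(digit_positions[:-keep_last] if keep_last else digit_positions)
    return "".join("X" if i in to_mask else ch for i, ch in enumerate(value))

print(mask_preserving_format("4111-1111-1111-1234"))  # XXXX-XXXX-XXXX-1234
```

Note how the output still reveals the grouping pattern and the unmasked suffix, which is exactly the kind of residual structure a privacy analysis must account for.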
"Prioritization methodology of computing assets for connected vehicles in security assessment purpose"
A. Barinov, N. Davydkin, Daria V. Sharova, Sergey V. Skurlaev
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962145
This paper addresses the preparation of security assessments for connected vehicles, in particular the prioritization of attack points on structural elements. The described approach is based on a quality assessment of the developed components and their accordance with the attacker model. An important feature of the methodology is the evaluation of the criticality of each informational flow interacting with a component. The conclusion outlines the advantages of the developed approach and describes its disadvantages when the approach is applied to elements built on the basis of the AUTOSAR architecture.
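In spirit, criticality-weighted prioritization of this kind can be sketched as a simple scoring exercise; the component names, weights, and formula below are illustrative assumptions, not the paper's actual methodology:

```python
def asset_priority(flow_criticalities, exposure):
    """Toy prioritization: weight a component by the criticality of the
    information flows it touches, scaled by its exposure to the attacker."""
    return exposure * sum(flow_criticalities)

# Hypothetical vehicle components with made-up flow criticalities (1-5).
components = {
    "telematics_unit": asset_priority([5, 4, 4], exposure=1.0),  # externally reachable
    "brake_ecu":       asset_priority([5, 5],    exposure=0.3),  # internal bus only
}
print(max(components, key=components.get))  # telematics_unit
```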
"Electronic Voting Recording System Based on Blockchain Technology"
Samuel Agbesi, George Asante
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962142
Blockchain has been described as one of the technologies of the future, and researchers have argued that it will disrupt many industries in the coming years; democratic elections are one of the key areas blockchain is set to transform. Several organizations have begun experimenting with blockchain-enabled e-voting platforms, such as Democratic Earth, Horizon State and Follow My Vote. This study seeks to conceptualize a blockchain architecture for the storage of election results that provides trust, transparency, and immutability using distributed ledger technology (DLT). One of the main issues with elections in Ghana and other sub-Saharan African countries is the inaccurate recording of votes at polling stations, constituencies and the national office. There are instances where votes recorded at the polling station change at the constituency level, either intentionally or accidentally. The study discusses the basic properties of the blockchain, such as the distributed ledger, consensus mechanisms and cryptographic hash functions, and how they can be used to address the current challenges in vote recording during elections. The study evaluates current blockchain-enabled e-voting systems and designs a blockchain-based vote recording system that provides immutability, trust, and transparency. The proposed design addresses the issue of vote tampering: transactions added to a block are secured with a cryptographic hash function, which makes tampering with the votes stored in the blockchain nearly impossible and makes them immutable.
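The tamper-evidence argument rests on hash chaining: each block commits to its predecessor's hash, so altering an earlier block breaks every later link. A minimal sketch of that mechanism (illustrative only, not the proposed system, which would also involve consensus among distributed nodes):

```python
import hashlib
import json

def make_block(prev_hash, votes):
    """Link a block of polling-station results to its predecessor by hash;
    altering any earlier block changes every later hash."""
    payload = json.dumps({"prev": prev_hash, "votes": votes}, sort_keys=True)
    return {"prev": prev_hash, "votes": votes,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

genesis = make_block("0" * 64, {"station_001": {"A": 120, "B": 98}})
block2 = make_block(genesis["hash"], {"station_002": {"A": 87, "B": 143}})

# Tampering with the genesis votes no longer matches the hash block2 recorded:
genesis["votes"]["station_001"]["A"] = 999
recomputed = make_block("0" * 64, genesis["votes"])["hash"]
print(recomputed == block2["prev"])  # False
```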
"Demystifying the Membership Inference Attack"
Paul Irolla, G. Châtel
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962136
The Membership Inference Attack (MIA) is the process of determining whether a sample comes from the training dataset of a machine learning model (in) or not (out). This attack uses a trained machine learning model to expose confidential information about its training data. It is particularly alarming in cases where data is tightly linked to individuals, as in the medical, financial and marketing domains. The underlying factors of the success of MIA are not well understood. The current theory explains its success by the difference in confidence levels between in samples and out samples. In this article, we show that the confidence levels play little to no role in MIA success in most cases. We propose a more general theory that explains previous results as well as some unexpected observations that have been made in the state of the art. To back up our theory, we run MIA experiments on MNIST, CIFAR-10 and Fashion-MNIST.
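The confidence-level theory that the paper challenges can be stated concretely: predict "member" whenever the model's top confidence on a sample exceeds a threshold. A toy sketch of that baseline attack, with made-up scores (not the paper's experiments):

```python
def confidence_mia(confidences, threshold=0.9):
    """Baseline membership inference: predict 'in the training set' when the
    model's top confidence exceeds a threshold (the view the paper challenges)."""
    return [conf >= threshold for conf in confidences]

# Toy scores: members often, but not always, get higher confidence.
member_scores = [0.99, 0.95, 0.97]
nonmember_scores = [0.80, 0.92, 0.70]
print(confidence_mia(member_scores + nonmember_scores))
```

The misclassified non-member (0.92) hints at why a theory built only on confidence gaps can fall short, which is the gap the paper's more general theory aims to fill.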
"Brightness: Leaking Sensitive Data from Air-Gapped Workstations via Screen Brightness"
Mordechai Guri, Dima Bykhovsky, Y. Elovici
Pub Date: 2019-11-01 | DOI: 10.1109/CMI48017.2019.8962137
Air-gapped computers are systems that are kept isolated from the Internet because they store or process sensitive information. In this paper, we introduce an optical covert channel in which an attacker can leak (or exfiltrate) sensitive information from air-gapped computers through manipulation of the screen brightness. This covert channel is invisible, and it works even while the user is working on the computer. Malware on a compromised computer can obtain sensitive data (e.g., files, images, encryption keys and passwords) and modulate it within the screen brightness, invisibly to users. The small changes in brightness are invisible to humans but can be recovered from video streams taken by cameras such as a local security camera, a smartphone camera or a webcam. We present related work and discuss the technical and scientific background of this covert channel. We examined the channel's boundaries under various parameters, with different types of computer and TV screens, and at several distances. We also tested different types of camera receivers to demonstrate the covert channel. Lastly, we present relevant countermeasures to this type of attack.
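The modulation idea can be illustrated with a toy encoder/decoder: each bit becomes a small brightness offset around a base level, too small for the eye to notice but recoverable by thresholding camera samples. The parameter values are illustrative, not the paper's measured settings:

```python
def modulate(bits, base=0.50, delta=0.01):
    """Encode each bit as an imperceptible brightness offset around a base level."""
    return [base + delta if b else base - delta for b in bits]

def demodulate(levels, base=0.50):
    """Recover bits by thresholding sampled brightness against the base level."""
    return [1 if level > base else 0 for level in levels]

secret = [1, 0, 1, 1, 0]
print(demodulate(modulate(secret)) == secret)  # True
```

A real receiver would additionally need framing, synchronization with the screen's refresh, and error correction to cope with camera noise.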