Frederik Armknecht, C. Boyd, Gareth T. Davies, Kristian Gjøsteen, Mohsen Toorani
Deduplication removes redundant copies of files or data blocks stored on the cloud. Client-side deduplication, where the client only uploads the file upon the request of the server, provides major storage and bandwidth savings, but introduces a number of security concerns. Harnik et al. (2010) showed how cross-user client-side deduplication inherently gives the adversary access to a (noisy) side-channel that may divulge whether or not a particular file is stored on the server, leading to leakage of user information. We provide formal definitions for deduplication strategies and their security in terms of adversarial advantage. Using these definitions, we provide a criterion for designing good strategies and then prove a bound characterizing the necessary trade-off between security and efficiency.
{"title":"Side Channels in Deduplication: Trade-offs between Leakage and Efficiency","authors":"Frederik Armknecht, C. Boyd, Gareth T. Davies, Kristian Gjøsteen, Mohsen Toorani","doi":"10.1145/3052973.3053019","DOIUrl":"https://doi.org/10.1145/3052973.3053019","url":null,"abstract":"Deduplication removes redundant copies of files or data blocks stored on the cloud. Client-side deduplication, where the client only uploads the file upon the request of the server, provides major storage and bandwidth savings, but introduces a number of security concerns. Harnik et al. (2010) showed how cross-user client-side deduplication inherently gives the adversary access to a (noisy) side-channel that may divulge whether or not a particular file is stored on the server, leading to leakage of user information. We provide formal definitions for deduplication strategies and their security in terms of adversarial advantage. Using these definitions, we provide a criterion for designing good strategies and then prove a bound characterizing the necessary trade-off between security and efficiency.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89629320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Memory Corruption Att. & Def.","authors":"Heng Yin","doi":"10.1145/3248550","DOIUrl":"https://doi.org/10.1145/3248550","url":null,"abstract":"","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90974846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although there are over 1,900,000 third-party Android apps in the Google Play Store, little is understood about how their security and privacy characteristics, such as dangerous permission usage and the vulnerabilities they contain, have evolved over time. Our research is two-fold: we take quarterly snapshots of the Google Play Store over a two-year period to understand how permission usage by apps has changed; and we analyse 30,000 apps to understand how their security and privacy characteristics have changed over the same two-year period. Extrapolating our findings, we estimate that over 35,000 apps in the Google Play Store ask for additional dangerous permissions every three months. Our statistically significant observations suggest that free apps and popular apps are more likely to ask for additional dangerous permissions when they are updated. Worryingly, we discover that Android apps are not getting safer as they are updated. In many cases, app updates serve to increase the number of distinct vulnerabilities contained within apps, especially for popular apps. We conclude with recommendations to stakeholders for improving the security of the Android ecosystem.
{"title":"To Update or Not to Update: Insights From a Two-Year Study of Android App Evolution","authors":"Vincent F. Taylor, I. Martinovic","doi":"10.1145/3052973.3052990","DOIUrl":"https://doi.org/10.1145/3052973.3052990","url":null,"abstract":"Although there are over 1,900,000 third-party Android apps in the Google Play Store, little is understood about how their security and privacy characteristics, such as dangerous permission usage and the vulnerabilities they contain, have evolved over time. Our research is two-fold: we take quarterly snapshots of the Google Play Store over a two-year period to understand how permission usage by apps has changed; and we analyse 30,000 apps to understand how their security and privacy characteristics have changed over the same two-year period. Extrapolating our findings, we estimate that over 35,000 apps in the Google Play Store ask for additional dangerous permissions every three months. Our statistically significant observations suggest that free apps and popular apps are more likely to ask for additional dangerous permissions when they are updated. Worryingly, we discover that Android apps are not getting safer as they are updated. In many cases, app updates serve to increase the number of distinct vulnerabilities contained within apps, especially for popular apps. We conclude with recommendations to stakeholders for improving the security of the Android ecosystem.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80333830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: SGX","authors":"Mathias Payer","doi":"10.1145/3248547","DOIUrl":"https://doi.org/10.1145/3248547","url":null,"abstract":"","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86523451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PAKE protocols, for Password-Authenticated Key Exchange, enable two parties to establish a shared cryptographically strong key over an insecure network using a short common secret as their only means of authentication. After the seminal work by Bellovin and Merritt, with the famous EKE, for Encrypted Key Exchange, various settings and security notions have been defined, and many protocols have been proposed. In this paper, we revisit the promising SPEKE, for Simple Password Exponential Key Exchange, proposed by Jablon. The only known security analysis works in the random oracle model under the CDH assumption, but only in multiplicative groups of finite fields (subgroups of Zp*), which requires large elements and thus heavy communication and computation. Our new instantiation (TBPEKE, for Two-Basis Password Exponential Key Exchange) applies to any group, and our security analysis requires a DLIN-like assumption to hold. In particular, one can use elliptic curves, which leads to better efficiency at both the communication and computation levels. We additionally consider server corruptions, which, with a symmetric PAKE, immediately leak all the passwords to the adversary. We thus study an asymmetric variant, also known as VPAKE, for Verifier-based Password-Authenticated Key Exchange. We then propose a verifier-based variant of TBPEKE, the so-called VTBPEKE, which is also quite efficient and resistant to server compromise.
{"title":"VTBPEKE: Verifier-based Two-Basis Password Exponential Key Exchange","authors":"D. Pointcheval, Guilin Wang","doi":"10.1145/3052973.3053026","DOIUrl":"https://doi.org/10.1145/3052973.3053026","url":null,"abstract":"PAKE protocols, for Password-Authenticated Key Exchange, enable two parties to establish a shared cryptographically strong key over an insecure network using a short common secret as authentication means. After the seminal work by Bellovin and Merritt, with the famous EKE, for Encrypted Key Exchange, various settings and security notions have been defined, and many protocols have been proposed. In this paper, we revisit the promising SPEKE, for Simple Password Exponential Key Exchange, proposed by Jablon. The only known security analysis works in the random oracle model under the CDH assumption, but in the multiplicative groups of finite fields only (subgroups of Zp*), which means the use of large elements and so huge communications and computations. Our new instantiation (TBPEKE, for Two-Basis Password Exponential Key Exchange) applies to any group, and our security analysis requires a DLIN-like assumption to hold. In particular, one can use elliptic curves, which leads to a better efficiency, at both the communication and computation levels. We additionally consider server corruptions, which immediately leak all the passwords to the adversary with symmetric PAKE. We thus study an asymmetric variant, also known as VPAKE, for Verifier-based Password Authenticated Key Exchange. We then propose a verifier-based variant of TBPEKE, the so-called VTBPEKE, which is also quite efficient, and resistant to server-compromise.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"50 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86731931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Countless systems ranging from consumer electronics to military equipment are dependent on integrated circuits (ICs). A surprisingly large number of such systems are already security-critical, e.g., automotive electronics, medical devices, or SCADA systems. If the underlying ICs in such applications are maliciously manipulated through hardware Trojans, the security of the entire system can be compromised. In recent years, hardware Trojans have drawn the attention of the scientific community and government. Initially, the primary attacker model was a malicious foundry that could alter the design, i.e., introduce hardware Trojans which could interfere with the functionality of a chip. Many other attacker models exist too. For instance, a legitimate IC manufacturer, such as a consumer electronics company, might collude with a national intelligence agency and could alter its products in a way that compromises their security. Even though hardware Trojans have been studied in the literature for a decade or so, little is known about how they might look and what the "use cases" for them are. We describe two applications of low-level hardware manipulation. One introduces an ASIC Trojan through sub-transistor changes, and the other is a novel type of fault-injection attack against FPGAs. As an example of an extremely stealthy manipulation, we show how a dangerous Trojan can be introduced by merely changing the dopant polarity of selected existing transistors of a design. The Trojan manipulates the digital post-processing of Intel's cryptographically secure random number generator used in the Ivy Bridge processors. The adversary is capable of exactly controlling the entropy of the RNG. For example, the attacker can reduce the RNG's entropy to 40 bits of randomness. Due to the AES-based one-way function after the entropy extraction, the Trojan is very difficult to detect. Crucially, this approach does not require adding any new circuits to the IC. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to many detection techniques, including fine-grain optical inspection and checking against "golden chips". As a second "use case", we show how an adversary can extract cryptographic keys from an unknown FPGA design. The attack, coined bitstream fault injection (BiFI), systematically manipulates the bitstream by changing random LUT contents, configures the target device, and collects the resulting faulty ciphertexts. The ciphertexts are used to recover the key by testing a set of hypotheses, e.g., that the ciphertext is the plaintext XORed with the key. The attack only needs a black-box assumption about the bitstream structure and format. It was verified against a set of third-party AES designs on different standard FPGAs. In 15 out of 16 designs, we were able to extract the AES key.
{"title":"Hardware Trojans and Other Threats against Embedded Systems","authors":"C. Paar","doi":"10.1145/3052973.3053885","DOIUrl":"https://doi.org/10.1145/3052973.3053885","url":null,"abstract":"Countless systems ranging from consumer electronics to military equipment are dependent on integrated circuits (ICs). A surprisingly large number of such systems are already security- critical, e.g., automotive electronics, medical devices, or SCADA systems. If the underlying ICs in such applications are maliciously manipulated through hardware Trojans, the security of the entire system can be compromised. In recent years, hardware Trojans have drawn the attention of the scientific community and government. Initially, the primary attacker model was a malicious foundry that could alter the design, i.e., introduce hardware Trojans which could interfere with the functionality of a chip. Many other attacker models exist too. For instance, a legitimate IC manufacturer, such as a consumer electronics company, might be in cohort with a national intelligence agency and could alter its products in a way that compromises their security. Even though hardware Trojans have been studied for a decade or so in the literature, little is known about how they might look, and what the \"use cases\" for them is. We describe two applications for low-level hardware manipulations. One introduces an ASIC Trojans by sub-transistor changes, and the other is a novel type of fault-injection attacks against FPGAs. As an example for an extremely stealthy manipulations, we show how a dangerous Trojans can be introduced by merely changing the dopant polarity of selected existing transistors of a design. The Trojan manipulates the digital post-processing of Intel's cryptographically secure random number generator used in the Ivy Bridge processors. The adversary is capable of exactly controlling the entropy of the RNG. For example, the attacker can reduce the RNG's entropy to 40 bits of randomness. Due to the AES-based one-way function after the entropy extracting, the Trojan is very difficult to detect. Crucially, this approach does not require to add new circuits to the IC. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to many detection techniques, including fine-grain optical inspection and checking against \"golden chips\". As a second \"use case\", we show how an adversary can extract cryptographic keys from an unknown FPGA design. The attack, coined bitstream fault injection (BiFI), systematically manipulates the bitstream by changing random LUT contents, configures the target device, and collects the resulting faulty ciphertexts. The ciphertexts are used to recover the key by testing a set of hypotheses, e.g., that the ciphertext is the plaintext XORed with the key. The attack only needs a black-box assumption about the bitstream structure and format. It was verified by considering a set of 3 rd party AES designs on different standard FPGAs. 
In 15 out of 16 designs, we were able to extract the AES key.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"19 1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83040351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
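The BiFI key-recovery step can be pictured as plain hypothesis testing: from each faulty ciphertext, derive a candidate key under a hypothesis such as "the faulted output is the plaintext XORed with the key", then check the candidate against a known plaintext/ciphertext pair. Below is a minimal sketch of that single hypothesis only; it assumes the pycryptodome package for the AES reference check, and the bitstream-manipulation side of the real attack is entirely out of scope.

```python
# pip install pycryptodome
from Crypto.Cipher import AES

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def test_xor_hypothesis(plaintext: bytes, faulty_ct: bytes,
                        known_pt: bytes, known_ct: bytes):
    """Hypothesis: the faulted design output plaintext XOR key.
    Derive the candidate key and verify it against a known correct pair."""
    candidate_key = xor_bytes(plaintext, faulty_ct)
    if AES.new(candidate_key, AES.MODE_ECB).encrypt(known_pt) == known_ct:
        return candidate_key
    return None

if __name__ == "__main__":
    key = bytes(range(16))
    known_pt = b"\x00" * 16
    known_ct = AES.new(key, AES.MODE_ECB).encrypt(known_pt)
    # Simulate a faulted run that leaked pt XOR key instead of the ciphertext.
    pt = b"ASIACCS2017!!!!!"                     # 16-byte block
    faulty_ct = xor_bytes(pt, key)
    print(test_xor_hypothesis(pt, faulty_ct, known_pt, known_ct).hex())
```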
Gildas Avoine, Xavier Bultel, S. Gambs, David Gérault, P. Lafourcade, Cristina Onete, J. Robert
Distance-bounding protocols have been introduced to thwart relay attacks against contactless authentication protocols. In this context, verifiers have to authenticate the credentials of untrusted provers. Unfortunately, these protocols are themselves subject to complex threats such as terrorist-fraud attacks, in which a malicious prover helps an accomplice to authenticate. Provably guaranteeing the resistance of distance-bounding protocols to these attacks is complex. The classical solutions assume that rational provers want to protect their long-term authentication credentials, even with respect to their accomplices. Thus, terrorist-fraud resistant protocols generally rely on artificial extraction mechanisms, ensuring that an accomplice who is able to authenticate can retrieve the credential of his partnering prover. We propose a novel approach to obtain provably terrorist-fraud resistant protocols that does not rely on an accomplice being able to extract any long-term key. Instead, we simply assume that he can replay the information received from the prover. Thus, rational provers should refuse to cooperate with third parties that could afterwards impersonate them freely. We introduce a generic construction for provably secure distance-bounding protocols, and give three instances of this construction: (1) an efficient symmetric-key protocol, (2) a public-key protocol protecting the identities of provers against external eavesdroppers, and finally (3) a fully anonymous protocol protecting the identities of provers even against malicious verifiers that try to profile them.
{"title":"A Terrorist-fraud Resistant and Extractor-free Anonymous Distance-bounding Protocol","authors":"Gildas Avoine, Xavier Bultel, S. Gambs, David Gérault, P. Lafourcade, Cristina Onete, J. Robert","doi":"10.1145/3052973.3053000","DOIUrl":"https://doi.org/10.1145/3052973.3053000","url":null,"abstract":"Distance-bounding protocols have been introduced to thwart relay attacks against contactless authentication protocols. In this context, verifiers have to authenticate the credentials of untrusted provers. Unfortunately, these protocols are themselves subject to complex threats such as terrorist-fraud attacks, in which a malicious prover helps an accomplice to authenticate. Provably guaranteeing the resistance of distance-bounding protocols to these attacks is complex. The classical solutions assume that rational provers want to protect their long-term authentication credentials, even with respect to their accomplices. Thus, terrorist-fraud resistant protocols generally rely on artificial extraction mechanisms, ensuring that an accomplice can retrieve the credential of his partnering prover, if he is able to authenticate. We propose a novel approach to obtain provable terrorist-fraud resistant protocols that does not rely on an accomplice being able to extract any long-term key. Instead, we simply assume that he can replay the information received from the prover. Thus, rational provers should refuse to cooperate with third parties if they can impersonate them freely afterwards. We introduce a generic construction for provably secure distance-bounding protocols, and give three instances of this construction: (1) an efficient symmetric-key protocol, (2) a public-key protocol protecting the identities of provers against external eavesdroppers, and finally (3) a fully anonymous protocol protecting the identities of provers even against malicious verifiers that try to profile them.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87708881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Genomic privacy has attracted much attention from the research community, mainly because its risks are unique and breaches can lead to terrifying leakage of one's most personal and sensitive information. The much less explored topic of genomic security concerns mitigating threats of the digitized genome being altered by its owner or an outside party, which can have dire consequences, especially in medical or legal settings. At the same time, many anticipated genomic applications (with varying degrees of trust) require only small amounts of genomic data. Supporting such applications requires a careful balance between security and privacy. Furthermore, the genome's size raises performance concerns. We argue that genomic security must be taken seriously and explored as a research topic in its own right. To this end, we discuss the problem space, identify the stakeholders, discuss assumptions about them, and outline several simple approaches based on common cryptographic techniques, including signature variants and authenticated data structures. We also present some extensions and identify opportunities for future research. The main goal of this paper is to highlight the importance of genomic security as a research topic in its own right.
{"title":"Security in Personal Genomics: Lest We Forget","authors":"G. Tsudik","doi":"10.1145/3052973.3056128","DOIUrl":"https://doi.org/10.1145/3052973.3056128","url":null,"abstract":"Genomic privacy has attracted much attention from the research community, mainly since its risks are unique and breaches can lead to terrifying leakage of most personal and sensitive information. The much less explored topic of genomic security needs to mitigate threats of the digitized genome being altered by its owner or an outside party, which can have dire consequences, especially, in medical or legal settings. At the same time, many anticipated genomic applications (with varying degrees of trust) require only small amounts of genomic data. Supporting such applications requires a careful balance between security and privacy. Furthermore, genome's size raises performance concerns. We argue that genomic security must be taken seriously and explored as a research topic in its own right. To this end, we discuss the problem space, identify the stakeholders, discuss assumptions about them, and outline several simple approaches based on common cryptographic techniques, including signature variants and authenticated data structures. We also present some extensions and identify opportunities for future research. The main goal of this paper is to highlight the importance of genomic security as a research topic in its own right.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77376720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Omid Mirzaei, Guillermo Suarez-Tangil, J. Tapiador, J. M. D. Fuentes
Information flows in Android can be effectively used to give an informative summary of an application's behavior, showing how and for what purpose apps use specific pieces of information. This has been shown to be extremely useful for characterizing risky behaviors and, ultimately, for identifying unwanted or malicious applications in Android. However, identifying information flows in an application is computationally very expensive and, with more than one million apps in the Google Play market, it is critical to prioritize applications that are likely to pose a risk. In this work, we develop a triage mechanism to rank applications according to their potential risk. Our approach, called TriFlow, relies on static features that are quick to obtain. TriFlow combines a probabilistic model that predicts the existence of information flows with a metric of how significant a flow is in benign and malicious apps. Based on this, TriFlow provides a score for each application that can be used to prioritize analysis. TriFlow also provides an explanatory report of the associated risk. We evaluate our tool with a representative dataset of benign and malicious Android apps. Our results show that it can predict the presence of information flows very accurately and that the overall triage mechanism enables significant resource savings.
{"title":"TriFlow: Triaging Android Applications using Speculative Information Flows","authors":"Omid Mirzaei, Guillermo Suarez-Tangil, J. Tapiador, J. M. D. Fuentes","doi":"10.1145/3052973.3053001","DOIUrl":"https://doi.org/10.1145/3052973.3053001","url":null,"abstract":"Information flows in Android can be effectively used to give an informative summary of an application's behavior, showing how and for what purpose apps use specific pieces of information. This has been shown to be extremely useful to characterize risky behaviors and, ultimately, to identify unwanted or malicious applications in Android. However, identifying information flows in an application is computationally highly expensive and, with more than one million apps in the Google Play market, it is critical to prioritize applications that are likely to pose a risk. In this work, we develop a triage mechanism to rank applications considering their potential risk. Our approach, called TriFlow, relies on static features that are quick to obtain. TriFlow combines a probabilistic model to predict the existence of information flows with a metric of how significant a flow is in benign and malicious apps. Based on this, TriFlow provides a score for each application that can be used to prioritize analysis. TriFlow also provides an explanatory report of the associated risk. We evaluate our tool with a representative dataset of benign and malicious Android apps. Our results show that it can predict the presence of information flows very accurately and that the overall triage mechanism enables significant resource saving.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"260 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82960079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Beunardeau, Aisling Connolly, R. Géraud, D. Naccache
In several popular standards (e.g., ISO 7816, ISO 14443, or ISO 11898) and IoT applications, a node (transponder, terminal) sends commands and data to another node (transponder, card) to accomplish an applicative task (e.g., a payment or a measurement). Most standards encrypt and authenticate the data. However, as an application of Kerckhoffs' principle, system designers usually consider that commands are part of the system specification and must hence be transmitted in the clear, while the data that these commands process is encrypted and signed. While this assumption holds in systems representable by relatively simple state machines, leaking command information is undesirable when the addressed node offers the caller a large "toolbox" of commands that the addressing node can activate in many different orders to accomplish different applicative goals. This work proposes protections that encrypt and protect not only the data but also the commands associated with it. The practical implementation of this idea raises a number of difficulties. The first is that of defining a clear adversarial model, a question that we will not address in this paper. The difficulty comes from the application-specific nature of the harm that may stem from leaking the command sequence, as well as from modeling the observations that the attacker has of the target node's behavior (is a transaction accepted? is a door opened? is a packet routed? etc.). This paper proposes a collection of empirical protection techniques allowing the sender to hide the sequence of commands sent, and we discuss the advantages and shortcomings of each proposed method. Besides the evident use of nonces (or other internal system state) to make the encryptions of identical commands differ over time, we also discuss the introduction of random delays between commands (to avoid inferring the next command from the time elapsed since the previous one), the splitting of a command followed by n data bytes into a collection of encrypted sub-commands conveying the n bytes in chunks of random sizes, and the appending of a random number of useless bytes to each packet. Independent commands can be permuted in time or sent ahead of time and buffered. Another practically useful countermeasure consists in masking the number of commands by adding useless "null" command packets. In its best implementation, the flow of commands is sent in packets in which, at times, the sending node addresses several data and command chunks belonging to different successive commands in the sequence.
{"title":"The Case for System Command Encryption","authors":"M. Beunardeau, Aisling Connolly, R. Géraud, D. Naccache","doi":"10.1145/3052973.3056129","DOIUrl":"https://doi.org/10.1145/3052973.3056129","url":null,"abstract":"In several popular standards (e.g. ISO 7816, ISO 14443 or ISO 11898) and IoT applications, a node (transponder, terminal) sends commands and data to another node (transponder, card) to accomplish an applicative task (e.g. a payment or a measurement). Most standards encrypt and authenticate the data. However, as an application of Kerckhoffs' principle, system designers usually consider that commands are part of the system specifications and must hence be transmitted in clear while the data that these commands process is encrypted and signed. While this assumption holds in systems representable by relatively simple state machines, leaking command information is undesirable when the addressed nodes offer the caller a large \"toolbox\" of commands that the addressing node can activate in many different orders to accomplish different applicative goals. This work proposes protections allowing encrypting and protecting not only the data but also the commands associated to them. The practical implementation of this idea raises a number of difficulties. The first is that of defining a clear adversarial model, a question that we will not address in this paper. The difficulty comes from the application-specific nature of the harm that may possibly stem from leaking the command sequence as well as from the modeling of the observations that the attacker has on the target node's behavior (is a transaction accepted? is a door opened? is a packet routed etc). This paper proposes a collection of empirical protection techniques allowing the sender to hide the sequence of commands sent. We discuss the advantages and the shortcomings of each proposed method. Besides the evident use of nonces (or other internal system states) to render the encryption of identical commands different in time, we also discuss the introduction of random delays between commands (to avoid inferring the next command based on the time elapsed since the previous command), the splitting of a command followed by n data bytes into a collection of encrypted sub-commands conveying the n bytes in chunks of random sizes and the appending of a random number of useless bytes to each packet. Independent commands can be permuted in time or sent ahead of time and buffered. Another practically useful countermeasure consists in masking the number of commands by adding useless \"null\" command packets. In its best implementation, the flow of commands is sent in packets in which, at times, the sending node addresses several data and command chunks belonging to different successive commands in the sequence.","PeriodicalId":20540,"journal":{"name":"Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security","volume":"88 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90592965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}