Fabio Pagani, Matteo De Astis, Mariano Graziano, A. Lanzi, D. Balzarotti
Spam has been studied extensively in past years from different perspectives but, unfortunately, it remains an open problem and a lucrative, active business for criminals and bot herders. While several countermeasures have been proposed and deployed in the past decade, their impact and effectiveness are not always clear. In particular, on top of the most common content- and sender-based anti-spam techniques, two minor approaches are popular among system administrators to cope with this annoying problem: greylisting and nolisting. These techniques exploit known features of the Simple Mail Transfer Protocol (SMTP) that spambots often fail to respect. This assumption makes the two countermeasures very simple to adopt and, at least in theory, quite effective. In this paper we present the first comprehensive study of nolisting and greylisting, analyzing these spam countermeasures from different perspectives. First, we measure their worldwide deployment and provide insights from their distribution. Second, we measure their effectiveness against a real dataset of malware samples responsible for generating over 70% of global spam traffic. Finally, we measure the impact of these two defensive mechanisms on the delivery of normal emails. Our study provides a unique and valuable perspective on two of the most innovative and atypical anti-spam systems. Our findings may guide system administrators and security experts in better assessing their anti-spam infrastructure, and shed some light on myths about greylisting and nolisting.
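As the abstract notes, greylisting exploits SMTP retry semantics: a first delivery attempt from an unseen source is temporarily rejected, and only a standards-compliant retry after a delay is accepted, which many spambots never perform. A minimal sketch of that decision logic, with hypothetical names and delay value (not taken from the paper):

```python
import time

# Greylisting sketch: temporarily reject (SMTP 451) the first attempt from an
# unseen (client IP, sender, recipient) triplet; accept a compliant retry that
# arrives after GREYLIST_DELAY. All names and values here are illustrative.
GREYLIST_DELAY = 300  # seconds a sender must wait before retrying (assumed)

seen = {}  # triplet -> timestamp of first attempt

def greylist_check(client_ip, sender, recipient, now=None):
    """Return an SMTP reply code: 451 (temporary failure) or 250 (accept)."""
    now = time.time() if now is None else now
    triplet = (client_ip, sender, recipient)
    first_seen = seen.get(triplet)
    if first_seen is None:
        seen[triplet] = now
        return 451  # first contact: "please retry later"
    if now - first_seen < GREYLIST_DELAY:
        return 451  # retried too soon; keep rejecting
    return 250      # compliant retry after the delay: accept
```

A legitimate MTA queues the message and retries automatically, so delivery is only delayed; a fire-and-forget spambot never comes back and the message is never accepted.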
"Measuring the Role of Greylisting and Nolisting in Fighting Spam." In 2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). DOI: 10.1109/DSN.2016.57.
Bo Fang, Qining Lu, K. Pattabiraman, M. Ripeanu, S. Gurumurthi
The Program Vulnerability Factor (PVF) has been proposed as a metric for understanding the impact of hardware faults on software. The PVF is calculated by identifying the program bits required for architecturally correct execution (ACE bits). PVF, however, is conservative: it treats all erroneous executions as a major concern, not just those that result in silent data corruptions (SDCs), and it does not account for errors that are detected at runtime, i.e., those that lead to program crashes. A more discriminating metric can inform the choice of appropriate resilience techniques with acceptable performance and energy overheads. This paper proposes ePVF, an enhancement of the original PVF methodology that filters out the crash-causing bits from the ACE bits identified by traditional PVF analysis. The ePVF methodology consists of an error propagation model that reasons about error propagation in the program, and a crash model that encapsulates the platform-specific characteristics of hardware exception handling. ePVF reduces the vulnerable bits estimated by the original PVF analysis by 45% to 67% depending on the benchmark, and has high accuracy (89% recall, 92% precision) in identifying crash-causing bits. We demonstrate the utility of ePVF by using it to inform selective protection of the most SDC-prone instructions in a program.
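The relationship between PVF and ePVF described in the abstract can be shown as a small worked example; the bit counts and helper names below are purely illustrative, not data from the paper:

```python
# Illustrative sketch (not the paper's code): PVF is the fraction of program
# bits that are ACE (required for architecturally correct execution); ePVF
# additionally excludes the crash-causing bits, since faults there are
# detected via a crash rather than producing silent data corruption.
def pvf(ace_bits, total_bits):
    return len(ace_bits) / total_bits

def epvf(ace_bits, crash_bits, total_bits):
    # crash-causing bits are a subset of the ACE bits
    return len(ace_bits - crash_bits) / total_bits

ace = set(range(600))          # hypothetical: 600 of 1000 bits are ACE
crash = set(range(300))        # 300 of those lead to crashes, not SDCs
print(pvf(ace, 1000))          # 0.6
print(epvf(ace, crash, 1000))  # 0.3, a 50% reduction, within the 45-67% range reported
```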
"ePVF: An Enhanced Program Vulnerability Factor Methodology for Cross-Layer Resilience Analysis." In DSN 2016. DOI: 10.1109/DSN.2016.24.
Tuan Le, Gabriel Salles-Loustau, L. Najafizadeh, M. Javanmard, S. Zonouz
Trustworthy and usable healthcare requires not only effective disease diagnostic procedures that deliver rapid and accurate outcomes, but also lightweight privacy-preserving capabilities for resource-limited medical sensing devices. In this paper, we present MedSen, a portable, inexpensive and secure smartphone-based biomarker detection sensor that provides users with easy-to-use real-time disease diagnostic capabilities without the need for in-person clinical visits. To minimize deployment cost and size without sacrificing diagnostic accuracy, security or time requirements, MedSen operates as a dongle attached to the user's smartphone and leverages the smartphone's computational capabilities for its real-time data processing. From the security viewpoint, MedSen introduces a new hardware-level trusted sensing framework, built into the sensor, that encrypts the measured analog signals related to cell counting in the patient's blood sample at the data acquisition point. To protect user privacy, MedSen's in-sensor encryption scheme conceals the user's private information before sending it out for cloud-based medical diagnostic analysis. The analysis outcomes are sent back to MedSen for decryption and user notification. Additionally, MedSen introduces cyto-coded passwords to authenticate the user to the cloud server without the need for explicit on-screen password entry. Each user's password consists of a predetermined number of synthetic beads with different dielectric characteristics. MedSen mixes the password beads with the user's blood before submitting the data for diagnostic analysis. The cloud server authenticates the user based on the statistics and characteristics of the beads in the blood sample, and links the user's identity to the encrypted analysis outcomes. We have implemented a real-world working prototype of MedSen through bio-sensor fabrication and a smartphone app (Android) implementation. Our results show that MedSen can reliably classify different users based on their cyto-coded passwords with high accuracy. MedSen's built-in analog signal encryption guarantees user privacy even when the smartphone and cloud server are untrusted (honest but curious). MedSen's end-to-end time for disease diagnostics is approximately 0.2 seconds on average.
"Secure Point-of-Care Medical Diagnostics via Trusted Sensing and Cyto-Coded Passwords." In DSN 2016. DOI: 10.1109/DSN.2016.59.
H. Bagheri, Alireza Sadeghi, Reyhaneh Jabbarvand Behrouz, S. Malek
As the dominant mobile computing platform, Android has become a prime target for cyber-security attacks. Many of these attacks manifest at the application level, through the exploitation of vulnerabilities in apps downloaded from popular app stores. Increasingly, sophisticated attacks exploit vulnerabilities across multiple installed apps, making such attacks extremely difficult to foresee, as neither the app developers nor the store operators know a priori which apps will be installed together. This paper presents an approach that allows end-users to safeguard a given bundle of apps installed on their device from such attacks. The approach, realized in a tool called SEPAR, combines static analysis with lightweight formal methods to automatically infer security-relevant properties from a bundle of apps. It then uses a constraint solver to synthesize possible security exploits, from which fine-grained security policies are derived and automatically enforced to protect a given device. In our experiments with over 4,000 Android apps, SEPAR has proven highly effective at detecting previously unknown vulnerabilities as well as preventing their exploitation.
"Practical, Formal Synthesis and Automatic Enforcement of Security Policies for Android." In DSN 2016. DOI: 10.1109/DSN.2016.53.
Xitao Wen, Bo Yang, Yan Chen, Chengchen Hu, Yi Wang, B. Liu, Xiaolin Chen
The OpenFlow paradigm embraces third-party development efforts, and therefore suffers from potential attacks that usurp the excessive privileges of control plane applications (apps). Such privilege abuse could lead to various attacks impacting the entire administrative domain. In this paper, we present SDNShield, a permission control system that helps network administrators to express and enforce only the minimum required privileges to individual controller apps. SDNShield achieves this goal through (i) fine-grained SDN permission abstractions that allow accurate representation of app behavior boundary, (ii) automatic security policy reconciliation that incorporates security policies specified by administrators into the requested app permissions, and (iii) a lightweight thread-based controller architecture for controller/app isolation and reliable permission enforcement. Through prototype implementation, we verify its effectiveness against proof-of-concept attacks. Performance evaluation shows that SDNShield introduces negligible runtime overhead.
"SDNShield: Reconciliating Configurable Application Permissions for SDN App Markets." In DSN 2016. DOI: 10.1109/DSN.2016.20.
Seyed Mohammad Seyedzadeh, R. Maddah, A. Jones, R. Melhem
Designing reliable systems using scaled Spin-Transfer Torque Random Access Memory (STT-RAM) has become a significant challenge as the memory technology's feature size shrinks. A key contributor to this reliability challenge is increasingly prominent read disturbance. However, techniques to address read disturbance are often considered in a vacuum that assumes other concerns, such as transient read errors (false reads) and write faults, do not occur. This paper studies several techniques that leverage ECC to mitigate persistent errors resulting from read disturbance and write faults in STT-RAM while still considering the impact of transient errors from false reads. In particular, we study three policies that enable better-than-conservative read disturbance mitigation. The first policy, write after error (WAE), uses ECC to detect errors and writes back data to clear persistent errors. The second policy, write after persistent error (WAP), filters out false reads by reading a second time when an error is detected, leading to a trade-off between write and read energy. The third policy, write after error threshold (WAT), leaves cells with incorrect data behind (up to a threshold) when the number of errors is less than the ECC capability. To evaluate the effectiveness of the different schemes and compare them with the simple previously proposed scheme of writing after every read (WAR), we model these policies using Markov processes. This approach allows the determination of appropriate bit error rates in the context of both persistent and transient errors, to accurately estimate the system reliability and energy consumption of the different error-correction approaches. Our evaluations show that each of these policies provides benefits in different error scenarios. Moreover, some approaches can save energy by an average of 99.5% while providing the same reliability as other approaches.
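The four policies compared in the abstract (WAR as the conservative baseline, plus WAE, WAP and WAT) can be sketched as a single decision function. The `read` callback and the threshold are hypothetical stand-ins for the per-block ECC machinery, not the paper's implementation:

```python
# Illustrative decision logic for when to write back (scrub) a block after a
# read, under each policy from the abstract. read() is assumed to return the
# number of ECC-detected errors in the block on one sensing pass.
def should_write_back(policy, read, threshold=1):
    """WAR: write after every read (baseline).
    WAE: write back whenever ECC detects any error.
    WAP: re-read to filter out transient false reads; write back only if
         the error persists across both reads.
    WAT: tolerate up to `threshold` persistent errors (still within the
         ECC correction capability) before writing back.
    """
    if policy == "WAR":
        return True
    errors = read()
    if errors == 0:
        return False
    if policy == "WAE":
        return True
    persistent = min(errors, read())  # second read filters transient errors
    if policy == "WAP":
        return persistent > 0
    if policy == "WAT":
        return persistent > threshold  # leave a few correctable errors behind
    raise ValueError(policy)
```

WAP pays extra read energy to avoid needless writes on false reads; WAT defers writes entirely while the error count stays safely below the ECC capability, which is where the large energy savings reported above come from.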
"Leveraging ECC to Mitigate Read Disturbance, False Reads and Write Faults in STT-RAM." In DSN 2016. DOI: 10.1109/DSN.2016.28.
Code identity is a fundamental concept for authenticated operations in Trusted Computing. In today's approach, the overhead of assigning an identity to a protected service increases linearly with the service code size. In addition, service code size continues to grow to accommodate richer services. This trend negatively impacts either the security or the efficiency of current protocols for trusted executions. We present an execution protocol that breaks the dependency between the code size of the service and the identification overhead, without affecting security, and that works on different trusted components. This is achieved by computing an identity for each of the code modules that are actually executed, and then building a robust chain of trust that links them together for efficient verification. We implemented and applied our protocol to a widely-deployed database engine, improving query-processing time up to 2× compared to the monolithic execution of the engine.
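The chain of trust that links per-module identities resembles a hash-extend operation: each executed module's digest is folded into a running chain value, so the final digest commits to exactly which modules ran, and in what order, without hashing the whole service binary up front. The sketch below assumes SHA-256 and an all-zero initial value, details the abstract does not specify:

```python
import hashlib

# Minimal sketch of identity chaining (assumed details, not the paper's
# protocol): extend a running chain value with the hash of each module that
# is actually executed.
def extend_chain(chain, module_code):
    h = hashlib.sha256()
    h.update(chain)                                  # previous chain value
    h.update(hashlib.sha256(module_code).digest())   # identity of this module
    return h.digest()

chain = b"\x00" * 32  # assumed initial chain value
for module in (b"parser", b"planner", b"executor"):
    chain = extend_chain(chain, module)
# `chain` now identifies the exact sequence of executed modules
```

Because only executed modules are measured, the identification overhead tracks the active code path rather than the total service code size, which is the dependency the protocol above breaks.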
Bruno Vavala, N. Neves, P. Steenkiste. "Secure Identification of Actively Executed Code on a Generic Trusted Component." In DSN 2016. DOI: 10.1109/DSN.2016.45.
How to improve the performance of single failure recovery has been an active research topic because of its prevalence in large-scale storage systems. We argue that when erasure coding is deployed in a cluster file system (CFS), existing single failure recovery designs are limited in several respects: they neglect the bandwidth diversity inherent in a CFS architecture, target specific erasure code constructions, and give no special treatment to load balancing during recovery. In this paper, we reconsider the single failure recovery problem in a CFS setting and propose CAR, a cross-rack-aware recovery algorithm. For each stripe, CAR finds a recovery solution that retrieves data from the minimum number of racks. It also reduces the amount of cross-rack repair traffic by performing intra-rack data aggregation prior to cross-rack transmission. Furthermore, by considering multi-stripe recovery, CAR balances the amount of cross-rack repair traffic across multiple racks. Evaluation results show that CAR effectively reduces both the amount of cross-rack repair traffic and the resulting recovery time.
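CAR's per-stripe idea (fetch surviving chunks from as few racks as possible, then aggregate within each rack before any cross-rack transfer) can be sketched with a simple greedy selection. The rack layout, names, and the greedy heuristic are illustrative assumptions, not the paper's algorithm:

```python
from collections import defaultdict

# Sketch: for an erasure-coded stripe where any k surviving chunks suffice
# for repair, greedily pick chunks from the racks holding the most survivors,
# so the repair touches the fewest racks. With intra-rack aggregation, the
# cross-rack traffic is then one combined block per involved rack.
def choose_chunks(surviving, rack_of, k):
    """Pick k surviving chunks from as few racks as possible (greedy)."""
    by_rack = defaultdict(list)
    for c in surviving:
        by_rack[rack_of[c]].append(c)
    chosen = []
    # prefer racks holding the most surviving chunks
    for rack, chunks in sorted(by_rack.items(), key=lambda kv: -len(kv[1])):
        for c in chunks:
            if len(chosen) == k:
                return chosen
            chosen.append(c)
    return chosen

rack_of = {"c0": "r1", "c1": "r1", "c2": "r2", "c3": "r3"}
picked = choose_chunks(["c0", "c1", "c2", "c3"], rack_of, k=3)
# both chunks in r1 are used, so only 2 racks are touched instead of 3
```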
Zhirong Shen, J. Shu, P. Lee. "Reconsidering Single Failure Recovery in Clustered File Systems." In DSN 2016. DOI: 10.1109/DSN.2016.37.
Lannan Luo, Yu Fu, Dinghao Wu, Sencun Zhu, Peng Liu
App repackaging has become a severe threat to the Android ecosystem. While various protection techniques, such as watermarking and repackaging detection, have been proposed, a defense that stops repackaged apps from working on user devices, i.e., repackage-proofing, is missing. We propose a technique that builds a reliable and stealthy repackage-proofing capability into Android apps. A large number of detection nodes are inserted into the original app without incurring much overhead, each woven into the surrounding code to conceal itself. Once repackaging is detected, a response node injects a failure in the form of delayed malfunctions, making the defense difficult to trace back. The response nodes and detection nodes form high-degree connections and communicate through stealthy channels, such that upon detection several of the many response nodes are selected stochastically to take action, which further obfuscates and strengthens the protection. We have built a prototype, and the evaluation shows that the technique is both effective and efficient.
{"title":"Repackage-Proofing Android Apps","authors":"Lannan Luo, Yu Fu, Dinghao Wu, Sencun Zhu, Peng Liu","doi":"10.1109/DSN.2016.56","DOIUrl":"https://doi.org/10.1109/DSN.2016.56","url":null,"abstract":"App repackaging has become a severe threat to theAndroid ecosystem. While various protection techniques, such as watermarking and repackaging detection, have been proposed, a defense that stops repackaged apps from working on user devices, i.e., repackage-proofing, is missing. We propose a technique that builds a reliable and stealthy repackage-proofing capability into Android apps. A large number of detection nodes are inserted into the original app without incurring much overhead, each is woven into the surrounding code to blur itself. Once repackaging is detected, a response node injects a failure in the form of delayed malfunctions, making it difficult to trace back. The response nodes and detection nodes form high-degree connections and communicate through stealthy communication channels, such that upon detection several of the many response nodes are selected stochastically to take actions, which further obfuscates and enhances the protection. We have built a prototype. The evaluation shows that the technique is effective and efficient.","PeriodicalId":102292,"journal":{"name":"2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125448808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ReadDuo: Constructing Reliable MLC Phase Change Memory through Fast and Robust Readout
Rujia Wang, Youtao Zhang, Jun Yang
DOI: 10.1109/DSN.2016.27
Phase change memory (PCM) has emerged as a promising non-volatile memory technology. Multi-level cell (MLC) PCM, while effectively reducing per-bit fabrication cost, suffers from resistance-drift-induced soft errors. It is challenging to construct reliable MLC chips that achieve high performance, high storage density, and low energy consumption simultaneously. In this paper, we propose ReadDuo, a fast and robust readout solution that addresses resistance drift in MLC PCM. We first integrate fast current sensing with resistance-drift-resilient voltage sensing, which exposes performance optimization opportunities without sacrificing reliability. We then devise last-writes tracking and selective different write schemes to minimize the performance and energy overhead of scrubbing. Our experimental results show that ReadDuo achieves a 37% improvement on average over existing solutions when performance, dynamic energy consumption, and storage density are considered together.