Where Are You Taking Me? Behavioral Analysis of Open DNS Resolvers
Jeman Park, Aminollah Khormali, Manar Mohaisen, Aziz Mohaisen
DOI: 10.1109/DSN.2019.00057. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), June 2019.

Abstract: Open DNS resolvers are resolvers that perform recursive resolution on behalf of any user. Because they are open to the public and require no authorization to use, they can be exploited by adversaries. It is therefore important to understand the state of open resolvers to gauge their potential negative impact on the security and stability of the Internet. In this study, we conducted a comprehensive probing of the entire IPv4 address space and found that more than 3 million open resolvers still exist in the wild. Moreover, we found that many of them work in ways that deviate from the standard. More importantly, we found that many open resolvers answer queries with incorrect, or even malicious, responses. Compared with results obtained in 2013, the number of open resolvers has decreased significantly, yet the number of resolvers providing incorrect responses is almost unchanged, while the number providing malicious responses has increased, highlighting the prevalence of this threat.

POLaR: Per-Allocation Object Layout Randomization
Jonghwan Kim, Daehee Jang, Yunjong Jeong, Brent Byunghoon Kang
DOI: 10.1109/DSN.2019.00058. DSN 2019, June 2019.

Abstract: Object Layout Randomization (OLR) is a memory randomization approach that makes the in-object memory layout unpredictable by shuffling and relocating the member fields of an object. This defense significantly mitigates various types of memory-error attacks. However, the current state of the art enforces OLR at compile time: it produces a diversified object layout for each binary, but the layout remains the same across executions. This approach can be effective when the program binary is hidden from attackers, but it has several limitations: (i) the security efficacy rests on the premise that the binary is kept undisclosed from adversaries, (ii) the randomized object layout is identical across multiple executions, and (iii) the programmer must manually specify which objects OLR should affect. In this paper, we introduce Per-Allocation Object Layout Randomization (POLaR), the first dynamic OLR approach suited for public binaries. POLaR applies randomization at runtime and produces a unique object layout even for instances of the same type. As a result, POLaR achieves two previously unmet security primitives: (i) the randomization does not break upon exposure of the binary, and (ii) repeating the same attack does not result in deterministic behavior. In addition, we implemented the TaintClass framework, based on the DFSan project, to optimize and automate the target-object selection process. To show the efficacy of POLaR, we use several public open-source software packages and the SPEC2006 benchmark suite.

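The core idea of per-allocation randomization can be modeled in a few lines: every allocation of the same type draws a fresh field-offset permutation, so an attacker who learns one instance's layout gains nothing about the next. This is a toy model under assumed names (the field set and 8-byte slots are hypothetical), not POLaR's runtime mechanism.

```python
import random

class RandomizedLayout:
    """Toy model of per-allocation object layout randomization: each
    instance gets its own shuffled field-offset table, so the layout
    differs across allocations of the same type."""

    FIELDS = ("id", "balance", "owner")  # hypothetical member fields

    def __init__(self, rng: random.Random):
        order = list(self.FIELDS)
        rng.shuffle(order)  # fresh permutation for *this* allocation
        self.offsets = {name: i * 8 for i, name in enumerate(order)}
        self.storage = {}

    def write(self, field, value):
        self.storage[self.offsets[field]] = value

    def read(self, field):
        return self.storage[self.offsets[field]]
```

Legitimate accesses go through the per-instance offset table and always succeed, while a hard-coded offset (the attacker's view) only hits the intended field by chance.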
Efficient Treatment of Uncertainty in System Reliability Analysis using Importance Measures
H. Aliee, Faramarz Khosravi, J. Teich
DOI: 10.1109/DSN.2019.00022. DSN 2019, June 2019.

Abstract: The reliability of today's electronic products suffers from a growing variability of failure and ageing effects. In this paper, we investigate a technique for efficiently deriving the uncertainty distribution of system reliability. We assume that a system is composed of unreliable components whose reliabilities are modeled as probability distributions. Existing Monte Carlo (MC) simulation-based techniques, which iteratively draw a sample from the probability distribution of each component, often suffer from high execution time and/or poor coverage of the sample space. To avoid costly re-evaluation of system reliability during MC simulation, we propose to employ the Taylor expansion of the system reliability function. Moreover, we propose a stratified sampling technique based on the fact that components may contribute (or matter) unequally to the uncertainty of the overall system: the probability distributions of components with high contribution are stratified finely, and those with low contribution coarsely. Experimental results show that the proposed technique is more efficient and provides more accurate results than previously proposed techniques.

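Importance-aware stratification can be sketched concretely for a tiny series system, where system reliability is the product of component reliabilities. In this illustrative sketch, component reliabilities are uniform distributions (an assumption for simplicity; the paper is distribution-agnostic) and the more influential component is split into more strata.

```python
import random
from itertools import product

def stratified_mean_reliability(bounds, strata_counts,
                                samples_per_stratum=200, seed=0):
    """Stratified MC estimate of the mean reliability of a series system.
    Component i's reliability ~ Uniform(lo, hi); components deemed more
    important get more strata (finer stratification)."""
    rng = random.Random(seed)

    def strata(lo, hi, n):
        # Split [lo, hi] into n equal-width slices.
        step = (hi - lo) / n
        return [(lo + k * step, lo + (k + 1) * step) for k in range(n)]

    grids = [strata(lo, hi, n) for (lo, hi), n in zip(bounds, strata_counts)]
    total, weight_sum = 0.0, 0.0
    # Enumerate the cross product of strata; each cell's probability weight
    # is the product of slice widths over full widths (uniform marginals).
    for cell in product(*grids):
        w = 1.0
        for (lo, hi), (slo, shi) in zip(bounds, cell):
            w *= (shi - slo) / (hi - lo)
        est = 0.0
        for _ in range(samples_per_stratum):
            r = 1.0
            for slo, shi in cell:
                r *= rng.uniform(slo, shi)  # series system: product of reliabilities
            est += r
        total += w * est / samples_per_stratum
        weight_sum += w
    return total / weight_sum
```

For two independent components on [0.8, 1.0] and [0.9, 1.0] the analytic mean is 0.9 × 0.95 = 0.855, which the stratified estimate recovers closely; the paper's Taylor-expansion step would additionally replace the inner product evaluation with a cheap polynomial surrogate.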
NeXUS: Practical and Secure Access Control on Untrusted Storage Platforms using Client-Side SGX
J. B. Djoko, Jack Lange, Adam J. Lee
DOI: 10.1109/DSN.2019.00049. DSN 2019, June 2019.

Abstract: With the rising popularity of file-sharing services such as Google Drive and Dropbox in the workflows of individuals and corporations alike, the protection of client-outsourced data from unauthorized access or tampering remains a major security concern. Existing cryptographic solutions to this problem typically require server-side support, involve non-trivial key management on the part of users, and suffer from severe re-encryption penalties upon access revocations. This combination of performance overheads and management burdens makes this class of solutions undesirable in situations where performant, platform-agnostic, dynamic sharing of user content is required. We present NEXUS, a stackable filesystem that leverages trusted hardware to provide confidentiality and integrity for user files stored on untrusted platforms. NEXUS is explicitly designed to balance security, portability, and performance: it supports dynamic sharing of protected volumes on any platform exposing a file access API without requiring server-side support, enables fine-grained access control policies for selective sharing, and avoids the key-revocation and file re-encryption overheads associated with other cryptographic approaches to access control. This combination of features is made possible by a client-side Intel SGX enclave that protects and shares NEXUS volumes, ensuring that cryptographic keys never leave enclave memory and obviating the need to re-encrypt files upon revocation of access rights. We implemented a NEXUS prototype that runs on top of the AFS filesystem and show that it incurs roughly 2× overhead for a variety of common file and database operations.

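The key architectural point, that revocation needs no re-encryption because file keys never leave the enclave, can be sketched with a stand-in "enclave" object. Everything here is a toy under assumed names: the XOR keystream is NOT real encryption (NEXUS uses authenticated encryption inside SGX), and the API is invented for illustration.

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' for illustration only; do not use for real data."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class ToyEnclave:
    """Stand-in for the client-side SGX enclave: file keys never leave
    this object, so revocation only edits the policy; the ciphertext on
    the untrusted store is never touched."""
    def __init__(self):
        self._file_keys = {}   # path -> key, held "inside" the enclave
        self._acl = {}         # path -> set of authorized users

    def protect(self, path, plaintext, owner):
        key = os.urandom(32)
        self._file_keys[path] = key
        self._acl[path] = {owner}
        return _keystream_xor(key, plaintext)  # ciphertext for untrusted storage

    def grant(self, path, user):
        self._acl[path].add(user)

    def revoke(self, path, user):
        self._acl[path].discard(user)          # no re-encryption needed

    def open(self, path, ciphertext, user):
        if user not in self._acl[path]:
            raise PermissionError(user)
        return _keystream_xor(self._file_keys[path], ciphertext)
```

Because decryption is gated by the policy check inside the enclave, revoking a user immediately cuts off access even though the stored ciphertext (and its key) are unchanged.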
FabZK: Supporting Privacy-Preserving, Auditable Smart Contracts in Hyperledger Fabric
Hui Kang, Ting Dai, Nerla Jean-Louis, S. Tao, Xiaohui Gu
DOI: 10.1109/DSN.2019.00061. DSN 2019, June 2019.

Abstract: On a blockchain network, transaction data are exposed to all participants. To preserve privacy and confidentiality of transactions while maintaining data immutability, we design and implement FabZK. FabZK conceals transaction details on the shared ledger by storing only encrypted data for each transaction (e.g., the payment amount) and by anonymizing the transactional relationships (e.g., payer and payee) between members of the blockchain network. It achieves both privacy and auditability by supporting verifiable Pedersen commitments and constructing zero-knowledge proofs. FabZK is implemented as an extension to the open-source Hyperledger Fabric. It provides APIs to easily enable data privacy in both client code and chaincode, and supports on-demand, automated auditing over encrypted data. Our evaluation shows that FabZK offers strong privacy-preserving capabilities while delivering reasonable performance for applications developed on its framework.

SuDoku: Tolerating High Rate of Transient Failures for Enabling Scalable STTRAM
Prashant J. Nair, Bahar Asgari, Moinuddin K. Qureshi
DOI: 10.1109/DSN.2019.00048. DSN 2019, June 2019.

Abstract: Conventionally, systems have relied on technology scaling to provide smaller cells, which helps in increasing the capacity of on-chip and off-chip structures. Unfortunately, scaling technology to smaller nodes causes increased susceptibility to faults. We study the problem of efficiently tolerating transient failures using scalable Spin-Transfer Torque RAM (STTRAM) as an example. At smaller feature sizes, the energy required to flip an STTRAM cell decreases, which makes these cells more susceptible to random failures caused by thermal noise. Such failures can be tolerated by periodic scrubbing and by provisioning each line with an Error Correction Code (ECC). However, to tolerate the desired bit-error rate, the cache needs ECC-6 (six-bit error correction) per line, incurring impractical storage overheads. Ideally, we want to tolerate these faults without relying on multi-bit ECC. We propose SuDoku, a design that provisions each line with ECC-1 and a strong error-detection code, and relies on a region-based RAID-4 to correct multi-bit errors. Unfortunately, simply having such a RAID-4-based architecture is ineffective at tolerating a high rate of transient faults and provides an MTTF on the order of only a few seconds. We describe a novel data resurrection scheme that can repair multiple faulty lines in a RAID-4 region, increasing the MTTF to several hours. We further propose an extension of SuDoku that hashes a given line into two RAID-4 regions, significantly enhancing reliability and increasing the MTTF to trillions of hours. Our evaluations show that SuDoku provides 874× higher reliability than ECC-6, incurs 30% less storage than ECC-6, and performs within 0.1% of an ideal fault-free baseline.

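The RAID-4 correction step underlying SuDoku is plain XOR parity: a region's parity line is the XOR of its data lines, so any one line whose position is known (here, located by the per-line ECC-1 plus detection code) can be rebuilt from the survivors. This sketch models only the single-region parity rebuild, not the paper's multi-line resurrection or two-region hashing.

```python
def parity(lines):
    """XOR parity across the data lines of a region (RAID-4 style)."""
    out = bytearray(len(lines[0]))
    for line in lines:
        for i, b in enumerate(line):
            out[i] ^= b
    return bytes(out)

def resurrect(lines, parity_line, faulty_index):
    """Rebuild one faulty line as the XOR of the parity line with all
    surviving data lines (its stored contents are ignored)."""
    survivors = [l for i, l in enumerate(lines) if i != faulty_index]
    return parity(survivors + [parity_line])
```

Because XOR is its own inverse, folding the parity line back over the survivors cancels every healthy line and leaves exactly the lost one.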
Deep Validation: Toward Detecting Real-World Corner Cases for Deep Neural Networks
Weibin Wu, Hui Xu, Sanqiang Zhong, Michael R. Lyu, Irwin King
DOI: 10.1109/DSN.2019.00026. DSN 2019, June 2019.

Abstract: The exceptional performance of deep neural networks (DNNs) encourages their deployment in safety- and dependability-critical systems. However, DNNs often demonstrate erroneous behaviors in real-world corner cases. Existing countermeasures center on improving testing and bug-fixing practice. Unfortunately, building a bug-free DNN-based system is currently almost impossible due to its black-box nature, so anomaly detection is imperative in practice. Motivated by the idea of data validation in traditional programs, we propose and implement Deep Validation, a novel framework for detecting real-world error-inducing corner cases in a DNN-based system at runtime. We model the specifications of DNNs by resorting to their training data and cast checking the input validity of DNNs as a problem of discrepancy estimation. Deep Validation achieves excellent detection results against various corner-case scenarios across three popular datasets. Consequently, Deep Validation greatly complements existing efforts and is a crucial step toward building safe and dependable DNN-based systems.

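The idea of casting input validity as discrepancy estimation against the training data can be illustrated with the simplest possible instance: flag an input whose distance to its nearest training sample exceeds a threshold. This nearest-neighbor rule is a crude stand-in chosen for clarity, not Deep Validation's actual estimator.

```python
def min_distance(x, training_set):
    """Squared Euclidean distance from x to its nearest training sample."""
    return min(sum((a - b) ** 2 for a, b in zip(x, t)) for t in training_set)

def is_corner_case(x, training_set, threshold):
    """Flag inputs that lie far from everything the model was trained on:
    a minimal stand-in for runtime discrepancy estimation."""
    return min_distance(x, training_set) > threshold
```

In a deployed system the check would run on a learned feature representation rather than raw inputs, with the threshold calibrated on held-out in-distribution data.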
ML-Based Fault Injection for Autonomous Vehicles: A Case for Bayesian Fault Injection
Saurabh Jha, Subho Sankar Banerjee, Timothy Tsai, S. Hari, Michael B. Sullivan, Z. Kalbarczyk, S. Keckler, R. Iyer
DOI: 10.1109/DSN.2019.00025. DSN 2019, June 2019.

Abstract: The safety and resilience of fully autonomous vehicles (AVs) are of significant concern, as exemplified by several headline-making accidents. While AV development today involves verification, validation, and testing, end-to-end assessment of AV systems under accidental faults in realistic driving scenarios has been largely unexplored. This paper presents DriveFI, a machine learning-based fault injection engine, which can mine situations and faults that maximally impact AV safety, as demonstrated on two industry-grade AV technology stacks (from NVIDIA and Baidu). For example, DriveFI found 561 safety-critical faults in less than 4 hours. In comparison, random injection experiments executed over several weeks could not find any safety-critical faults.

{"title":"Message from the Research Track Chairs","authors":"","doi":"10.1109/dsn.2019.00006","DOIUrl":"https://doi.org/10.1109/dsn.2019.00006","url":null,"abstract":"","PeriodicalId":271955,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131374368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[Title page iii]","authors":"","doi":"10.1109/dsn.2019.00002","DOIUrl":"https://doi.org/10.1109/dsn.2019.00002","url":null,"abstract":"","PeriodicalId":271955,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125433458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}