Automated contact tracing is a key solution to control the spread of airborne transmissible diseases: it traces contacts among individuals in order to alert people to their potential risk of infection. The current SARS-CoV-2 pandemic has put a heavy strain on the healthcare systems of many countries. Governments chose different approaches to contain the spread of the virus, and contact tracing apps were considered among the most effective. In particular, by leveraging Bluetooth Low Energy technology, mobile apps can achieve privacy-preserving contact tracing of citizens. While researchers have proposed several contact tracing approaches, each government developed its own national contact tracing app. In this paper, we demonstrate that many popular contact tracing apps (e.g., those promoted by the Italian, French, and Swiss governments) are vulnerable to relay attacks. Through such attacks, people might be misleadingly diagnosed as positive for SARS-CoV-2 and thus forced into quarantine, eventually leading to a breakdown of the healthcare system. To tackle this vulnerability, we propose a novel and lightweight solution that prevents relay attacks while providing the same privacy-preserving features as current approaches. To evaluate the feasibility of both the relay attack and our novel defence mechanism, we developed a proof of concept against the Italian contact tracing app (i.e., Immuni). The design of our defence allows it to be integrated into any contact tracing app. To foster the adoption of our solution and encourage developers to integrate it in future releases, we publish the source code.
"Contact Tracing Made Un-relay-able" — M. Casagrande, M. Conti, E. Losiouk. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, 2020-10-23. DOI: 10.1145/3422337.3447829.
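The relay attack exploits the fact that BLE contact-tracing beacons carry no proof of physical proximity: an identifier captured in one place can be rebroadcast elsewhere. A minimal Python simulation of the idea follows; the `Device` class and random tokens are illustrative stand-ins, not Immuni's actual GAEN-based protocol.

```python
import secrets

class Device:
    """A contact-tracing device that broadcasts rolling identifiers
    and logs any identifier it hears over BLE."""
    def __init__(self, name):
        self.name = name
        self.heard = set()          # identifiers received over BLE

    def broadcast(self):
        # Real apps derive a rolling proximity identifier from a daily
        # key; a random token is enough for this sketch.
        return secrets.token_hex(16)

    def receive(self, identifier):
        self.heard.add(identifier)

# Infected person at location A, victim at distant location B.
infected, victim = Device("infected"), Device("victim")

# Relay attacker: sniffs the identifier at A, rebroadcasts it at B.
captured = infected.broadcast()     # attacker's receiver at location A
victim.receive(captured)            # attacker's transmitter at location B

# When the infected person later uploads their keys, the victim's app
# finds a match even though the two were never physically close.
assert captured in victim.heard
```

In the real attack, the relayed identifier matches the diagnosis keys the infected user later uploads, so the victim's app raises a false exposure alert.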
Federated Learning (FL) is a distributed, decentralized machine learning protocol. By executing FL, a set of agents can jointly train a model without sharing their datasets with each other or with a third party. This makes FL particularly suitable for settings where data privacy is desired. At the same time, concealing training data gives attackers an opportunity to inject backdoors into the trained model. It has been shown that an attacker can inject backdoors into the model during FL and later leverage them to make the model misclassify. Several works have tried to alleviate this threat by designing robust aggregation functions. However, given that ever more sophisticated attacks that bypass the existing defenses are developed over time, we approach this problem from a complementary angle in this work. In particular, we aim to discourage backdoor attacks by detecting and punishing the attackers, possibly after the end of the training phase. To this end, we develop a hybrid blockchain-based FL framework that uses smart contracts to automatically detect and punish attackers via monetary penalties. Our framework is general in the sense that any aggregation function and any attacker detection algorithm can be plugged into it. We conduct experiments demonstrating that our framework preserves the communication-efficient nature of FL, and provide empirical results illustrating that it can successfully penalize attackers by leveraging our novel attacker detection algorithm.
"BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture" — H. Desai, Mustafa Safa Ozdayi, Murat Kantarcioglu. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, 2020-10-14. DOI: 10.1145/3422337.3447837.
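The detect-and-punish mechanism can be sketched as deposit-and-slash logic. The plain-Python `PenaltyContract` below is an illustrative stand-in for the paper's smart contracts; the class, agent names, and deposit amount are all made up for the sketch.

```python
class PenaltyContract:
    """Sketch of the accountability idea: agents stake a deposit to
    join training; if the detection algorithm later flags an agent's
    updates as backdoored, the stake is slashed, otherwise refunded."""
    def __init__(self, deposit):
        self.deposit = deposit
        self.stakes = {}

    def join(self, agent):
        self.stakes[agent] = self.deposit   # agent locks funds to train

    def settle(self, flagged):
        # Flagged agents forfeit their stake; honest agents are repaid.
        return {agent: (0 if agent in flagged else stake)
                for agent, stake in self.stakes.items()}

contract = PenaltyContract(deposit=100)
for agent in ("alice", "bob", "mallory"):
    contract.join(agent)

# Suppose the attacker detection algorithm flags mallory after training.
print(contract.settle({"mallory"}))   # {'alice': 100, 'bob': 100, 'mallory': 0}
```

The monetary penalty turns a purely technical defense into an economic deterrent: attacking remains possible, but it now has a guaranteed cost.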
I. Anjum, Mu Zhu, Isaac Polinsky, W. Enck, M. Reiter, Munindar P. Singh
Historically, enterprise network reconnaissance has been an active process, often involving port scanning. However, as routers and switches become more complex, they also become more susceptible to compromise. From this vantage point, an attacker can passively identify high-value hosts such as the workstations of IT administrators, C-suite executives, and finance personnel. The goal of this paper is to develop a technique to deceive and dissuade such adversaries. We propose HoneyRoles, which uses honey connections to build metaphorical haystacks around the network traffic of client hosts belonging to high-value organizational roles. The honey connections also act as network canaries that signal network compromise, thereby dissuading the adversary from acting on information observed in network flows. We design a prototype implementation of HoneyRoles as an OpenFlow SDN controller and evaluate its security using the PRISM probabilistic model checker. Our performance evaluation shows that HoneyRoles has a small effect on network request completion time, and our security analysis demonstrates that once an alert is raised, HoneyRoles can quickly identify the compromised switch with high probability. In doing so, we show that role-based network deception is a promising approach for defending against adversaries in compromised network devices.
"Role-Based Deception in Enterprise Networks" — I. Anjum, Mu Zhu, Isaac Polinsky, W. Enck, M. Reiter, Munindar P. Singh. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, 2020-08-07. DOI: 10.1145/3422337.3447824.
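The haystack-and-canary idea can be sketched as follows; the flow format, decoy naming, and three-decoys-per-host ratio are illustrative assumptions, not the HoneyRoles design.

```python
import random

def build_flows(real_hosts, honey_per_host=3):
    """Interleave honey flows with real ones so an eavesdropping switch
    cannot tell which hosts are high-value."""
    flows = [(host, "real") for host in real_hosts]
    for host in real_hosts:
        flows += [(f"decoy-of-{host}", "honey")] * honey_per_host
    random.shuffle(flows)      # adversary sees an undifferentiated haystack
    return flows

def acted_on(flow):
    """A honey flow serves no real purpose, so any action taken on it
    is evidence of a compromised device (the canary fires)."""
    host, kind = flow
    return kind == "honey"

flows = build_flows(["cfo-workstation", "it-admin-pc"])
# With 3 decoys per real host, a blind adversary picks a real flow
# only 1 time in 4; the other 3 picks raise an alert.
assert sum(acted_on(f) for f in flows) == 6
```

The deterrent is probabilistic: the more honey flows surround each real one, the more likely an adversary acting on observed traffic is to expose itself.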
In training their newly developed malware detection methods, researchers rely on threshold-based labeling strategies that interpret the scan reports provided by online platforms such as VirusTotal. The dynamicity of this platform renders those labeling strategies unsustainable over prolonged periods, which leads to inaccurate labels. Using inaccurately labeled apps to train and evaluate malware detection methods significantly undermines the reliability of their results, leading to either dismissing otherwise promising detection approaches or adopting intrinsically inadequate ones. The infeasibility of generating accurate labels via manual analysis and the lack of reliable alternatives force researchers to utilize VirusTotal to label apps. In this paper, we tackle this issue in two ways. First, we reveal the aspects of VirusTotal's dynamicity, show how they impact threshold-based labeling strategies, and provide actionable insights on how to use these labeling strategies reliably despite VirusTotal's dynamicity. Second, we motivate the implementation of alternative platforms by (a) identifying VirusTotal limitations that such platforms should avoid, and (b) proposing an architecture for how such platforms can be constructed to mitigate VirusTotal's limitations.
"Towards Accurate Labeling of Android Apps for Reliable Malware Detection" — Aleieldin Salem. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, 2020-07-01. DOI: 10.1145/3422337.3447849.
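A threshold-based labeling strategy of the kind the paper analyzes can be sketched in a few lines; the engine names, verdicts, and `threshold=4` are illustrative, and the two scans show how a platform's changing verdicts can flip a fixed-threshold label over time.

```python
def threshold_label(scan_report, threshold=4):
    """Label an app malicious iff at least `threshold` engines flag it."""
    positives = sum(1 for verdict in scan_report.values() if verdict)
    return positives >= threshold

# Two scans of the same app taken months apart: engines revise their
# verdicts over time, so the same threshold yields different labels.
scan_old = {"EngineA": True, "EngineB": True, "EngineC": True, "EngineD": True}
scan_new = {"EngineA": True, "EngineB": False, "EngineC": True, "EngineD": False}

print(threshold_label(scan_old))  # True  -> labeled malware
print(threshold_label(scan_new))  # False -> same app now labeled benign
```

This instability is exactly why a threshold tuned against one snapshot of scan reports can silently degrade a training set built from a later snapshot.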
We study the membership inference (MI) attack against classifiers, where the attacker's goal is to determine whether a data instance was used for training the classifier. Through systematic cataloging of existing MI attacks and extensive experimental evaluations of them, we find that a model's vulnerability to MI attacks is tightly related to the generalization gap---the difference between training accuracy and test accuracy. We then propose a defense against MI attacks that aims to close the gap by intentionally reducing the training accuracy. More specifically, the training process attempts to match the training and validation accuracies by means of a new regularizer based on the Maximum Mean Discrepancy between the empirical distributions of the softmax outputs on the training and validation sets. Our experimental results show that combining this approach with another simple defense (mix-up training) significantly improves the state of the art in defending against MI attacks, with minimal impact on testing accuracy.
"Membership Inference Attacks and Defenses in Classification Models" — Jiacheng Li, Ninghui Li, Bruno Ribeiro. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, 2020-02-27. DOI: 10.1145/3422337.3447836.
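The Maximum Mean Discrepancy term behind such a regularizer can be sketched with a biased estimator and a Gaussian kernel. The Dirichlet-sampled "softmax outputs" below are synthetic stand-ins for real model outputs, and the kernel bandwidth is an arbitrary illustrative choice, not the paper's configuration.

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimator of squared Maximum Mean Discrepancy between two
    sample sets, with Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
# Synthetic 2-class softmax outputs: training outputs are typically more
# confident than validation outputs when the model overfits.
train_softmax = rng.dirichlet([5.0, 1.0], size=200)   # confident
val_softmax   = rng.dirichlet([2.0, 1.0], size=200)   # less confident
same_dist     = rng.dirichlet([5.0, 1.0], size=200)

# The regularizer penalizes exactly this distributional gap:
assert mmd2(train_softmax, val_softmax) > mmd2(train_softmax, same_dist)
```

Minimizing this quantity during training pushes the training-set softmax distribution toward the validation-set one, shrinking the generalization gap that MI attacks exploit.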
Adversarial examples are among the biggest challenges for machine learning models, especially neural network classifiers. Adversarial examples are inputs manipulated with perturbations that are insignificant to humans yet able to fool machine learning models. Researchers have achieved great progress in utilizing adversarial training as a defense. However, its overwhelming computational cost degrades its applicability, and little has been done to overcome this issue. Single-step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples. In this work, we first experimentally analyze several different state-of-the-art (SOTA) defenses against adversarial examples. Then, based on observations from the experiments, we propose a novel single-step adversarial training method that can defend against both single-step and iterative adversarial examples. Through extensive evaluations, we demonstrate that our proposed method successfully combines the advantages of both single-step (low training overhead) and iterative (high robustness) adversarial training defenses. Compared with ATDA on the CIFAR-10 dataset, for example, our proposed method achieves a 35.67% improvement in test accuracy and a 19.14% reduction in training time. When compared with methods that use BIM or Madry examples (iterative methods) on the CIFAR-10 dataset, our proposed method saves up to 76.03% in training time, with less than 3.78% degradation in test accuracy. Finally, our experiments with the ImageNet dataset clearly show the scalability of our approach and its performance advantages over SOTA single-step approaches.
"Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples" — Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, 2020-02-22. DOI: 10.1145/3422337.3447841.
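The single-step vs. iterative distinction can be illustrated on a toy model: FGSM takes one signed-gradient step of size eps, while BIM repeats smaller steps clipped to the same eps-ball. This is a generic sketch of the two attack families on a hypothetical logistic model, not the paper's proposed training method.

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Single-step attack: one signed-gradient step of size eps."""
    return x + eps * np.sign(grad_x)

def bim(x, grad_fn, eps, steps=10):
    """Iterative attack: repeat smaller FGSM steps, projecting back
    into the eps-ball around the original input after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, grad_fn(x_adv), eps / steps)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy logistic model with label y=1: the loss gradient w.r.t. the
# input is w * (sigmoid(w.x) - y).
w = np.array([1.0, -2.0, 0.5])
x, y = np.array([0.2, 0.1, -0.3]), 1.0
grad = lambda x_: w * (1.0 / (1.0 + np.exp(-w @ x_)) - y)

x_fgsm = fgsm(x, grad(x), eps=0.1)
x_bim  = bim(x, grad, eps=0.1)

# Both perturbations stay inside the same eps-ball; BIM simply spends
# more compute refining the direction, which is why it defeats
# defenses trained only against single-step examples.
assert np.all(np.abs(x_fgsm - x) <= 0.1 + 1e-9)
assert np.all(np.abs(x_bim  - x) <= 0.1 + 1e-9)
```

The cost asymmetry is visible here too: BIM needs `steps` gradient evaluations per example, which is exactly the training-time overhead single-step methods try to avoid.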
Seoyeon Kang, Sujeong Lee, Yumin Kim, Seong-Kyun Mok, Eun-Sun Cho
In this paper, we propose OBFUS, a web-based tool that can easily apply obfuscation techniques to both high-level and low-level programming languages. OBFUS's high-level obfuscator parses and obfuscates source code, and obfuscation passes can be layered to produce more complex results. OBFUS's low-level obfuscator decompiles binary programs into LLVM IR, obfuscates the IR, and recompiles it into an obfuscated binary program.
"OBFUS" — Seoyeon Kang, Sujeong Lee, Yumin Kim, Seong-Kyun Mok, Eun-Sun Cho. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy. DOI: 10.1145/3422337.3450321.
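A toy high-level obfuscation pass in the spirit described above might rename identifiers to opaque names. This regex-based sketch is purely illustrative of how passes can be layered over source code; it is not OBFUS's implementation.

```python
import re
import secrets

def rename_identifiers(src, names):
    """Replace each listed identifier with an opaque random name.
    Applying such passes repeatedly layers the obfuscation."""
    mapping = {n: "v_" + secrets.token_hex(4) for n in names}
    for old, new in mapping.items():
        src = re.sub(rf"\b{old}\b", new, src)
    return src

code = "def area(width, height):\n    return width * height\n"
obfuscated = rename_identifiers(code, ["area", "width", "height"])

assert "width" not in obfuscated   # original names are gone
exec(obfuscated)                   # result is still valid Python
```

Renaming preserves behavior while destroying the human-readable vocabulary, which is the basic trade every obfuscation pass makes; real tools combine many such passes, including control-flow transformations.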
Yue Li, Zhenyu Wu, Haining Wang, Kun Sun, Zhichun Li, Kangkook Jee, J. Rhee, Haifeng Chen
Tracking user activities inside an enterprise network has been a fundamental building block of today's security infrastructure, as it provides accurate user profiling and helps security auditors make informed decisions based on insights derived from abundant log data. Towards more accurate user tracking, we propose a novel paradigm named UTrack that leverages rich system-level audit logs. From a holistic perspective, we bridge the semantic gap between user accounts and real users, tracking a real user's activities across different user accounts and different network hosts based on causal relationships among processes. To achieve better scalability and a more salient view, we apply a variety of data reduction and compression techniques that process the large amount of data and significantly reduce its volume. We implement UTrack in a real enterprise environment consisting of 111 hosts, which generated more than 4 billion events over the one-month experiment period. Through our evaluation, we demonstrate that UTrack is able to accurately identify the events that are relevant to user activities. Our data reduction and compression modules largely reduce the output data size, producing an overview of user session profiles that is both accurate and salient.
"UTrack" — Yue Li, Zhenyu Wu, Haining Wang, Kun Sun, Zhichun Li, Kangkook Jee, J. Rhee, Haifeng Chen. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy. DOI: 10.1145/3422337.3447831.
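The causal account-to-user bridging can be sketched by walking process ancestry in audit events: a process running under a different account (e.g. after sudo) is attributed to the real user who started the session. The event schema below is a hypothetical simplification, not UTrack's log format.

```python
# Simplified audit events: (pid, ppid, account) per process.
events = [
    {"pid": 1, "ppid": 0, "account": "alice", "exe": "sshd"},
    {"pid": 2, "ppid": 1, "account": "alice", "exe": "bash"},
    {"pid": 3, "ppid": 2, "account": "root",  "exe": "sudo"},
    {"pid": 4, "ppid": 3, "account": "root",  "exe": "apt"},
]

def real_user(pid, events):
    """Walk parent links to the session root and return its account,
    attributing account switches (su/sudo) back to the real user."""
    by_pid = {e["pid"]: e for e in events}
    while pid in by_pid and by_pid[pid]["ppid"] in by_pid:
        pid = by_pid[pid]["ppid"]        # follow the causal chain
    return by_pid[pid]["account"]        # account that began the session

# The root-owned apt process is traced back to the real user alice.
assert real_user(4, events) == "alice"
```

At enterprise scale this ancestry walk runs over billions of events, which is why the paper pairs it with aggressive data reduction and compression.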
Bitcoin is the most popular cryptocurrency, supporting payment services via the Bitcoin peer-to-peer network. However, Bitcoin suffers from a fundamental problem: in practice, a secure Bitcoin transaction requires the payee to wait for at least 6 block confirmations (about one hour) to be validated. Such a long waiting time thwarts the wide deployment of Bitcoin payment services, because many usage scenarios require a much shorter wait. In this paper, we propose BFastPay to accelerate Bitcoin payment validation. BFastPay employs a smart contract called BFPayArbitrator that hosts the payer's security deposit and fulfills the role of a trusted payment arbitrator, guaranteeing that a payee always receives the payment even if attacks occur. BFastPay is a routing-free solution that eliminates the requirement for payment routing found in traditional payment routing networks (e.g., the Lightning Network). Theoretical and experimental results show that BFastPay is able to significantly reduce the Bitcoin payment waiting time (e.g., from 60 minutes to less than 1 second) with nearly no extra operational cost.
"BFastPay" — Xinyu Lei, Guan-Hua Tu, Tian Xie, Sihan Wang. Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy. DOI: 10.1145/3422337.3447823.
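The arbitration idea can be sketched as deposit-backed zero-confirmation acceptance: the payee accepts instantly because a later double-spend lets it claim the payer's locked deposit instead. This plain-Python class is an illustrative stand-in with made-up method names, not the actual BFPayArbitrator contract.

```python
class FastPayArbitrator:
    """Sketch of the arbitration logic: the payer locks a security
    deposit; the payee accepts a zero-confirmation payment immediately,
    since a double-spend lets it recover the amount from the deposit."""
    def __init__(self):
        self.deposits = {}

    def lock_deposit(self, payer, amount):
        self.deposits[payer] = amount

    def safe_to_accept(self, payer, amount):
        # The payment is covered as long as the locked deposit can repay it.
        return self.deposits.get(payer, 0) >= amount

    def claim(self, payer, amount):
        # Invoked with proof of a double-spend: the payee is made whole
        # from the payer's deposit.
        assert self.safe_to_accept(payer, amount)
        self.deposits[payer] -= amount
        return amount

arb = FastPayArbitrator()
arb.lock_deposit("payer", 100)
print(arb.safe_to_accept("payer", 40))   # True: accept instantly, no 6-block wait
print(arb.claim("payer", 40))            # 40 recovered if a double-spend occurs
```

The payee's risk is thus bounded by the deposit rather than by confirmation depth, which is what removes the one-hour wait without requiring a routed payment channel.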