ITF: A Blockchain System with Incentivized Transaction Forwarding
Jiarui Zhang, Yaodong Huang
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00029
Blockchain is a secure, decentralized technology widely used in cryptocurrencies. It provides a distributed, disintermediated system that securely processes and stores transactions between peer devices. Traditionally, every transaction in the blockchain is broadcast throughout the network, which imposes heavy computation and communication overhead on nodes. Nodes may refuse to forward transactions, thereby hindering the consensus of the blockchain. In this paper, we design a blockchain system with Incentivized Transaction Forwarding (ITF). ITF lets nodes share the revenue from transaction fees as an incentive for forwarding transactions. We propose a mechanism that keeps the topology updated for computing incentive allocations, and we develop an incentive allocation algorithm that distributes revenue among the nodes that forward transactions. We analyze the security of ITF and prove that nodes cannot gain unfair advantages through common attacks. Extensive simulations show that our system achieves fair incentive allocation for relay nodes and withstands several attacks from adversaries.
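The fee-sharing idea can be illustrated with a toy allocation. This is a minimal sketch under assumed rules (a fixed miner share and equal per-relay shares), not ITF's actual allocation algorithm:

```python
def allocate_fee(fee, relay_path, miner_share=0.5):
    """Split a transaction fee between the miner and the relay nodes
    that forwarded the transaction (illustrative 50/50 split)."""
    if not relay_path:
        return {"miner": fee}
    miner_cut = fee * miner_share
    per_relay = (fee - miner_cut) / len(relay_path)  # equal relay shares
    alloc = {node: per_relay for node in relay_path}
    alloc["miner"] = miner_cut
    return alloc
```

With a fee of 10 and two relays, each relay receives 2.5 and the miner 5.0; the full fee is always distributed.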
Towards Developing a Global Federated Learning Platform for IoT
Hamza Safri, Mohamed Mehdi Kandi, Youssef Miloudi, C. Bortolaso, D. Trystram, F. Desprez
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00145
Federated learning (FL) is an approach that enables collaborative machine learning (ML) without sharing data over the network. The Internet of Things (IoT) and Industry 4.0 are promising areas for FL adoption. Nevertheless, several challenges must be overcome before FL methods can be deployed in existing large-scale IoT environments. In this paper, we take one step further toward the adoption of FL systems for IoT. More specifically, we developed a prototype that enables distributed ML model deployment, federated task orchestration, and monitoring of system state and model performance. We tested the prototype on a network containing multiple Raspberry Pi devices, for a use case of modeling the states of conveyors in an airport.
Distributionally Robust Federated Learning for Differentially Private Data
Siping Shi, Chuang Hu, Dan Wang, Yifei Zhu, Zhu Han
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00086
Local differential privacy (LDP) is a prominent approach, widely adopted in federated learning (FL) to preserve the privacy of local training data, that provides a rigorous privacy guarantee with computational efficiency in theory. However, a strong privacy guarantee under LDP can degrade the adversarial robustness of the learned global model, and to date very few studies have focused on the interplay between LDP and the adversarial robustness of federated learning. In this paper, we observe that LDP adds random noise to the data to guarantee the privacy of local data, and thus introduces uncertainty into the training dataset of federated learning, which decreases robustness. To address this uncertainty-induced robustness problem, we leverage the distributionally robust optimization (DRO) modeling approach. Specifically, we first formulate a distributionally robust and private federated learning problem (DRPri). While this formulation captures the uncertainty generated by LDP, we show that it is not easily tractable. We therefore transform the DRPri problem into an equivalent problem under a Wasserstein distance-based uncertainty set, named the DRPri-W problem, and design a robust and private federated learning algorithm, RPFL, to solve it. We analyze RPFL and theoretically show that it satisfies differential privacy with a robustness guarantee. We evaluate RPFL by training classifiers on real-world datasets under a set of well-known attacks. Our experimental results show that RPFL improves the robustness of the trained global model under differentially private data by up to 4.33 times.
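The noise injection that creates the uncertainty discussed above can be sketched with the Laplace mechanism commonly used for LDP-style perturbation. The clipping bound and the exact mechanism are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ldp_perturb(x, epsilon, clip=1.0, rng=None):
    """Perturb a client update with Laplace noise: clip each coordinate
    to bound sensitivity, then add noise with scale sensitivity/epsilon.
    Smaller epsilon means stronger privacy but noisier (more uncertain)
    training data -- the trade-off the DRO formulation targets."""
    rng = rng or np.random.default_rng()
    x = np.clip(x, -clip, clip)      # per-coordinate sensitivity is 2*clip
    scale = 2 * clip / epsilon       # Laplace scale b = sensitivity / eps
    return x + rng.laplace(0.0, scale, size=np.shape(x))
```

At a very loose privacy budget (large epsilon) the noise is negligible; tightening the budget inflates the noise scale proportionally.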
EndGraph: An Efficient Distributed Graph Preprocessing System
Tianfeng Liu, Dan Li
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00020
Graph processing mainly includes two stages: preprocessing and algorithm execution. Most previous proposals for improving the performance of graph processing systems focus on the algorithm execution stage and simply ignore the preprocessing overhead. In this work, however, we argue that the cost of preprocessing cannot be ignored, since the preprocessing time is much longer than the algorithm execution time in state-of-the-art systems. We propose EndGraph, a distributed graph preprocessing system that improves preprocessing performance. First, for graph partitioning, we find that existing systems either assign imbalanced preprocessing workloads or spend too much time on partitioning. EndGraph therefore uses a novel chunk-based partition algorithm that balances preprocessing workloads and achieves the theoretical lower bound on time complexity. Second, for graph construction (converting the data layout from an edge array to an adjacency list), existing systems use counting sort, which is inefficient in both computation and communication. EndGraph instead employs a novel two-level graph construction method that carefully decouples graph construction into intra-machine and inter-machine construction. Our extensive evaluation shows that, compared with five state-of-the-art systems (LFGraph, PowerLyra, PowerGraph, D-Galois, and Gemini), EndGraph improves preprocessing performance by 4.72× to 35.76×. To show the generality of EndGraph, we integrate it with D-Galois and Gemini, improving end-to-end graph processing performance (including preprocessing and algorithm execution) by 2.96× to 7.44×.
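The baseline graph-construction step the paper optimizes — converting an edge array into an adjacency list via counting sort — looks roughly like the single-machine sketch below (EndGraph replaces this with its two-level intra-/inter-machine construction; the CSR layout here is an illustrative choice):

```python
def edges_to_csr(num_vertices, edges):
    """Counting-sort conversion of an edge array [(src, dst), ...]
    into CSR-style adjacency lists: row offsets + neighbor array."""
    # 1. Count the out-degree of each source vertex.
    degree = [0] * num_vertices
    for src, _ in edges:
        degree[src] += 1
    # 2. Prefix sums over degrees give each vertex's row offset.
    offsets = [0] * (num_vertices + 1)
    for v in range(num_vertices):
        offsets[v + 1] = offsets[v] + degree[v]
    # 3. Scatter destinations into place (counting sort keyed by src).
    neighbors = [0] * len(edges)
    cursor = list(offsets[:-1])
    for src, dst in edges:
        neighbors[cursor[src]] = dst
        cursor[src] += 1
    return offsets, neighbors
```

Vertex v's neighbors then live in `neighbors[offsets[v]:offsets[v+1]]`; the counting pass, prefix sum, and scatter are the three phases whose communication cost dominates in the distributed setting.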
Aligning before Aggregating: Enabling Cross-domain Federated Learning via Consistent Feature Extraction
Guogang Zhu, Xuefeng Liu, Shaojie Tang, Jianwei Niu
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00083
Federated learning (FL) is an emerging machine learning paradigm in which multiple distributed clients collaboratively train a model without centrally collecting their raw data. In FL settings, it is common for the data on local clients to come from different domains; e.g., photos taken by different mobile phones can vary in intensity and contrast due to differences in imaging parameters. In such a cross-domain case, features extracted from the data of different clients deviate from each other in the feature space, leading to so-called feature shift. Feature shift can reduce the discriminative power of features and degrade the performance of the learned model, yet most existing FL methods are not designed for the cross-domain setting. In this paper, we propose a novel cross-domain FL method named AlignFed. In AlignFed, the model on each client is separated into a personalized feature extractor and a shared classifier. The former extracts consistent features across clients by aligning the features of different clients to specific points in the feature space; the latter aggregates knowledge across clients over this consistent feature space, mitigating the performance degradation caused by feature shift in cross-domain FL. We conduct experiments on commonly used multi-domain datasets, including Digits-Five, Office-Caltech10, and DomainNet. The experimental results demonstrate that AlignFed outperforms state-of-the-art FL methods.
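The alignment idea — pulling each client's features toward fixed per-class points in feature space — can be sketched as a simple penalty term. The anchor choice and squared-distance loss here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def alignment_loss(features, labels, anchors):
    """Mean squared distance between each extracted feature and the fixed
    anchor of its class. Minimizing this on every client pushes all
    clients' extractors toward a shared, consistent feature space."""
    diffs = features - anchors[labels]          # (n, d) residuals
    return float(np.mean(np.sum(diffs ** 2, axis=1)))
```

The loss is zero when every client maps its samples exactly onto the class anchors, which is the regime where a shared classifier sees a domain-invariant feature space.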
ScalaCert: Scalability-Oriented PKI with Redactable Consortium Blockchain Enabled "On-Cert" Certificate Revocation
Xin Luo, Zhuo Xu, Kaiping Xue, Qiantong Jiang, Ruidong Li, David S. L. Wei
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00121
As vouchers of identity, digital certificates and the public key infrastructure (PKI) have always played a vital role in providing authentication services. In recent years, with increasing attacks on traditional centralized PKIs and the extensive deployment of blockchains, researchers have made significant progress toward secure, decentralized, blockchain-based PKIs. Although blockchain enhances security, it introduces new scalability problems due to the inherent limitations of its data structure and consensus mechanism, which become much more severe under the massive access of the 5G and B5G era. In this paper, we propose ScalaCert, which mitigates the scalability problems of blockchain-based PKIs by using a redactable blockchain for "on-cert" revocation. Specifically, we use the redactable blockchain to record revocation information directly on the original certificate ("on-cert") and remove additional data structures such as the CRL, significantly reducing storage overhead. Moreover, the combination of redactable and consortium blockchains enables a new kind of attack, which we call the deception-of-versions (DoV) attack; to defend against it, we design a freshness check mechanism based on random block-node checks (RBNC). Security and performance analyses show that ScalaCert is sufficiently secure and effectively solves the scalability problem of blockchain-based PKI systems.
Toward Low-Overhead Inter-Switch Coordination in Network-Wide Data Plane Program Deployment
Xiang Chen, Hongyan Liu, Qingjiang Xiao, Kaiwei Guo, Tingxin Sun, Xiang Ling, Xuan Liu, Qun Huang, Dong Zhang, Haifeng Zhou, Fan Zhang, Chunming Wu
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00043
In modern networks, administrators realize desired functions, such as network measurement, in data plane programs. They often employ the network-wide program deployment paradigm, which decomposes input programs into match-action tables (MATs) and deploys each MAT on a specific programmable switch. Since MATs may be deployed on different switches, existing solutions rely on inter-switch coordination that uses per-packet header space to deliver crucial packet-processing information among switches. However, such coordination introduces non-trivial per-packet byte overhead, leading to significant end-to-end network performance degradation. In this paper, we propose Hermes, a program deployment framework that minimizes this per-packet byte overhead. The key idea of Hermes is to formulate network-wide program deployment as a mixed-integer linear programming (MILP) problem whose objective is minimizing the per-packet byte overhead. In view of the NP-hardness of the MILP problem, Hermes further offers a greedy heuristic that solves the problem in a near-optimal and timely manner. We have implemented Hermes on Tofino-based switches. Our experiments show that, compared to existing frameworks, Hermes decreases the per-packet byte overhead by 156 bytes while preserving end-to-end performance in terms of flow completion time and goodput.
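The objective being minimized can be seen in miniature: every boundary where consecutive MATs land on different switches costs per-packet header bytes. The sketch below is only a greedy illustration of that cost model, with a hypothetical per-switch table capacity and per-boundary byte cost; Hermes itself solves the full MILP:

```python
def greedy_place(mat_sizes, capacity, coord_bytes):
    """Place a chain of MATs on switches along a path, keeping consecutive
    MATs co-located until the current switch's capacity is exhausted.
    Returns the placement and the resulting per-packet byte overhead."""
    placement, used, switch = [], 0, 0
    for size in mat_sizes:
        if used + size > capacity:   # spill onto the next switch
            switch += 1
            used = 0
        placement.append(switch)
        used += size
    # each switch boundary carries coord_bytes of header per packet
    boundaries = sum(1 for a, b in zip(placement, placement[1:]) if a != b)
    return placement, boundaries * coord_bytes
```

Three MATs of size 2 on switches of capacity 4 fit as [0, 0, 1]: one boundary, so one unit of coordination bytes per packet instead of two.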
Profiler: Distributed Model to Detect Phishing
Mariya Shmalko, A. Abuadbba, R. Gaire, Tingmin Wu, Hye-young Paik, Surya Nepal
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00152
Many machine learning (ML) based phishing detection algorithms are not adept at recognising "concept drift": attackers introduce small changes in the statistical characteristics of their phishing attempts to bypass detection. This leads to frequent false positives and false negatives in classification, and a reliance on manual reporting of phishing by users. Profiler is a distributed phishing risk assessment tool that detects email phishing by combining three email profiling dimensions: (1) threat level, (2) cognitive manipulation, and (3) email content type. Unlike pure ML-based approaches, Profiler does not require large data sets to be effective, and evaluations on real-world data sets show that it can be used alongside ML algorithms to mitigate the impact of concept drift.
BlinkRadar: Non-Intrusive Driver Eye-Blink Detection with UWB Radar
Jingyang Hu, Hongbo Jiang, Daibo Liu, Zhu Xiao, S. Dustdar, Jiangchuan Liu, Geyong Min
Pub Date: 2022-07-01 | DOI: 10.1109/ICDCS54860.2022.00104
The eye-blink pattern is crucial for diagnosing drowsy driving, which has become an increasingly serious social issue. However, traditional methods (e.g., based on EOG, cameras, wearables, or acoustic sensors) are poorly suited to real-life scenarios because they cannot reconcile user-friendliness, monitoring accuracy, and privacy preservation. In this work, we design and implement BlinkRadar, a low-cost, contact-free system for fine-grained eye-blink monitoring while driving, using a customized impulse-radio ultra-wideband (IR-UWB) radar whose ultra-wide bandwidth provides superior spatial resolution. BlinkRadar leverages the IR-UWB radar for contact-free sensing and fully exploits the complex radar signal for data augmentation. It singles out the eye-blink-induced waveforms modulated by body movements and vehicle status, overcoming both the interference caused by the unique characteristics of blinking (i.e., subtle, sparse, and non-periodic) and that from the human target itself and surrounding objects. We evaluate BlinkRadar in a laboratory environment and during actual road tests. Experimental results show that BlinkRadar achieves robust performance, with a median drowsy-driving detection accuracy of 92.2% and an eye-blink detection accuracy of 95.5%.
Pub Date : 2022-07-01DOI: 10.1109/ICDCS54860.2022.00034
Jiang Xiao, Shijie Zhang, Zhiwei Zhang, Bo Li, Xiaohai Dai, Hai Jin
A Directed Acyclic Graph (DAG)-based blockchain, with its inherent parallel structure, can significantly improve throughput over conventional blockchains. This improvement can be further enhanced through concurrent transaction processing. Doing so, however, raises new challenges in concurrency control design: concurrent reads and writes to the same address become more frequent in a DAG-based blockchain, leading to a considerable rise in potential conflicts. One critical problem, therefore, is how to effectively and efficiently detect and order conflicting transactions. In this work, for the first time, we aim to improve system throughput and processing latency by exploiting the address dependencies among different transactions. We propose NEZHA, an efficient concurrency control scheme for DAG-based blockchains. Specifically, NEZHA constructs an address-based conflict graph (ACG), with address dependencies as edges, to capture all conflicting transactions. To generate a total order over transactions, we propose a hierarchical sorting (HS) algorithm that derives sorting ranks of addresses from the ACG and sorts the transactions on each address. Extensive experiments demonstrate that, even under high data contention, NEZHA increases throughput over the conventional conflict-graph scheme by up to 8×, while decreasing transaction processing latency by up to 10×.
{"title":"Nezha: Exploiting Concurrency for Transaction Processing in DAG-based Blockchains","authors":"Jiang Xiao, Shijie Zhang, Zhiwei Zhang, Bo Li, Xiaohai Dai, Hai Jin","doi":"10.1109/ICDCS54860.2022.00034","DOIUrl":"https://doi.org/10.1109/ICDCS54860.2022.00034","url":null,"abstract":"A Directed Acyclic Graph (DAG)-based blockchain, with its inherent parallel structure, can significantly improve throughput over conventional blockchains. This improvement can be further enhanced through concurrent transaction processing. Doing so, however, raises new challenges in concurrency control design: concurrent reads and writes to the same address become more frequent in a DAG-based blockchain, leading to a considerable rise in potential conflicts. One critical problem, therefore, is how to effectively and efficiently detect and order conflicting transactions. In this work, for the first time, we aim to improve system throughput and processing latency by exploiting the address dependencies among different transactions. We propose NEZHA, an efficient concurrency control scheme for DAG-based blockchains. Specifically, NEZHA constructs an address-based conflict graph (ACG), with address dependencies as edges, to capture all conflicting transactions. To generate a total order over transactions, we propose a hierarchical sorting (HS) algorithm that derives sorting ranks of addresses from the ACG and sorts the transactions on each address. Extensive experiments demonstrate that, even under high data contention, NEZHA increases throughput over the conventional conflict-graph scheme by up to 8×, while decreasing transaction processing latency by up to 10×.","PeriodicalId":225883,"journal":{"name":"2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)","volume":"3 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113959874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
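The NEZHA abstract above names two concrete steps, building an address-based conflict graph (ACG) and hierarchically ranking addresses to totally order transactions, which a minimal sketch can make concrete. The paper's exact algorithm is not reproduced here: the transaction encoding (read-set/write-set pairs), the edge rule (an edge a→b whenever some transaction reads a and writes b), and the Kahn-style layering are all illustrative assumptions.

```python
from collections import defaultdict, deque

def build_acg(txs):
    """Build an address-based conflict graph: nodes are addresses, with an
    edge a -> b whenever some transaction reads a and writes b (an assumed
    dependency rule; NEZHA's actual edge rule may differ)."""
    edges = defaultdict(set)
    addrs = set()
    for reads, writes in txs:
        addrs |= reads | writes
        for a in reads:
            for b in writes:
                if a != b:
                    edges[a].add(b)
    return addrs, edges

def sorting_ranks(addrs, edges):
    """Assign each address a rank via Kahn-style topological layering.
    Cyclic dependencies would need extra conflict resolution, omitted here."""
    indeg = {a: 0 for a in addrs}
    for a in edges:
        for b in edges[a]:
            indeg[b] += 1
    frontier = deque(a for a in addrs if indeg[a] == 0)
    rank, level = {}, 0
    while frontier:
        nxt = deque()
        for a in frontier:
            rank[a] = level
            for b in edges[a]:
                indeg[b] -= 1
                if indeg[b] == 0:
                    nxt.append(b)
        frontier, level = nxt, level + 1
    return rank

def order_transactions(txs, rank):
    """Totally order transactions by the minimum rank among their written
    addresses, breaking ties by arrival index."""
    def key(item):
        i, (_reads, writes) = item
        return (min((rank.get(w, 0) for w in writes), default=0), i)
    return [i for i, _ in sorted(enumerate(txs), key=key)]
```

With three transactions `[({"x"}, {"y"}), ({"y"}, {"z"}), (set(), {"x"})]`, the ACG contains x→y and y→z, the ranks come out x=0, y=1, z=2, and the resulting total order is `[2, 0, 1]`: the write to x is scheduled before the transaction that reads x, and so on down the dependency chain.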