Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00069
Han Zhao, Weihao Cui, Quan Chen, Jingwen Leng, Kai Yu, Deze Zeng, Chao Li, M. Guo
While deep neural network (DNN) models are often trained on GPUs, many companies and research institutes build GPU clusters that are shared by different groups. On such GPU clusters, DNN training jobs also require CPU cores for pre-processing and gradient synchronization. Our investigation shows that the number of cores allocated to a training job significantly impacts its performance. To this end, we characterize representative deep learning models in terms of their requirements for CPU cores under different GPU resource configurations, and study the sensitivity of these models to other CPU-side shared resources. Based on this characterization, we propose CODA, a scheduling system comprising an adaptive CPU allocator, a real-time contention eliminator, and a multi-array job scheduler. Experimental results show that CODA improves GPU utilization by 20.8% on average without increasing the queuing time of CPU jobs.
Title: CODA: Improving Resource Utilization by Slimming and Co-locating DNN and CPU Jobs (2020 IEEE 40th International Conference on Distributed Computing Systems, ICDCS)
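The adaptive CPU allocator above is described only at a high level; as a sketch of the general idea (an assumption for illustration, not CODA's actual algorithm), a greedy allocator can hand out cores one at a time to whichever training job gains the most throughput from its next core:

```python
# Illustrative greedy CPU allocator (not CODA's implementation).
# Each job reports estimated training throughput as a function of
# allocated cores (diminishing returns assumed); spare cores go, one
# at a time, to the job with the largest marginal throughput gain.
import heapq

def allocate_cores(throughput, total_cores):
    """throughput: {job: f(cores) -> samples/sec}; returns {job: cores}."""
    alloc = {job: 1 for job in throughput}          # every job needs >= 1 core
    spare = total_cores - len(alloc)
    # max-heap keyed on the marginal gain of each job's next core
    heap = [(-(f(2) - f(1)), job) for job, f in throughput.items()]
    heapq.heapify(heap)
    while spare > 0 and heap:
        gain, job = heapq.heappop(heap)
        if -gain <= 0:
            continue                                # job no longer benefits
        alloc[job] += 1
        spare -= 1
        f, c = throughput[job], alloc[job]
        heapq.heappush(heap, (-(f(c + 1) - f(c)), job))
    return alloc
```

Under these assumed throughput curves, a job that saturates at 4 cores stops receiving cores once its marginal gain drops to zero, and the remainder flows to jobs that still scale.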
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00184
Nandish Chattopadhyay, Ritabrata Maiti, A. Chattopadhyay
In the data-driven world, emerging technologies such as the Internet of Things (IoT) and other crowd-sourced data sources such as mobile devices generate a tremendous volume of decentralized data that needs to be analyzed for useful insights necessary for reliable decision making. Although the overall data is rich, contributors of such data are reluctant to share it owing to serious concerns about protecting their privacy, while those interested in harvesting the data are constrained by the limited computational resources available to each participant. In this paper, we propose an end-to-end algorithm that combines collaborative, decentralized learning via Federated Learning with differential privacy for each participating client, where clients are typically resource-constrained edge devices. We have developed the proposed infrastructure and analyzed its performance on a machine learning task using standard metrics. We observed that the collaborative learning framework actually increases prediction capability compared to a centrally trained model (by 1-2%), without sharing data among participants, while strong (ϵ, δ) privacy guarantees can be provided at some cost in performance (about 2-4%). Additionally, quantizing the model for deployment on edge devices does not degrade its capability, while enhancing overall system efficiency.
Title: Deploy-able Privacy Preserving Collaborative ML
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00091
Binbing Hou, Feng Chen
Bitcoin is the world’s first blockchain-based, peer-to-peer cryptocurrency system. The tremendously successful Bitcoin system is designed to support reliable, secure, and trusted transactions between untrusted peers. Since its release in 2009, the Bitcoin system has rapidly grown to an unprecedentedly large scale. However, the real-world behaviors of miners and users in the system, and the efficacy of the original Bitcoin system design in field deployment, remain unclear, hindering us from understanding its internals and developing the next-generation cryptocurrency system. In this paper, we study the behaviors of Bitcoin miners and users and their interactions through quantitative analysis of more than nine years of Bitcoin transaction history, from its first release on January 3, 2009, to April 30, 2018. We have analyzed over 300 million transaction records to study how transactions are processed, confirmed, and implemented. We have obtained several critical findings on how miners and users exploit the high degree of freedom provided by the Bitcoin system to pursue their own interests. For example, we find that miners often attempt to maximize their profits even at the cost of system performance, and users may try to speed up transaction processing by mistakenly trading off security for reduced latency. Such unexpected behaviors, to some degree, deviate from the original design purposes of the Bitcoin system and could bring undesirable consequences. Besides revealing several unexpected behaviors of Bitcoin miners and users in the real world, we also discuss the associated system implications as well as future optimization opportunities.
Title: A Study on Nine Years of Bitcoin Transactions: Understanding Real-world Behaviors of Bitcoin Miners and Users
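The user-side behavior described above, trading fees against confirmation latency, is the kind of pattern a transaction-history analysis can surface. A toy version of such an aggregate query (the field names are hypothetical, not the paper's schema):

```python
# Toy transaction-history analysis: compute each transaction's fee
# rate and compare mean confirmation delay for high- vs low-paying
# transactions. Record fields ("fee", "size", "delay") are assumed.
def fee_rate(tx):
    return tx["fee"] / tx["size"]          # e.g. satoshi per byte

def mean_delay_by_feerate(txs, threshold):
    """Mean confirmation delay (in blocks) above/below a fee-rate cutoff."""
    hi = [t["delay"] for t in txs if fee_rate(t) >= threshold]
    lo = [t["delay"] for t in txs if fee_rate(t) < threshold]
    avg = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return avg(hi), avg(lo)
```

On real data, a large gap between the two means would be evidence that users are bidding fees to buy latency, the behavior the study quantifies at scale.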
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00054
Xiaobing Guo, Qingxiao Guo, Min Liu, Yunhao Wang, Yilong Ma, Bofu Yang
Blockchain is multi-centralized, immutable, and traceable, and is thus well suited for distributed storage, privacy, and security management in IoTs. However, most existing research focuses on the integration of public blockchains and IoTs. In fact, problems such as slow consensus, low transmission throughput, and completely open storage on public blockchains are intolerable in IoT scenarios. Although consortium blockchains, represented by Hyperledger Fabric, have improved the transmission rate, their data security relies entirely on a PKI-based certificate mechanism, resulting in transmission inefficiency and privacy leakage. In this paper, a key-derived Controllable Lightweight Secure Certificateless Signature (CLS2) algorithm is proposed to significantly improve transmission efficiency while keeping the computation overhead of the consortium blockchain similar. Compared with existing certificateless signatures, CLS2 achieves more secure transactions: its controllable anonymity and key-derivation mechanism not only prevent public key replacement attacks and forged-signature attacks, but also support hierarchical privacy protection. Armed with CLS2, we design a consortium blockchain security architecture based on Hyperledger Fabric and edge computing. To the best of our knowledge, this is the first implementation of a certificateless signature in a consortium blockchain. We formally prove the security of our schemes in the random oracle model. Specifically, the security of the proposed scheme is reduced to the elliptic curve discrete logarithm problem (ECDLP). Security analysis and experiments in IoT scenarios verify the feasibility and effectiveness of CLS2.
Title: A Certificateless Consortium Blockchain for IoTs
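CLS2 itself is not spelled out in the abstract; as background on the discrete-log reduction it cites, here is a minimal Schnorr signature over a prime-order subgroup, whose unforgeability likewise rests on the discrete logarithm problem (the same hardness family as ECDLP). This is a textbook scheme, not CLS2, and the tiny group is for illustration only.

```python
# Toy Schnorr signature over a Schnorr group p = 2q + 1 (p, q prime),
# with g generating the order-q subgroup. Parameters are far too small
# for real use -- illustration of the discrete-log assumption only.
import hashlib
import secrets

P, Q, G = 2039, 1019, 4

def _hash(r, msg):
    h = hashlib.sha256(str(r).encode() + msg).digest()
    return int.from_bytes(h, "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1       # secret key
    return x, pow(G, x, P)                 # (sk, pk = g^x)

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1       # fresh per-signature nonce
    r = pow(G, k, P)                       # commitment g^k
    e = _hash(r, msg)                      # challenge
    return e, (k + x * e) % Q              # (e, s)

def verify(y, msg, sig):
    e, s = sig
    rv = pow(G, s, P) * pow(y, (Q - e) % Q, P) % P   # g^s * y^(-e) == g^k
    return _hash(rv, msg) == e
```

The verification identity g^s · y^(-e) = g^(k + xe) · g^(-xe) = g^k recovers the commitment, so the recomputed challenge matches; a "certificateless" scheme like CLS2 additionally splits key generation between the user and a key-generation center, which this sketch does not model.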
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00189
Jiaping Yu, Haiwen Chen, Kui Wu, Zhiping Cai, Jinhua Cui
Surveillance cameras have been extensively used in smart cities and high-security zones. Recent incidents have exposed a new, powerful geo-range attack, in which the attacker compromises a group of surveillance cameras located within an area. To tackle this problem, we develop a distributed camera storage system that distributes video content across geographically dispersed surveillance cameras. It generates secure copies of the video content and enhances robustness by judiciously distributing erasure-coded video blocks across optimally chosen surveillance cameras. We implement the distributed storage system for surveillance cameras and evaluate its performance via real-world field tests. Our system is the first solution that can defend against geo-range attacks in a robust and privacy-preserving manner.
Title: A Distributed Storage System for Robust, Privacy-Preserving Surveillance Cameras
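The erasure-coding step above can be illustrated with the simplest possible code, a single XOR parity block (the paper's actual code and placement policy are more sophisticated): split a video into k data blocks plus one parity block, and any single lost block, e.g. one compromised camera, is recoverable from the survivors.

```python
# Simplest erasure code: k data blocks + 1 XOR parity block.
# Losing any one of the k+1 blocks is recoverable by XOR-ing the rest.
def encode(data: bytes, k: int):
    """Split data into k equal (zero-padded) blocks and append parity."""
    size = -(-len(data) // k) or 1                 # ceil(len/k), min 1
    blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(k)]
    parity = bytearray(size)
    for blk in blocks:
        parity = bytearray(p ^ b for p, b in zip(parity, blk))
    return blocks + [bytes(parity)]

def recover(blocks, lost):
    """Rebuild blocks[lost] by XOR-ing every surviving block."""
    size = len(blocks[(lost + 1) % len(blocks)])
    out = bytearray(size)
    for i, blk in enumerate(blocks):
        if i != lost:
            out = bytearray(o ^ b for o, b in zip(out, blk))
    return bytes(out)
```

A single parity block tolerates one loss; defending against a geo-range attack that takes out several nearby cameras calls for codes with more parity (e.g. Reed-Solomon) plus geography-aware placement, which is the system's contribution.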
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00055
John E. Augustine, Seth Gilbert, F. Kuhn, Peter Robinson, S. Sourav
We study the cost of distributed MST construction in a setting where each edge has a latency and a capacity, along with its weight. Edge latencies capture the delay on the links of the communication network, while capacity captures their throughput (the rate at which messages can be sent). Depending on how the edge latencies relate to the edge weights, we provide several tight bounds on the time and messages required to construct an MST. When edge weights exactly correspond with the latencies, we show that, perhaps interestingly, the bottleneck parameter in determining the running time of an algorithm is the total weight W of the MST (rather than the total number of nodes n, as in the standard CONGEST model). That is, we show a tight bound of Θ̃(D + √(W/c)) rounds, where D is the latency diameter of the graph, W is the total weight of the constructed MST, and c is the capacity of the edges. The proposed algorithm sends Õ(m + W) messages, where m, the total number of edges in the network graph under consideration, is a known lower bound on the message complexity of MST construction. We also show that Ω(W) is a lower bound for fast MST constructions. When the edge latencies and the corresponding edge weights are unrelated, and either can take arbitrary values, we show that (unlike the sub-linear time algorithms in the standard CONGEST model on small-diameter graphs) the best achievable time complexity is Θ(D + n/c). However, if we restrict all edges to have equal latency ℓ and capacity c while allowing possibly different weights (which may deviate arbitrarily from ℓ), we give an algorithm that constructs an MST in Õ(D + √(nℓ/c)) time. In each case, we provide nearly matching upper and lower bounds.
Title: Latency, Capacity, and Distributed Minimum Spanning Tree
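The three regimes above can be collected in one place (notation as in the abstract: D latency diameter, W MST weight, c edge capacity, m edge count, ℓ common latency):

```latex
\begin{align*}
\text{weights} = \text{latencies}:
  &\quad \tilde{\Theta}\bigl(D + \sqrt{W/c}\bigr)\ \text{rounds},
   \quad \tilde{O}(m + W)\ \text{messages},\\
\text{arbitrary weights and latencies}:
  &\quad \Theta\bigl(D + n/c\bigr)\ \text{rounds},\\
\text{equal latency } \ell,\ \text{arbitrary weights}:
  &\quad \tilde{O}\bigl(D + \sqrt{n\ell/c}\bigr)\ \text{rounds}.
\end{align*}
```

Setting c = 1 and W = Θ(n) in the first line recovers the familiar Θ̃(D + √n) CONGEST bound, which is why the paper frames W as the bottleneck parameter.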
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00100
A. Kshemkalyani, A. R. Molla, Gokarna Sharma
The dispersion problem on graphs asks k ≤ n robots, placed initially arbitrarily on the nodes of an n-node anonymous graph, to reposition autonomously and reach a configuration in which each robot is on a distinct node of the graph. This problem is of significant interest due to its relationship to other fundamental robot coordination problems, such as exploration, scattering, load balancing, and relocation of self-driving electric cars (robots) to recharge stations (nodes). The objective is to simultaneously minimize (or provide a trade-off between) two fundamental performance metrics: (i) time to achieve dispersion and (ii) memory requirement at each robot. This problem has been relatively well-studied on static graphs. In this paper, we investigate it for the very first time on dynamic graphs. In particular, we show that, even with unlimited memory at each robot and 1-neighborhood knowledge, dispersion is impossible to solve on dynamic graphs in the local communication model, where a robot can only communicate with other robots present at the same node. We then show that, even with unlimited memory at each robot but without 1-neighborhood knowledge, dispersion is impossible to solve in the global communication model, where a robot can communicate with any other robot in the graph, possibly at different nodes. We then consider the global communication model with 1-neighborhood knowledge and establish a tight bound of Θ(k) on the time complexity of solving dispersion in any n-node arbitrary anonymous dynamic graph with Θ(log k) bits of memory at each robot. Finally, we extend the fault-free algorithm to solve dispersion for (crash) faulty robots under the global model with 1-neighborhood knowledge.
Title: Efficient Dispersion of Mobile Robots on Dynamic Graphs
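The dynamic-graph algorithm above is more involved, but the baseline idea is easy to simulate centrally on a static graph: the group of robots walks a DFS together and settles one robot on each newly discovered node. This is a sketch of that baseline, not the authors' algorithm.

```python
# Centrally simulated DFS-style dispersion on a *static* graph:
# k robots start together at `start`; each first-visited node gets
# one settled robot, so after k distinct nodes all robots are placed.
def disperse(adj, start, k):
    """adj: {node: [neighbors]}; returns {node: robot_id}."""
    settled = {}
    stack, seen = [start], set()
    while stack and len(settled) < k:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        settled[v] = len(settled)          # settle the next robot here
        stack.extend(adj[v])               # continue the DFS walk
    return settled
```

Because every settled node is distinct by construction, the final configuration is a valid dispersion whenever the graph has at least k reachable nodes; the hard part on dynamic graphs is that edges the walk relies on can disappear between rounds.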
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00197
Yuanhu Yang, Jing Hu, Yusi Yang
In a social network, network data must be protected effectively. To improve the security and privacy protection of network data, we propose a social network data protection algorithm based on dynamic cyclic encryption and link-equilibrium configuration. The architecture model and routing control protocol of a mobile social network are constructed. The mixed recommendation values of user behavior attribution data are calculated, and data encryption in the social network is realized using a sub-key random amplitude modulation method. The dynamic cyclic encryption algorithm encrypts and transmits the data, and the link-equilibrium configuration method performs adaptive equalization scheduling of the network's data output to improve protection during data transmission. Simulation results show that the proposed algorithm has good encryption ability and improves data storage and transmission capability.
Title: Research on Data Protection Algorithm Based on Social Network
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00158
Sheng Zhang, Yung-Shiuan Liang, Zhuzhong Qian, Mingjun Xiao, Jidong Ge, Jie Wu, Sanglu Lu
In this paper, we consider a fundamental problem: given one mobile charger that can charge multiple sensor nodes simultaneously, how can we schedule it to charge a given WSN so as to maximize energy usage effectiveness (EUE)? We propose a novel charging paradigm, Overlapped Mobile Charging (OMC), the first of its kind to the best of our knowledge. First, OMC clusters sensor nodes into multiple non-overlapping sets using k-means, evaluated by the Davies-Bouldin index, such that the sensor nodes in each set have similar recharging cycles. Second, for each set of sensor nodes, OMC further divides them into multiple overlapping groups and charges each group at different locations for different durations, so that each overlapped sensor node receives exactly its required energy from multiple charging locations.
Title: Overlapped Mobile Charging for Sensor Networks
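The clustering step above scores candidate k-means partitions with the Davies-Bouldin index (lower is better). Scoring a given clustering is straightforward; the sketch below covers only the scoring step, with the k-means step assumed:

```python
# Davies-Bouldin index of a clustering: for each cluster, find the
# worst ratio of combined intra-cluster scatter to inter-centroid
# distance, then average those worst-case ratios over all clusters.
import math

def davies_bouldin(clusters):
    """clusters: list of lists of points (equal-length tuples)."""
    def centroid(pts):
        return tuple(sum(col) / len(pts) for col in zip(*pts))

    cents = [centroid(c) for c in clusters]
    # mean distance of each cluster's points to its own centroid
    scatter = [sum(math.dist(p, cent) for p in c) / len(c)
               for c, cent in zip(clusters, cents)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / math.dist(cents[i], cents[j])
                     for j in range(k) if j != i)
    return total / k
```

To pick the number of recharging-cycle clusters, one would run k-means for a range of k and keep the k whose partition minimizes this index.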
Pub Date : 2020-11-01DOI: 10.1109/ICDCS47774.2020.00183
Kuldeep Sharma, N. Ramakrishnan, Alok Prakash, S. Lam, T. Srikanthan
Pruning of channels in trained deep neural networks has been widely used to implement efficient DNNs that can be deployed on embedded/mobile devices. The majority of existing techniques employ criteria-based sorting of the channels to preserve salient channels during pruning, as well as to automatically determine the pruned network architecture. However, recent studies on widely used DNNs, such as VGG-16, have shown that selecting and preserving salient channels using pruning criteria is not necessary, since the plasticity of the network allows the accuracy to be recovered through fine-tuning. In this work, we further explore the value of ranking criteria in pruning and show that if channels are removed gradually and iteratively, alternating with fine-tuning on the target dataset, ranking criteria are indeed not necessary to select redundant channels. Experimental results confirm that even a random selection of channels for pruning leads to similar performance (accuracy). In addition, we demonstrate that even a simple pruning technique that uniformly removes channels from all layers in the network performs similarly to existing ranking criteria-based approaches, while leading to lower inference time (GFLOPs). Our extensive evaluations include the context of embedded implementations of DNNs - specifically, on small networks such as SqueezeNet and at aggressive pruning percentages. We leverage these insights to propose a GFLOPs-aware iterative pruning strategy that does not rely on any ranking criteria and can further reduce inference time by 15% without sacrificing accuracy.
{"title":"Evaluating the Merits of Ranking in Structured Network Pruning","authors":"Kuldeep Sharma, N. Ramakrishnan, Alok Prakash, S. Lam, T. Srikanthan","doi":"10.1109/ICDCS47774.2020.00183","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00183","url":null,"abstract":"Pruning of channels in trained deep neural networks has been widely used to implement efficient DNNs that can be deployed on embedded/mobile devices. The majority of existing techniques employ criteria-based sorting of the channels to preserve salient channels during pruning, as well as to automatically determine the pruned network architecture. However, recent studies on widely used DNNs, such as VGG-16, have shown that selecting and preserving salient channels using pruning criteria is not necessary, since the plasticity of the network allows the accuracy to be recovered through fine-tuning. In this work, we further explore the value of ranking criteria in pruning and show that if channels are removed gradually and iteratively, alternating with fine-tuning on the target dataset, ranking criteria are indeed not necessary to select redundant channels. Experimental results confirm that even a random selection of channels for pruning leads to similar performance (accuracy). In addition, we demonstrate that even a simple pruning technique that uniformly removes channels from all layers in the network performs similarly to existing ranking criteria-based approaches, while leading to lower inference time (GFLOPs). Our extensive evaluations include the context of embedded implementations of DNNs - specifically, on small networks such as SqueezeNet and at aggressive pruning percentages. We leverage these insights to propose a GFLOPs-aware iterative pruning strategy that does not rely on any ranking criteria and can further reduce inference time by 15% without sacrificing accuracy.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127179275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
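The abstract's core claim — that channels can be removed uniformly and at random, without any saliency ranking, and that pruning is best accounted for in FLOPs — can be illustrated with a small bookkeeping sketch. The layer shapes and the 50% ratio below are hypothetical, and only the channel accounting is modeled (in a real network the corresponding filters and weights would be sliced out and the model fine-tuned afterward):

```python
import random

def conv_flops(in_ch, out_ch, k, h, w):
    """FLOPs for one k x k convolution producing an h x w, out_ch-deep map
    (2 ops per multiply-accumulate)."""
    return 2 * in_ch * out_ch * k * k * h * w

def uniform_random_prune(layers, ratio, seed=0):
    """Remove `ratio` of the output channels of every conv layer, chosen at
    random with no saliency ranking, and propagate the reduced channel count
    to the next layer's inputs. Layers are (in_ch, out_ch, k, h, w) tuples."""
    rng = random.Random(seed)
    pruned = []
    prev_kept = None
    for (in_ch, out_ch, k, h, w) in layers:
        keep = max(1, round(out_ch * (1 - ratio)))
        # criteria-free selection: any `keep`-sized subset would do
        kept_idx = sorted(rng.sample(range(out_ch), keep))
        in_now = len(prev_kept) if prev_kept is not None else in_ch
        pruned.append((in_now, keep, k, h, w))
        prev_kept = kept_idx
    return pruned

# Hypothetical VGG-like stack: (in_ch, out_ch, kernel, out_h, out_w).
layers = [(3, 64, 3, 224, 224), (64, 128, 3, 112, 112), (128, 256, 3, 56, 56)]
pruned = uniform_random_prune(layers, ratio=0.5)

before = sum(conv_flops(*l) for l in layers)
after = sum(conv_flops(*l) for l in pruned)
```

Because both the input and output channel counts of the interior layers are halved, their cost drops to roughly a quarter of the original, which is why the paper reports savings in GFLOPs rather than in parameter counts alone.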