Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515722
Li Yang, Jian Liu, Chengsheng Pan, Qijie Zou, Debin Wei
As an indispensable part of the global communication system, satellite networks face a sharp increase in the number and variety of services, while users demand more reliable and faster service, i.e., more stringent QoS requirements. Traditional routing algorithms consider only a single QoS metric, which cannot satisfy the multi-QoS requirements of services, and they neglect the priority relationships among different QoS requirements, which degrades overall network utilization. In this paper, the Multi-QoS Routing oPtimization algorithm (MQRP) is proposed, based on PROMETHEE (Preference Ranking Organization METHod for Enrichment Evaluation). MQRP builds on the observation that different types of service on a satellite have different link-attribute requirements. The eigenvector method is used to set a different QoS weight vector for each kind of service, and a multi-attribute link evaluation index is constructed from the preference function and evaluation criteria. The attribute priorities and the preference index are used to establish a path evaluation model. For each service's requirements, the priority of every path in the feasible path set is evaluated, and the Pareto-optimal path is then selected. Simulation results show that the algorithm can distinguish between different QoS requirements, balance network traffic, and markedly improve network resource utilization.
Title: "MQRP: QoS Attribute Decision Optimization for Satellite Network Routing." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
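The PROMETHEE-based ranking step described in the abstract can be illustrated as follows; the candidate paths, attribute values, weight vector, and the usual-criterion preference function are hypothetical, not taken from the paper:

```python
# Minimal PROMETHEE II-style ranking of candidate paths (illustrative;
# the paper's preference functions and weights are not specified here).

def promethee_rank(paths, weights, maximize):
    """Rank alternatives by net outranking flow.

    paths    : dict name -> list of criterion values
    weights  : list of criterion weights (summing to 1)
    maximize : list of bools, True if larger is better for that criterion
    """
    names = list(paths)
    n = len(names)

    def pref(d):
        # Usual (Type I) preference function: strict preference if better.
        return 1.0 if d > 0 else 0.0

    # Aggregated preference index pi(a, b) = sum_j w_j * P_j(a, b)
    pi = {}
    for a in names:
        for b in names:
            if a == b:
                continue
            s = 0.0
            for j, w in enumerate(weights):
                d = paths[a][j] - paths[b][j]
                if not maximize[j]:
                    d = -d
                s += w * pref(d)
            pi[(a, b)] = s

    # Net flow phi(a) = phi_plus(a) - phi_minus(a)
    phi = {a: sum(pi[(a, b)] for b in names if b != a) / (n - 1)
              - sum(pi[(b, a)] for b in names if b != a) / (n - 1)
           for a in names}
    return sorted(names, key=phi.get, reverse=True)

# Criteria: (delay ms, bandwidth Mbps, loss rate); delay and loss minimized.
candidates = {
    "P1": [40, 20, 0.02],
    "P2": [55, 35, 0.01],
    "P3": [70, 10, 0.05],
}
# Delay-sensitive weight vector (hypothetical values).
order = promethee_rank(candidates, [0.6, 0.2, 0.2], [False, True, False])
print(order[0])  # → P1
```

Different weight vectors model different service types: shifting weight onto bandwidth would favor P2 instead.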
Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515735
Penglin Dai, Kai Liu, Ke Xiao, Junhua Wang, Zhaofei Yu, Huanlai Xing
Heterogeneous network resources are expected to cooperate to support data services in vehicular networks. However, a single wireless interface cannot complete a service within the short dwelling time of a vehicle, and network heterogeneity further complicates transmission task assignment among multiple wireless interfaces. To address this issue, we propose a novel architecture in which a scheduler manages heterogeneous network resources in a centralized way. We then formulate the heterogeneous wireless interface management (HWIM) problem, considering both the heterogeneity of wireless interfaces and the delay constraints of service requests. On this basis, we design a heuristic algorithm called Adaptive Task Assignment (ATA), which synthesizes mobility features, broadcast efficiency, and service deadlines into its priority design. ATA can thus adaptively distribute the broadcast task of each request among multiple interfaces, improving overall system performance. Finally, we build a simulation model and implement the proposed algorithm; comprehensive simulation results show its superiority.
Title: "An Adaptive Task Assignment Scheme for Data Service in Heterogeneous Vehicular Networks." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
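The deadline- and heterogeneity-aware assignment idea can be sketched like this; the priority rule (serve with the interface that frees up first, preferring higher rate) and all rates and deadlines are illustrative assumptions, not the paper's exact ATA design:

```python
# Sketch of a deadline/efficiency-aware task split across heterogeneous
# interfaces (hypothetical priority model; not the paper's exact ATA rules).

def assign_broadcast_task(size_mb, deadline_s, interfaces):
    """Split one request's broadcast task among interfaces.

    interfaces: dict name -> (rate_mb_per_s, busy_until_s)
    Returns name -> megabytes assigned, or None if the deadline is missed.
    """
    # Priority: interfaces that free up earliest first, ties by higher rate.
    order = sorted(interfaces,
                   key=lambda i: (interfaces[i][1], -interfaces[i][0]))
    plan, remaining = {}, size_mb
    for name in order:
        rate, busy = interfaces[name]
        window = deadline_s - busy          # usable time before the deadline
        if window <= 0:
            continue
        share = min(remaining, rate * window)
        if share > 0:
            plan[name] = share
            remaining -= share
        if remaining <= 0:
            return plan
    return None  # infeasible within the deadline

plan = assign_broadcast_task(
    size_mb=30, deadline_s=4,
    interfaces={"dsrc": (5, 1.0), "lte": (10, 2.0)})
print(plan)  # → {'dsrc': 15, 'lte': 15}
```

Neither interface alone finishes 30 MB before the deadline here, which is exactly the case that motivates splitting the broadcast task.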
Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515697
Tong Liu, Xubin He, Shakeel Alibhai, Chentao Wu
In modern distributed storage systems, space efficiency and reliability are two major concerns. Contemporary storage systems therefore often employ data deduplication to reduce storage overhead and erasure coding to provide fault tolerance. However, little work has been done to explore the relationship between these two techniques. In this paper, we propose Reference-counter Aware Deduplication (RAD), which feeds information from the deduplication layer into erasure coding to improve garbage-collection performance when deletions occur. RAD encodes data according to the reference counter provided by the deduplication layer, reducing the encoding overhead of garbage collection. Further, since the reference counter also reflects the required reliability level of a data chunk, we explore the trade-offs between storage overhead and reliability among different erasure codes. Experimental results show that RAD improves GC performance by up to 24.8%, and our reliability analysis shows that, for certain data characteristics, RAD provides both better reliability and better storage efficiency than traditional round-robin placement.
Title: "Reference-Counter Aware Deduplication in Erasure-Coded Distributed Storage System." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
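One way to read the RAD idea is that chunks with similar reference counters should share a stripe, so that deletions tend to free whole stripes and garbage collection re-encodes less data. A minimal sketch under that assumption (the grouping rule and the `form_stripes` helper are hypothetical, not the paper's encoding scheme):

```python
# Sketch: group chunks with similar reference counters into the same
# erasure-coded stripe, so deletions (refcount dropping to zero) co-occur
# within stripes and GC re-encodes fewer surviving chunks. Illustrative only.

def form_stripes(chunks, k):
    """chunks: dict chunk_id -> reference counter; k: data chunks per stripe.
    Returns a list of stripes (lists of chunk ids)."""
    # Sort by refcount: rarely shared chunks (likely deleted sooner)
    # end up co-located instead of scattered across stripes.
    ordered = sorted(chunks, key=chunks.get)
    return [ordered[i:i + k] for i in range(0, len(ordered), k)]

refs = {"a": 1, "b": 7, "c": 1, "d": 6, "e": 2, "f": 9}
stripes = form_stripes(refs, k=3)
print(stripes)  # → [['a', 'c', 'e'], ['d', 'b', 'f']]
```

Under round-robin placement the low-refcount chunks a, c, e would be spread over both stripes, so deleting them would force re-encoding in each.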
Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515693
Hailong Hu, Yantao Li, Zhangqian Zhu, Gang Zhou
We present CNNAuth, a two-stream convolutional neural network (CNN) based authentication system that continuously monitors users' behavioral patterns by leveraging the accelerometer and gyroscope on smartphones. We are among the first to exploit two streams, time-domain and frequency-domain data derived from the raw sensor data, to learn and extract universal, effective, and efficient feature representations as inputs to the CNN; the extracted features are further selected by principal component analysis (PCA). With these features, a one-class support vector machine (SVM) is trained as the classifier in the enrollment phase; with the trained classifier and testing features, CNNAuth classifies the current user as legitimate or an impostor in the continuous authentication phase. We evaluate the two-stream CNN and CNNAuth separately: the two-stream CNN achieves an accuracy of 87.14%, while CNNAuth reaches a lowest authentication EER of 2.3% and takes approximately 3 seconds per authentication.
Title: "CNNAuth: Continuous Authentication via Two-Stream Convolutional Neural Networks." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
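The two input streams, time-domain statistics and a frequency-domain spectrum derived from the same sensor window, can be sketched as follows; the concrete features and the naive DFT are illustrative, and the CNN, PCA, and SVM stages are omitted:

```python
# Sketch of the two input streams: time-domain statistics plus a naive
# DFT magnitude spectrum from one sensor-axis window (illustrative; the
# paper feeds such representations into a CNN, which is omitted here).
import cmath
import math

def two_stream_features(window):
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    time_stream = [mean, std, max(window), min(window)]
    # Frequency stream: magnitudes of the first n//2 DFT bins.
    freq_stream = [
        abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(window)))
        for k in range(n // 2)
    ]
    return time_stream, freq_stream

accel_x = [0.0, 1.0, 0.0, -1.0] * 4   # toy 16-sample accelerometer window
t_feats, f_feats = two_stream_features(accel_x)
# The period-4 oscillation shows up as a peak in DFT bin 16/4 = 4.
print(max(range(len(f_feats)), key=f_feats.__getitem__))  # → 4
```

The point of the frequency stream is visible even in this toy window: a periodic gait-like motion that is spread across the time domain concentrates into a single spectral bin.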
Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515732
Donghong Qin, Jiahai Yang, Lina Ge
Multipath inter-domain routing can improve the reliability, robustness, and path diversity of the Internet. This paper proposes a User-customizing oriented Multipath Inter-domain Routing protocol, UMIR. Its basic idea is as follows: based on user routing requirements, it selects nodes from the feasible BGP paths and requests their pathlet information, then builds a local topology from which user routes are computed. Experimental analysis shows that UMIR yields a rich set of candidate, high-quality paths.
Title: "User-Customizing Oriented Multipath Inter-Domain Routing." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
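The basic UMIR workflow, stitching requested pathlet information into a local topology and computing a user-customized route over it, might look like this sketch; the topology, the delay metric, and the `user_route` helper are hypothetical:

```python
# Sketch: build a local topology from pathlets and run Dijkstra with a
# user-chosen metric (toy data; not the UMIR protocol messages themselves).
import heapq

def user_route(pathlets, src, dst, metric):
    """pathlets: list of (u, v, attrs); metric: attrs -> cost."""
    graph = {}
    for u, v, attrs in pathlets:
        graph.setdefault(u, []).append((v, metric(attrs)))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Pathlets gathered from selected nodes on feasible BGP paths (toy data).
pathlets = [("A", "B", {"delay": 10}), ("B", "D", {"delay": 10}),
            ("A", "C", {"delay": 5}), ("C", "D", {"delay": 25})]
print(user_route(pathlets, "A", "D", metric=lambda a: a["delay"]))
```

Because the metric is a user-supplied function, the same local topology can answer different customized requirements (delay, loss, a weighted mix) without new protocol exchanges.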
Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515723
Shuo Wang, Jing Li, Hongjie Zhang, Qiqi Wang
In a cloud datacenter, it is rational to enforce fair allocation of network resources among VDCs (virtual datacenters) under the multi-tenant model. Traditionally, cloud networks are shared in a best-effort manner, making it hard to reason about how network resources are allocated. Prior work concentrates on providing minimum bandwidth guarantees, on achieving work conservation based on a VM-to-VM flow policy or a per-source policy, or both; fair allocation of redundant bandwidth among VDCs, however, is ignored. In this paper, we design NXT-Freedom, a bandwidth-guarantee enforcement framework that divides network capacity based on per-VDC fairness while remaining work-conserving. To ensure per-VDC fair allocation, we propose a hierarchical max-min fairness algorithm. To remain applicable to a non-congestion-free network core and to scale, NXT-Freedom decouples computing the per-VDC allocation from enforcing it. Through a prototype evaluation, we show that NXT-Freedom achieves per-VDC performance isolation and adapts rapidly to flow variation in a cloud datacenter.
Title: "Nxt-Freedom: Considering VDC-based Fairness in Enforcing Bandwidth Guarantees in Cloud Datacenter." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
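A hierarchical max-min fairness computation of the kind the paper proposes can be sketched with classic progressive filling, first across VDCs and then across each VDC's VMs; weights and minimum guarantees are omitted, and all demand values are illustrative:

```python
# Sketch of hierarchical max-min fairness: link capacity is divided
# max-min across VDCs by aggregate demand, then max-min across each
# VDC's VMs. Weights/guarantees from the paper are omitted for brevity.

def max_min(capacity, demands):
    """Classic progressive-filling max-min allocation for a demand list."""
    alloc = [0.0] * len(demands)
    active = [i for i, d in enumerate(demands) if d > 0]
    left = capacity
    while active and left > 1e-9:
        share = left / len(active)
        for i in list(active):
            take = min(share, demands[i] - alloc[i])
            alloc[i] += take
            left -= take
            if alloc[i] >= demands[i] - 1e-9:
                active.remove(i)   # demand satisfied; redistribute the rest
    return alloc

def hierarchical(capacity, vdcs):
    """vdcs: dict name -> list of per-VM demands."""
    names = list(vdcs)
    vdc_alloc = max_min(capacity, [sum(vdcs[n]) for n in names])
    return {n: max_min(a, vdcs[n]) for n, a in zip(names, vdc_alloc)}

out = hierarchical(100, {"vdc1": [30, 50], "vdc2": [10, 20]})
print(out)  # → {'vdc1': [30.0, 40.0], 'vdc2': [10.0, 20.0]}
```

Note the work-conserving behavior: vdc2's demand (30) is fully met, and the leftover capacity flows to vdc1 rather than sitting idle, while the top level still bounds each VDC's share.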
Flash-based SSDs are usually equipped with an onboard cache to improve system performance by smoothing the gap between upper-level applications and lower-level flash chips. Since modern SSDs are composed of multiple flash chips whose loads differ significantly, a cache replacement algorithm should be aware of chip load conditions. Nevertheless, existing cache replacement algorithms consider only reducing the cache miss ratio, so as to minimize I/O requests to the underlying flash memory; none of them considers the load conditions of the flash chips. In this paper, we propose a Load-aware Cache Replacement algorithm, LCR, to improve the performance of flash-based SSDs. The basic idea is to give higher caching priority to blocks on overloaded flash chips. We evaluate our scheme using a trace-driven simulator with multiple real-world workloads; the results show that, compared with the most common algorithm, LRU, and the state-of-the-art algorithm GCaR, LCR reduces the average response time by as much as 39.2% and 12.3%, respectively.
Title: "LCR: Load-Aware Cache Replacement Algorithm for Flash-Based SSDs." Authors: Caiyin Liu, Min Lv, Yubiao Pan, Hao Chen, Yongkun Li, Cheng Li, Yinlong Xu. DOI: 10.1109/NAS.2018.8515727. Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
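The core LCR idea, keeping blocks that live on overloaded chips cached longer, can be sketched as a victim-selection policy; the candidate-window rule and all load numbers below are assumptions for illustration, not the paper's exact algorithm:

```python
# Sketch of load-aware eviction: among the least-recently-used candidate
# blocks, evict one whose home chip is lightly loaded, keeping blocks of
# overloaded chips cached longer (victim-selection policy is illustrative).
from collections import OrderedDict

class LoadAwareCache:
    def __init__(self, capacity, chip_load, window=3):
        self.capacity = capacity
        self.chip_load = chip_load          # chip id -> queued requests
        self.window = window                # LRU candidates examined
        self.data = OrderedDict()           # block -> chip id, LRU order

    def access(self, block, chip):
        if block in self.data:
            self.data.move_to_end(block)    # cache hit: refresh recency
            return True
        if len(self.data) >= self.capacity:
            lru = list(self.data)[: self.window]
            victim = min(lru, key=lambda b: self.chip_load[self.data[b]])
            del self.data[victim]
        self.data[block] = chip             # miss: insert
        return False

load = {0: 9, 1: 1}                         # chip 0 is overloaded
c = LoadAwareCache(capacity=2, chip_load=load)
c.access("b0", 0)
c.access("b1", 1)
c.access("b2", 1)                           # evicts b1: chip 1 is idle
print(sorted(c.data))  # → ['b0', 'b2']
```

Pure LRU would have evicted b0 here; the load-aware policy keeps it because a miss on the overloaded chip 0 would queue behind nine pending requests.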
Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515728
Hao Lv, You Zhou, Fei Wu, Weijun Xiao, Xubin He, Zhonghai Lu, C. Xie
To push NAND flash memory to higher density, manufacturers are aggressively enlarging the flash page size. However, the sizes of I/O requests in a wide range of scenarios do not grow accordingly. Since a page is the unit of flash read/write operations, traditional flash translation layers (FTLs) maintain page-level mapping regularity; small random writes therefore become common, leading to extensive partial logical-page writes. This write inefficiency significantly degrades performance and increases the write amplification of flash storage. In this paper, we first propose a configurable mapping unit, called a minipage, whose size is set to match I/O request sizes. Minipage-level mapping handles small writes more flexibly, at the cost of degraded sequential read performance and a larger mapping table. We then propose a new FTL, PM-FTL, which exploits minipage-level mapping to improve write efficiency while using page-level mapping to limit the costs that minipage-level mapping introduces. Trace-driven simulation results show that, compared to traditional FTLs, PM-FTL reduces write amplification and flash storage response time by an average of 33.4% and 19.1% (up to 57.7% and 34%), respectively, with 16KB flash pages and 4KB minipages.
Title: "Exploiting Minipage-Level Mapping to Improve Write Efficiency of NAND Flash." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
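The benefit of minipage-level mapping for partial-page writes can be sketched as follows; the mapping-table layout and the `MinipageFTL` class are hypothetical simplifications (PM-FTL's hybrid page/minipage scheme and the flash geometry details are omitted):

```python
# Sketch of minipage-level mapping: a 16KB logical page is tracked as four
# 4KB minipages, so a 4KB write remaps one minipage instead of rewriting
# the whole page (simplified; PM-FTL's hybrid page/minipage scheme omitted).

PAGE_KB, MINIPAGE_KB = 16, 4
PER_PAGE = PAGE_KB // MINIPAGE_KB

class MinipageFTL:
    def __init__(self):
        self.table = {}        # (logical page, minipage idx) -> flash location
        self.next_loc = 0      # next free flash slot (log-structured writes)
        self.bytes_written = 0 # kilobytes actually written to flash

    def write(self, offset_kb, length_kb):
        """A write touches only the minipages it covers."""
        first = offset_kb // MINIPAGE_KB
        last = (offset_kb + length_kb - 1) // MINIPAGE_KB
        for mp in range(first, last + 1):
            page, idx = mp // PER_PAGE, mp % PER_PAGE
            self.table[(page, idx)] = self.next_loc   # out-of-place update
            self.next_loc += 1
            self.bytes_written += MINIPAGE_KB

ftl = MinipageFTL()
ftl.write(offset_kb=4, length_kb=4)      # one 4KB random write
print(ftl.bytes_written)                 # → 4 (not a full 16KB page)
```

A page-mapped FTL would have to write all 16KB for the same request (read-modify-write of the full page), which is exactly the write amplification the minipage unit avoids.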
Pub Date: 2018-10-01 | DOI: 10.1109/NAS.2018.8515733
Feng Yi, Huang Yi Cai, F. Z. Xin
In this paper, we present LAPA, a framework for automatically analyzing network security risk and generating attack graphs for potential attacks. The key novelty of our work is that we represent the properties of networks and zero-day vulnerabilities and use a logical reasoning algorithm to generate potential attack paths, determining whether an attacker can exploit these vulnerabilities. To demonstrate its efficacy, we implemented the LAPA framework and compared it with three previous network vulnerability analysis methods. Our analysis results have a low false-negative rate and lower processing time, owing to the worst-case assumption and to logical property specification and reasoning. We also conducted a detailed study of attack-graph generation efficiency for different values of attack path number, attack path depth, and network size, which affect processing time the most. We estimate that LAPA can produce high-quality results for a large portion of networks.
Title: "A Logic-Based Attack Graph for Analyzing Network Security Risk Against Potential Attack." Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
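The logical-reasoning step can be illustrated with Datalog-style forward chaining: facts describe reachability and vulnerabilities, a rule derives newly compromised hosts, and the derivation edges form the attack graph. The facts and the single rule below are toy examples, not LAPA's actual specification:

```python
# Sketch of the logical-reasoning core: forward chaining over facts derives
# which hosts an attacker can compromise; each derivation step contributes
# an attack-graph edge (facts and the rule are toy examples).

facts = {("attacker", "internet"),
         ("reach", "internet", "web"),
         ("reach", "web", "db"),
         ("vuln", "web"), ("vuln", "db")}

def derive(facts):
    """Rule: compromised(H2) <- compromised(H1), reach(H1, H2), vuln(H2).
    Base:   compromised(H)  <- attacker(H)."""
    comp = {f[1] for f in facts if f[0] == "attacker"}
    edges, changed = [], True
    while changed:                      # iterate to a fixed point
        changed = False
        for f in facts:
            if f[0] == "reach" and f[1] in comp and f[2] not in comp \
                    and ("vuln", f[2]) in facts:
                comp.add(f[2])
                edges.append((f[1], f[2]))   # attack-graph edge
                changed = True
    return comp, edges

compromised, graph = derive(facts)
print(sorted(compromised))  # → ['db', 'internet', 'web']
```

Modeling a zero-day is then just one more `("vuln", host)` fact, which is what lets this style of analysis explore worst-case assumptions cheaply.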
An exciting class of networks, the smart IoT, has great potential to improve our daily activities and communication. Source-location privacy is one of the critical problems in wireless sensor networks (WSNs): privacy protections, especially source-location protection, prevent sensor nodes from revealing valuable information about targets. In this paper, we first discuss the current security architecture and attack models. We then propose a cloud-based scheme for protecting the source location, named CPSLP. CPSLP transforms the location of the hotspot to create an obvious traffic inconsistency, adopts multiple sinks so that each packet's destination changes randomly on every transmission, and uses intermediate nodes to make routing paths more varied. Simulation results demonstrate that our scheme can confuse an adversary's detection and reduce the capture probability.
Title: "A Protecting Source-Location Privacy Scheme for Wireless Sensor Networks." Authors: Xu Miao, Guangjie Han, Yu He, Hao Wang, Jinfang Jiang. DOI: 10.1109/NAS.2018.8515721. Published in: 2018 IEEE International Conference on Networking, Architecture and Storage (NAS).
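The traffic-shuffling idea, routing each packet through a random intermediate node to a randomly chosen sink, can be sketched as follows; the topology and the single-relay path shape are illustrative assumptions, not the CPSLP protocol itself:

```python
# Sketch of the CPSLP routing idea: each packet takes source -> random
# intermediate node -> randomly chosen sink, so consecutive packets follow
# different paths and traffic analysis is confused (toy topology/values).
import random

def cpslp_path(source, nodes, sinks, rng):
    intermediate = rng.choice([n for n in nodes if n != source])
    sink = rng.choice(sinks)
    return [source, intermediate, sink]

rng = random.Random(7)                 # seeded only for repeatability
nodes = [f"n{i}" for i in range(6)]
sinks = ["sink_a", "sink_b", "sink_c"]
paths = [cpslp_path("n0", nodes, sinks, rng) for _ in range(20)]
# Successive transmissions use varied relays and end at varied sinks:
print(len({p[1] for p in paths}) > 1, len({p[2] for p in paths}) > 1)
```

An adversary tracing traffic backward from any single sink now sees only a fraction of the source's packets, arriving via changing relays, which is what lowers the capture probability.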