Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343730
Hongsheng Zhou, Xiaoqiang Song, Li Lin, Li Du
The PCE (Path Computation Element) architecture is widely deployed in high-speed MPLS/GMPLS networks, where it facilitates path setup for applications with explicitly defined objective functions. In this paper, we propose a new link-abstraction mechanism that improves how the parent PCE builds its domain topology and further aggregates the abstract topology in the hierarchical PCE method, thus simplifying multi-domain topology aggregation. Considering the multiple constraint factors that affect cross-domain path computation, we also provide a relatively simple selection method that solves the key problem of determining the “domain sequence” during cross-domain path computation. Simulation results show that the proposed method performs better in terms of blocking probability, end-to-end delay, and resource utilization.
Title: Multi-domain routing technology based on PCE for intelligent optical networks. Published in: 2017 6th International Conference on Computer Science and Network Technology (ICCSNT).
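The abstract does not spell out the selection rule itself. As a hedged illustration only, one common baseline is to choose the domain sequence by shortest path over the abstract inter-domain topology; the `domain_graph` shape and the idea of a single aggregated edge weight below are assumptions for illustration, not the authors' algorithm.

```python
import heapq

def select_domain_sequence(domain_graph, src_domain, dst_domain):
    """Pick a domain sequence by running Dijkstra over the abstract
    inter-domain topology. Each edge weight is assumed to aggregate
    the constraint factors (e.g. delay, bandwidth cost) into one number.
    domain_graph: dict mapping domain -> list of (neighbor, weight)."""
    dist = {src_domain: 0.0}
    prev = {}
    heap = [(0.0, src_domain)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst_domain:
            break
        for v, w in domain_graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst_domain not in dist:
        return None  # no inter-domain route exists
    seq = [dst_domain]
    while seq[-1] != src_domain:
        seq.append(prev[seq[-1]])
    return seq[::-1]
```

For example, with `{"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}`, the sequence from A to C is `["A", "B", "C"]`, since going through B is cheaper than the direct abstract link.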
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343705
Junjie Gao, Wei Xiao, Yanan Xie, Feng Gu, Baozhen Yao
This study presents an intelligent fault diagnosis approach for vehicle maintenance that integrates the cloud model with case-based reasoning (CBR). The cloud model transforms uncertain subjective quantitative information into qualitative values for computing case similarity, which greatly simplifies the input conditions in case retrieval and improves the operability of fault diagnosis. An improved Euclidean distance formula is used as the measure of similarity between fault cases; compared with the traditional method, it eliminates similarity deviation and improves the accuracy of case retrieval. A case study on vehicle electrical and electronic equipment demonstrates that the proposed approach is correct and efficient.
Title: An intelligent fault diagnosis approach integrating cloud model and CBR
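A minimal sketch of similarity-based case retrieval with a weighted Euclidean distance, in the spirit of the abstract above. The normalization of attributes to [0, 1] and the 1/(1 + d) similarity form are assumptions for illustration, not the paper's exact formula.

```python
import math

def case_similarity(case_a, case_b, weights):
    """Similarity between two fault cases as 1 / (1 + weighted
    Euclidean distance) over attribute values assumed normalized
    to [0, 1]; identical cases score exactly 1.0."""
    d = math.sqrt(sum(w * (a - b) ** 2
                      for a, b, w in zip(case_a, case_b, weights)))
    return 1.0 / (1.0 + d)

def retrieve(query, case_base, weights):
    """Return the (features, diagnosis) pair most similar to the query."""
    return max(case_base, key=lambda c: case_similarity(query, c[0], weights))
```

Usage: with a case base `[([0.1, 0.9], "fault-A"), ([0.8, 0.2], "fault-B")]`, a query of `[0.75, 0.25]` retrieves "fault-B", the nearest stored case.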
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343724
Chunyu Han, Yongzheng Zhang
Domain names are a fundamental component of the Internet, so an increasing amount of malicious activity, such as spam, botnets, and phishing, is conducted through them. DGAs (Domain Generation Algorithms) are commonly used for domain flux in botnets. In this paper, we propose a method called CODDULM (C&C domains of DGA detection using lexical features and a sparse matrix). First, it finds NXDomains (non-existent domains) in passive DNS traffic to locate suspicious infected hosts. Second, it selects DGA domains by their lexical features according to those hosts. Last, it identifies DGA C&C (Command and Control) domains with an SVM (Support Vector Machine) classifier. Experiments verify the effectiveness and high accuracy of the method.
Title: CODDULM: An approach for detecting C&C domains of DGA on passive DNS traffic
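The lexical-feature step can be illustrated with a few features commonly used for DGA detection. The specific feature set below (length, character entropy, digit ratio, longest consonant run) is an assumption for illustration, not necessarily CODDULM's.

```python
import math
from collections import Counter

def lexical_features(domain):
    """Lexical features of the second-level label, commonly used to
    flag algorithmically generated (DGA) domains: label length,
    character entropy, digit ratio, and longest consonant run.
    DGA labels tend to score high on entropy and consonant runs."""
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    n = len(label)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    digit_ratio = sum(ch.isdigit() for ch in label) / n
    vowels = set("aeiou")
    run = longest = 0
    for ch in label:
        run = run + 1 if (ch.isalpha() and ch not in vowels) else 0
        longest = max(longest, run)
    return [n, entropy, digit_ratio, longest]
```

In the paper's pipeline, such feature vectors for the suspicious domains would then be fed to the SVM classifier.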
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343476
Qing-quan Tan, Qun Liu, Hua-Chun Luo, Bo Liu, Jian Liu
Earthquakes often inflict severe casualties and property losses. Building information data play an important role in earthquake damage evaluation and emergency countermeasures. This paper proposes and implements a novel approach to building information collection based on GIS technology. The system design is concisely presented and the research results are introduced. The results have been applied in real earthquake work and are significant for enhancing earthquake emergency response capability.
Title: Design and implementation of building information collection system for earthquake disaster scenario construction based on GIS
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343472
Shi Wenhe, L. Xiangjun, Li Mailin
Attribute recognition algorithms are widely used in engineering evaluation. However, their confidence parameters are usually chosen empirically, which strongly affects the accuracy of the evaluation. To improve accuracy, this paper proposes a confidence dispersion parameter that describes the influence of the sample data on the confidence parameters of the attribute recognition model. The numerical dispersion characteristics and the statistical distribution of the optimal confidence intervals are analyzed, and the validity of the confidence dispersion index is proved by model derivation and experimental simulation. A data-driven attribute recognition evaluation method is then built on this parameter, with the confidence parameters adaptive to the sample data. Contrastive simulations on a technical design evaluation database of 220 kV overhead line engineering projects verify that the proposed algorithm is feasible and effective, and it offers a new idea for future design quality evaluation of overhead line engineering projects.
Title: An improved attribute recognition algorithm of overhead line engineering evaluation based on confidence dispersion
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343738
Yongqing Qian, Weizhen Chen
Motivated by the fact that many image signals exhibit block sparsity in practical applications, this paper proposes a novel Compressed Sensing (CS) algorithm for block sparse images. A Double-level Binary Tree (DBT) Bayesian model is proposed for the block sparse image, and the relationship between the root node and the leaf nodes of the DBT structure is defined as a “genetic characteristic”. Block clustering of the block sparse image is then performed effectively with the Markov Chain Monte Carlo (MCMC) method. Simulation results show that the proposed method achieves better recovery of block sparse image signals with less computation time.
Title: Double-Level Binary Tree Bayesian compressed sensing for block sparse image
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343678
Wang Jie, Tian Pei, Shi Wen-qing, Xiao Yan
Software reliability test cases are generated from the operational profile, which characterizes how the software is actually used; reliability testing is therefore based on statistics of practical usage. Software reuse not only enhances development efficiency but also helps ensure software quality. By applying reuse analysis to testing, the test effort on modules built from highly reliable components can be reduced, while modules containing more unreliable components can be allocated more test cases. In this paper, we introduce reuse analysis and reliability assessment of operations into the generation of software reliability test cases, so the generation method combines the software operational profile with software reuse.
Title: The generation of software reliability test cases based on software reuse
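The allocation idea (fewer test cases for modules built from reliable reused components, more for unreliable ones) can be sketched with a simple proportional-to-unreliability rule; the abstract does not specify the actual allocation formula, so this rule is an assumption for illustration.

```python
def allocate_test_cases(modules, total_cases):
    """Allocate reliability test cases across modules in proportion to
    their estimated unreliability: modules built from highly reliable
    reused components receive fewer cases. `modules` maps module name
    to the reliability estimate (in [0, 1]) of its reused components.
    Rounding may make the allocated total off by a case or two."""
    weights = {m: 1.0 - r for m, r in modules.items()}
    total_w = sum(weights.values())
    return {m: int(round(total_cases * w / total_w))
            for m, w in weights.items()}
```

For example, with reliabilities `{"reused": 0.95, "new": 0.5, "fragile": 0.25}` and a budget of 100 cases, the heavily reused module gets only a handful of cases while the fragile one gets the majority.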
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343671
Lei Jiang, Wei Chen
In finite element analysis, displacement and stress are typically studied in a particular region, where higher accuracy is desired. Refining the whole mesh or increasing the number of elements enlarges the required storage space, lengthens computing time, and may make some problems intractable. In the local mesh refinement method, only the relevant parts of the initial mesh are refined: with accuracy ensured, only the number of nodes in the local model needs to be increased, without re-meshing the entire model. This raises the element density where it matters, greatly reduces storage space and computing time, and improves the accuracy of the calculation.
Title: A new local optimization method of finite-element mesh
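A one-dimensional toy sketch of local refinement, splitting only the elements that overlap a region of interest; real FEM meshes are 2-D or 3-D with conformity constraints this toy ignores, and it is not the paper's method.

```python
def refine_locally(nodes, region):
    """Locally refine a 1-D mesh: insert a midpoint into every element
    (interval) that overlaps the region of interest, leaving the rest
    of the mesh untouched. `nodes` is a sorted list of coordinates and
    `region` a (lo, hi) pair."""
    lo, hi = region
    refined = [nodes[0]]
    for a, b in zip(nodes, nodes[1:]):
        if b > lo and a < hi:              # element overlaps the region
            refined.append((a + b) / 2.0)  # split it at its midpoint
        refined.append(b)
    return refined
```

Refining `[0.0, 1.0, 2.0, 3.0, 4.0]` over the region (1.0, 2.0) adds a single node at 1.5, so element density grows only where the solution is studied.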
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343679
Song Qiang, Han Xing, Gao Jian
This paper addresses the radar task scheduling problem, for which priority-only approaches ignore the urgency and time importance of tasks. We present a method based on three characteristic parameters: task priority, deadline, and idle time. The algorithm accommodates different load conditions of the radar scheduler by adjusting the parameter weights, and a time-window mechanism ensures that more high-priority tasks can be scheduled within a scheduling interval.
Title: A task scheduling simulation for phased array radar with time windows
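A hedged sketch of a weighted-score scheduler over the three parameters named above; the weight values, the score form, and the greedy admission rule are illustrative assumptions, not the paper's algorithm.

```python
def schedule(tasks, now, interval_end):
    """Order radar tasks by a weighted score of priority, deadline
    urgency, and idle time (slack), then greedily admit each task
    only if it still fits before both its deadline and the end of
    the scheduling interval. Each task: (name, priority, deadline,
    duration). The weights below are hypothetical values that a
    scheduler could retune under different load conditions."""
    w_pri, w_urg, w_idle = 0.5, 0.3, 0.2
    def score(task):
        _, priority, deadline, duration = task
        urgency = 1.0 / max(deadline - now, 1e-9)   # closer deadline, higher
        idle = max(deadline - now - duration, 0.0)  # slack: can afford to wait
        return w_pri * priority + w_urg * urgency - w_idle * idle
    scheduled, t = [], now
    for task in sorted(tasks, key=score, reverse=True):
        name, _, deadline, duration = task
        if t + duration <= min(deadline, interval_end):
            scheduled.append(name)
            t += duration
    return scheduled
```

With tasks `("low", 1, 100, 2)`, `("high", 9, 100, 2)`, and `("urgent", 5, 5, 2)` in a 5-unit interval, the urgent task runs first despite its middling priority, then the high-priority task; the low-priority task no longer fits.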
Pub Date: 2017-10-01 | DOI: 10.1109/ICCSNT.2017.8343695
Xiaoli Sun, Yusong Tan, Q. Wu, Jing Wang
Continuous subgraph pattern matching extends traditional subgraph pattern matching and is attracting increasing interest. It requires near real-time responses and is used in many applications, for example anomaly monitoring in social networks and cyber-attack monitoring in computer networks. Because the dynamic graph changes over time, we consider the temporal subgraph pattern, in which the edges carry temporal relations. In this paper, the Hasse diagram is introduced to represent the temporal relations of the query graph. We then design a Hasse-cache structure and propose a continuous temporal subgraph pattern matching algorithm based on the Hasse diagram. The algorithm uses probability information from the dynamic graph to reduce intermediate results, and performs topology matching and temporal-relation verification simultaneously. Experiments on real-world datasets show that the proposed algorithm achieves 10x speedups over previous approaches.
Title: Hasse diagram based algorithm for continuous temporal subgraph query in graph stream
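The Hasse diagram of a temporal order can be computed as the transitive reduction of the order's DAG, keeping only the covering relations. Below is a small stdlib sketch of that construction (not the paper's Hasse-cache implementation).

```python
def hasse_edges(order_pairs):
    """Build Hasse-diagram edges from a strict partial order given as
    (earlier, later) pairs: drop any pair implied by a longer chain,
    i.e. compute the transitive reduction of the order's DAG."""
    # adjacency of the relation, to be transitively closed
    succ = {}
    for a, b in order_pairs:
        succ.setdefault(a, set()).add(b)
    # transitive closure by repeated expansion (fine for small query graphs)
    changed = True
    while changed:
        changed = False
        for a in list(succ):
            extra = set()
            for b in succ[a]:
                extra |= succ.get(b, set())
            if not extra <= succ[a]:
                succ[a] |= extra
                changed = True
    # keep edge a->b only if no intermediate c with a < c < b
    edges = set()
    for a, b in order_pairs:
        if not any(b in succ.get(c, set())
                   for c in succ.get(a, set()) if c not in (a, b)):
            edges.add((a, b))
    return edges
```

For example, temporal constraints e1 < e2, e2 < e3, e1 < e3 reduce to the two covering edges (e1, e2) and (e2, e3); the pair (e1, e3) is implied and never needs separate verification.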