Content authentication of text documents has become a major concern in the current digital era. In this paper, a zero-watermarking algorithm is proposed for content authentication of Chinese text documents. First, the frequencies of the different part-of-speech (POS) tags are obtained through natural language processing, and from them the expected value and entropy are calculated and used as text features. A watermark is then generated from the expected value and entropy by a one-dimensional forward cloud model generator. The watermark is registered and stored with a trusted third party, the Certificate Authority (CA). When authentication is required, we compute the similarity between the watermark of the disputed text and the watermark registered with the CA. Experimental results show that the algorithm is robust against content-preserving attacks while remaining sensitive to malicious tampering.
{"title":"Cloud Model Based Zero-Watermarking Algorithm for Authentication of Text Document","authors":"Xitong Qi, Yuling Liu","doi":"10.1109/CIS.2013.155","DOIUrl":"https://doi.org/10.1109/CIS.2013.155","url":null,"abstract":"Content authentication of text document has become a major concern in the current digital era. In this paper, a zero-watermark algorithm is proposed for Chinese text documents content authentication. Firstly, the frequencies of different part-of-speech (POS) tags are obtained through natural language processing technology. And they are used to calculate the expect value and entropy, which can be as text features. Then a watermark is generated by one-dimensional forward cloud model generator using the expect value and entropy. The watermark is sent to be registered and stored in the trusted third party called Certificate Authority (CA). If authentication is necessary, we calculate the similarity between the watermark of disputed text and its watermark registered in CA. Experimental results show that the algorithm is robust against content-preserving attacks while sensitive to malicious tampering.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132210272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From the perspective of information security engineering, ISO/IEC 15408, one of the ISO/IEC security standards, plays an important role in ensuring the overall security of an information/software system. ISO/IEC 15408 is a complex standard that requires a wide range of participants to perform a large number of tasks and to produce various documents. The standard is periodically reviewed and maintained for ongoing improvement, so the workflow of its tasks and the content and format of its documents change as the standard evolves. Consequently, it is difficult to perform all of the tasks related to ISO/IEC 15408 without supporting tools. However, no previous study has identified which tasks related to ISO/IEC 15408 can be supported by software tools; indeed, it has not even been made clear which tasks and participants exist. This paper presents the first analysis that identifies all software-supportable tasks related to ISO/IEC 15408. The paper enumerates all of the participants, documents, and tasks related to ISO/IEC 15408, shows the relationships among them, and identifies all software-supportable tasks. The analysis and its results form a basis for constructing an information security engineering environment based on ISO/IEC 15408 for ensuring the overall security of an information/software system.
{"title":"An Analysis of Software Supportable Tasks Related with ISO/IEC 15408","authors":"Ning Zhang, A. Suhaimi, Y. Goto, Jingde Cheng","doi":"10.1109/CIS.2013.132","DOIUrl":"https://doi.org/10.1109/CIS.2013.132","url":null,"abstract":"From the perspective of information security engineering, ISO/IEC 15408, one of ISO/IEC security standards, plays an important role to ensure the whole security of an information/software system. ISO/IEC 15408 is a complex security standard which requires involvement of wide range of participants to perform a quite number of tasks as well as various documents. ISO/IEC 15408 is periodically reviewed and maintained to achieve ongoing improvement so that workflow of tasks and contents/format of documents related with the standard are changed according to changes of the standards. Consequently, it is difficult to do all of the tasks related with ISO/IEC 15408 without any supporting tools. However, there is no study to identify which tasks related with ISO/IEC 15408 can be supported by software tools. Indeed, no one makes clear what the tasks and participants exist. This paper presents the first analysis to identify all software supportable tasks related with ISO/IEC 15408. The paper enumerates all of the participants, documents, and tasks related with ISO/IEC 15408 and shows relationship among them, and identifies all software supportable tasks. The analysis and its results become a basis to construct an information security engineering environment based on ISO/IEC 15408 for ensuring the whole security of an information/software system.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131581612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of storage media, such as the emergence of the SSD, traditional approaches to data distribution cannot keep up with the pace of storage device development. For example, traditional RAID improves performance or reliability only within a single type of storage medium, and all data are distributed on that same medium. With the wide use of RAID, the risk of RAID data loss has gradually surfaced. In particular, under RAID-5 read-modify-write (R-M-W), an operating system crash or a write error on the storage medium can corrupt data irrecoverably; this problem is known as the write hole. To address these problems, this paper proposes a new data distribution strategy called the high-reliability and high-performance data distribution strategy (RPDD). RPDD uses a high-performance storage medium as a cache, called the storage cache: most data are first written to the storage cache and then distributed to the other, logically related storage media, so the data end up distributed across a hybrid of storage media. To analyze RPDD in more detail, we integrate it into RAID-5, yielding RAID-5-RPDD. RAID-5-RPDD employs dynamic scheduling, data transfer, and a hybrid consistency mechanism between the SSD and magnetic storage devices to resolve the write hole. As a result, it protects RAID stripe data consistency under R-M-W and increases I/O performance at little additional cost, while also improving the reliability of RAID-5. Simulation results show that RAID-5-RPDD improves I/O performance by about 9% with only a small penalty in resource consumption.
{"title":"A High Reliable and Performance Data Distribution Strategy: A RAID-5 Case Study","authors":"Saifeng Zeng, Ligu Zhu, Lei Zhang","doi":"10.1109/CIS.2013.74","DOIUrl":"https://doi.org/10.1109/CIS.2013.74","url":null,"abstract":"With the development of the storage medium, such as the emerging of the SSD, the tradition way of data distribution can't keep up with the pace of the storage device development. Specifically, for example, Traditional RAID only enhanced the performance or reliability in the single storage medium and all the data are distributed in the same storage medium. But with the wide range of using raid, the problem of RAID data adventure has come to the surface gradually. Furthermore, under the mode of RAID-5 read modify-write (R-M-W), operation system collapse or the storage medium write error would lead to the damage of data, which could not be recovered and this problem is named write hole. According to the above problem, this paper proposes a new data distribution strategy, which is called high reliable and performance data distribution strategy (RPDD). RPDD use the high performance storage medium as a cache which is named storage cache. The most of the data is wrote to the storage cache and then distribute the data to the other logic relationship storage medium. As a result, the data is distributed in a hybrid storage medium. In order to take a better analysis of the RPDD in the next, we have combined RPDD in the RAID-5, which is named RAID-5-RPDD. RAID5-RPDD employs the dynamic scheduling mechanism, data transfer and hybrid consistency mechanism between the SSD and magnetic media storage devices to resolve the problem. In consequence, it can protect the RAID stripe data consistency under the condition of R-M-W and increase the I/O performance without the impact of the cost. In addition, it can improve the reliability of the RAID-5. The simulation test results show that RAID-5-RPDD I/O performance increase at the range of 9% with little penalty of the resource consumption.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131673598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VM (virtual machine)-based software protection provides an effective way to protect software, making it extremely difficult to analyze and crack, and it has become a research focus in software protection. In this paper, we introduce the general design ideas behind this technique. However, some vulnerabilities exist in typical designs; we describe these vulnerabilities in detail and propose improvements to mitigate them. We design and develop a VM-based protection system, named NISLVMP, and carry out experiments with it. The results show that the improvements are effective.
{"title":"NISLVMP: Improved Virtual Machine-Based Software Protection","authors":"Huaijun Wang, Dingyi Fang, Guanghui Li, Xiaoyan Yin, Bo Zhang, Y. Gu","doi":"10.1109/CIS.2013.107","DOIUrl":"https://doi.org/10.1109/CIS.2013.107","url":null,"abstract":"The VM (Virtual Machine)-based software protection technique provides an effective solution to protect software, making it extremely difficult to analyze and crack. This technique has become the research focus of software protection. In this paper, we introduce the general design ideas of this technique. However, there exist some vulnerabilities in the design. We introduce these vulnerabilities in detail and come up with some improvements to mitigate them. We design and develop a VM-based protection system, named NISLVMP, and carry out some experiments with it. The results show that the improvements are effective.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"826 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117177992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are two of the most widely used and important risk measures in financial risk management. Because VaR and CVaR portfolio optimization models are often nonlinear and non-convex, traditional optimization methods usually cannot find their global optimal solutions and instead return only a local optimum. In this paper, uniform design is integrated into an evolutionary algorithm to enhance its search ability. The resulting algorithm has stronger search ability and a higher chance of finding the global optimum. Based on this idea, a new evolutionary algorithm is proposed for VaR and CVaR optimization models. Computer simulations on ten randomly chosen stocks from the Shenzhen Stock Exchange in China are conducted, and the results are analyzed. The experimental results indicate that the proposed algorithm is efficient.
{"title":"A New Evolutionary Algorithm for Portfolio Optimization and Its Application","authors":"Weijia Wang, Jie Hu","doi":"10.1109/CIS.2013.24","DOIUrl":"https://doi.org/10.1109/CIS.2013.24","url":null,"abstract":"Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are two of the most widely used and important risk measures in financial risk management models. Because VaR and CVaR portfolio optimization models are often nonlinear and non-convex optimization models, traditional optimization methods usually can not get their global optimal solutions, instead, they often get a local optimal solution. In this paper, the uniform design is integrated into evolutionary algorithm to enhance the search ability of the evolutionary algorithm. The resulted algorithm will has a strong search ability and has more possibility to get the global optimal solution. Based on this idea, a new evolutionary algorithm is proposed for VaR and CVaR optimization models. Computer simulations on ten randomly chosen stocks from Shenzhen Stock Exchange in China are conducted and the analysis to the results is given. The experiment results indicate the proposed algorithm is efficient.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121551523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper studies the effect of a continuous variational parameter in the objective function coefficients on the optimal solution, as an extension of the sensitivity analysis of fuzzy number linear programming. We prove that if the parameter lies in a certain range, the problem has a unique optimal solution; otherwise it has non-unique optimal solutions or is unbounded or infeasible. Within that range, the optimal value function is a fuzzy linear function of the parameter. Finally, numerical examples demonstrate the theorem and illustrate the computational procedure.
{"title":"Parametric Study of Fuzzy Number Linear Programming","authors":"Yanling Jia, Yan Yang, Yihua Zhong","doi":"10.1109/CIS.2013.78","DOIUrl":"https://doi.org/10.1109/CIS.2013.78","url":null,"abstract":"This paper is devoted to show the effect of the continuous variational parameter in the objective function coefficients on the optimum solution, which is an extension of the sensitivity analysis of fuzzy number linear programming. We prove that if the parameter is in a certain range, this problem has a unique optimal solution, otherwise it has non-unique optimal solutions or is unbounded or infeasible. Then, the optimal value function is a fuzzy linear function of the parameter. Finally, numerical examples demonstrate the theorem and illustrate the computational procedure.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123797705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering is an important technique in data mining. Squeezer is a clustering algorithm for categorical data that is more efficient than most existing algorithms for such data, but it is time-consuming on very large datasets that are distributed across different servers. We therefore take a distributed approach to improving Squeezer and propose a distributed clustering algorithm for categorical data called Coercion. To present detailed complexity results for Coercion, we also conduct an experimental study with standard as well as synthetic data sets to demonstrate the effectiveness of the new algorithm.
{"title":"Coercion: A Distributed Clustering Algorithm for Categorical Data","authors":"Bin Wang, Yang Zhou, Xinhong Hei","doi":"10.1109/CIS.2013.149","DOIUrl":"https://doi.org/10.1109/CIS.2013.149","url":null,"abstract":"Clustering is an important technology in data mining. Squeezer is one such clustering algorithm for categorical data and it is more efficient than most existing algorithms for categorical data. But Squeezer is time consuming for very large datasets which are distributed in different servers. Thus, we employ the distributed thinking to improve Squeezer and a distributed algorithm for categorical data called Coercion is proposed in this paper. In order to present detailed complexity results for Coercion, we also conduct an experimental study with standard as well as synthetic data sets to demonstrate the effectiveness of the new algorithm.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123389262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a technique for the automatic detection of vocal segments in a polyphonic music signal. We use a combination of several characteristics specific to the singing voice as features and employ a Gaussian mixture model (GMM) classifier for vocal/non-vocal classification. With spectral whitening as a pre-processing step, we achieved a performance of 81.3% on the RWC popular music dataset.
{"title":"Automatic Vocal Segments Detection in Popular Music","authors":"Liming Song, Ming Li, Yonghong Yan","doi":"10.1109/CIS.2013.80","DOIUrl":"https://doi.org/10.1109/CIS.2013.80","url":null,"abstract":"We propose a technique for the automatic vocal segments detection in an acoustical polyphonic music signal. We use a combination of several characteristics specific to singing voice as the feature and employ a Gaussian Mixture Model (GMM) classifier for vocal and non-vocal classification. We have employed a pre-processing of spectral whitening and archived a performance of 81.3% over the RWC popular music dataset.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124258965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the world's stock markets become increasingly electronic and intelligent, the volume of accumulated stock data keeps growing, and finding the rules hidden in this mass of data is of great concern. Against this background, this paper explores data mining methods that combine a decision tree algorithm with a clustering algorithm. Specifically, it performs stock forecasting by combining the CART and DBSCAN algorithms to build a predictive model with good applicability, tuned through a large number of parameter-testing experiments. The resulting predictive model achieves high accuracy and provides scientific support for investment decisions.
{"title":"The Research of Stock Predictive Model Based on the Combination of CART and DBSCAN","authors":"Yibu Ma","doi":"10.1109/CIS.2013.40","DOIUrl":"https://doi.org/10.1109/CIS.2013.40","url":null,"abstract":"Along with the development of electronic and intelligence in the world's stock market advances, the accumulation of the stock data grows larger over time. It is of great concern on the ways to find the hidden rules of information in the mass of data. Given the background above, this paper explores the methods of data mining by using the combination of Decision tree algorithm and Clustering algorithm. In addition, this paper accomplishes stock forecasting by combining CART algorithm and DBSCAN algorithm to build a predictive model with good applicability through a large number of experiments for parameter testing. According to the works above, the predictive model has a high accuracy and provides a scientific theory supporting the investment decisions.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114480072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Botnets are one of the most serious threats to Internet security. Isolated, single-point defense technologies cannot effectively counter large-scale, distributed botnet attacks such as spamming and distributed denial-of-service attacks; collaboration among different kinds of security devices is needed. To address this, we propose a conceptual model of a botnet collaborative defense scheme and design a Botnet Collaborative Defense Scheme Description Language (BCDSDL), for which we outline the EBNF expressions. BCDSDL uniformly describes the defense tasks of different kinds of security devices and the relations among those tasks, and provides a language-level interface through which diverse security devices can share information and coordinate their defenses. Finally, we simulate collaborative defense schemes described in BCDSDL in GTNetS. The experimental results show that BCDSDL is efficient and easy to use.
{"title":"A Botnet-Oriented Collaborative Defense Scheme Description Language","authors":"Liming Huan, Yang-Zhe Yu, Liangshuang Lv, Shiying Li, Chunhe Xia","doi":"10.1109/CIS.2013.143","DOIUrl":"https://doi.org/10.1109/CIS.2013.143","url":null,"abstract":"Botnets are one of the most serious threats to Internet security. Isolated and single point security defense technologies can't effectively counteract large-scale, distributed botnet attacks, such as Spamming and Distributed Denial of service attack. Collaboration among different kind of security devices is needed. To solve this problem, we proposed a conceptual model of botnet collaborative defense scheme and designed a Botnet Collaborative Defense Scheme Description Language (BCDSDL).Then, we outlined its EBNF expressions. The BCDSDL can uniformly describe the defense tasks and relations among tasks of different kinds of security devices, and provides a language level interface for diverse security devices achieving information sharing and linkage defense. At last, we realized the simulation of collaborative defense schemes described by BCDSDL in GTNetS. The experiment results show that BCDSDL is efficient and easy to use.","PeriodicalId":294223,"journal":{"name":"2013 Ninth International Conference on Computational Intelligence and Security","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121094594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}