With the development of storage and computing technologies, digital content such as music, digital movies, games, cartoons, and DV is becoming increasingly popular for entertainment, and how to control the rights to this content is now a very important issue. In this paper, a secure and flexible Content Protection Secure DRM scheme (named CPSec DRM) is proposed for online/offline rights management, in which content objects (COs) and rights objects (ROs) are separated: the COs are encrypted with a content encryption key (CEK), while the ROs encapsulate a rights encryption key (REK) that is bound to the end user's device information, so even if ROs are illegally copied and spread, they will not pass license verification. For domain and offline license management, a license transfer scheme is developed for N-total sub-license redistribution: once the sub-licenses have been released as N copies, the master license can no longer be transferred or redistributed. The proposed CPSec DRM scheme not only supports online rights management but also works in an offline mode for pervasive usage.
{"title":"Secure and Flexible Digital Rights Management in a Pervasive Usage Mode","authors":"Zhaofeng Ma, Yixian Yang, Xinxin Niu","doi":"10.1109/CIS.2007.204","DOIUrl":"https://doi.org/10.1109/CIS.2007.204","url":null,"abstract":"With the development of storage and computing technologies, digital content such as music, digital movies, games, cartoon and DV et al gets more and more popular for entertainment, how to control the rights of the digital content is now becoming a very important issue. In this paper, a secure and flexible Content Protection Secure DRM scheme(named CPSec DRM) is proposed for online/ offline rights management, in which content objects(COs) and rights objects(ROs) were separated respectively, the COs was encrypted by content encryption key(CEK), while ROs was encapsulated rights encryption key(REK) that is related to the device information of end user's, thus even if ROs were illegally copied and spread, however it will not pass the authentication of license verification. As for domain and offline license management, a license transfer scheme was developed for N-total sub-licenses redistribution, once the sub- licenses was released to N copies, then the master license can not be transferred and redistributed again. The proposed CPSec DRM scheme does not only support online rights management, but can still works in an offline mode for pervasive usage.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115679058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
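The device binding described in this abstract can be sketched in a few lines. The key derivation and XOR wrapping below are illustrative assumptions, not the paper's actual construction; the point is only that a rights object wrapped against one device's information fails to unwrap anywhere else.

```python
import hashlib

def bind_license(cek: bytes, device_id: str) -> bytes:
    # Hypothetical REK derivation: hash the end user's device
    # information, then mask the CEK with it so the key carried in the
    # RO only unwraps on the device the license was issued to.
    rek = hashlib.sha256(device_id.encode()).digest()
    return bytes(a ^ b for a, b in zip(cek, rek))

def unwrap_license(wrapped: bytes, device_id: str) -> bytes:
    # A copied RO presented by another device derives a different REK,
    # so the recovered CEK is wrong and license verification fails.
    rek = hashlib.sha256(device_id.encode()).digest()
    return bytes(a ^ b for a, b in zip(wrapped, rek))
```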
Many efforts have been made to model the survivability of various systems, but few of them focus on the control or application layer of telecommunication networks, and even fewer can be realized and applied. In this paper, an applicable method is provided to model the control layer of a telecommunication network such as IMS (IP Multimedia Subsystem), which has complicated message exchanges. How to calculate survivability quantitatively is also introduced.
{"title":"A Quantitative Security Model of IMS System","authors":"Xiaofeng Qiu, Ning Zhi, Xinxin Niu","doi":"10.1109/CIS.2007.197","DOIUrl":"https://doi.org/10.1109/CIS.2007.197","url":null,"abstract":"Many efforts have been made on survivability model of various systems, but little of them focus on control or application layer of telecommunication network, even less can be realized and applied. In this paper, an applicable method is provided to model the control layer of telecommunication network such as IMS (IP Multimedia System) which has complicated message exchanges. How to calculate survivability quan- titatively is also introduced.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115761755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Question classification is one of the most important subtasks in Question Answering systems. Question taxonomies are becoming larger and more fine-grained for better answer generation. Many approaches to question classification have been proposed and achieve reasonable results. However, all previous approaches use some learning algorithm to learn a classifier from binary feature vectors extracted from a small set of labeled examples. In this paper we propose a feature-weighting model which assigns different weights to features instead of simple binary values. The main characteristic of this model is that it assigns more reasonable weights to features: these weights can be used to differentiate features from each other according to their contribution to question classification. Furthermore, features are weighted depending not only on a small labeled question collection but also on a large unlabeled question collection. Experimental results show that with this new feature-weighting model the SVM-based classifier outperforms the one without it to some extent.
{"title":"An Effective Feature-Weighting Model for Question Classification","authors":"Peng Huang, Jiajun Bu, Chun Chen, Guang Qiu","doi":"10.1109/CIS.2007.12","DOIUrl":"https://doi.org/10.1109/CIS.2007.12","url":null,"abstract":"Question classification is one of the most important sub- tasks in Question Answering systems. Now question tax- onomy is getting larger and more fine-grained for better answer generation. Many approaches to question classifi- cation have been proposed and achieve reasonable results. However, all previous approaches use certain learning al- gorithm to learn a classifier from binary feature vectors, extracted from small size of labeled examples. In this pa- per we propose a feature-weighting model which assigns different weights to features instead of simple binary val- ues. The main characteristic of this model is assigning more reasonable weight to features: these weights can be used to differentiate features each other according to their contri- bution to question classification. Furthermore, features are weighted depending on not only small labeled question col- lection but also large unlabeled question collection. Exper- imental results show that with this new feature-weighting model the SVM-based classifier outperforms the one with- out it to some extent.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116801260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
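A minimal sketch of a feature-weighting scheme in this spirit: a class-skew term from the small labeled collection multiplied by an idf-style term from the large unlabeled collection. The combination below is an assumption for illustration; the paper's actual formula is not reproduced here.

```python
import math
from collections import Counter, defaultdict

def feature_weights(labeled, unlabeled):
    # labeled: list of (tokens, class); unlabeled: list of token lists.
    # Document frequency over the large unlabeled collection -> idf term.
    df = Counter()
    for toks in unlabeled:
        df.update(set(toks))
    n = len(unlabeled)
    # Class concentration from the small labeled collection.
    per_class = defaultdict(Counter)
    classes = set()
    for toks, c in labeled:
        classes.add(c)
        per_class[c].update(set(toks))
    weights = {}
    for f in df:
        counts = [per_class[c][f] for c in classes]
        total = sum(counts)
        # Skew in [1/|classes|, 1]: how concentrated f is in one class.
        skew = max(counts) / total if total else 1.0 / len(classes)
        idf = math.log((n + 1) / (df[f] + 1)) + 1.0
        weights[f] = skew * idf
    return weights
```

Rarer class-concentrated features end up with larger weights than ubiquitous ones, which is the behavior the abstract attributes to the model.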
With the development of cipher instruction set extensions, the design of s-box instructions has received more and more attention. An s-box instruction named SboxPer is designed in this paper for fast and efficient implementation of s-boxes in common symmetric-key ciphers. By introducing PLUT, the instruction improves the efficiency of table lookup. A half-byte permutation is performed after the table lookup, so no additional instruction is needed to obtain the final result of the s-box operation. Performance estimates show that this instruction can significantly improve the execution speed of s-box lookup in symmetric-key ciphers while occupying little memory space.
{"title":"Design of an Instruction for Fast and Efficient S-Box Implementation","authors":"Meifeng Li, Guanzhong Dai, Hang Liu, Wei Hu","doi":"10.1109/CIS.2007.103","DOIUrl":"https://doi.org/10.1109/CIS.2007.103","url":null,"abstract":"With the development of cipher instruction set extension, design of s-box instruction has received more and more attention. An s-box instruction named SboxPer is designed in this paper for fast and efficient implementation of s-boxes in common symmetric-key ciphers. By introducing PLUT, this instruction improves the efficiency of table lookup. Half-byte permutation is performed after the table lookup operation, which leads to the result that no additional instruction is needed to obtain the final result of S-box operation. Results of performance estimate show that this instruction can improve the execution speed of s- box lookup in symmetric-key ciphers significantly and occupies little memory space.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126191503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
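The instruction's behavior can be mimicked in software. The semantics below are assumptions for illustration (four byte lookups into one 256-entry table performed "in parallel", then a permutation of the eight result nibbles); the paper's exact encoding and operand layout are not given here.

```python
def sbox_per(x: int, sbox: list, perm: list) -> int:
    # Table-lookup step: each byte of the 32-bit word is substituted
    # through the s-box (the PLUT lookup).
    out_bytes = [sbox[(x >> (8 * i)) & 0xFF] for i in range(4)]
    # Split the substituted word into eight nibbles, low nibble first.
    nibbles = []
    for b in out_bytes:
        nibbles += [b & 0xF, (b >> 4) & 0xF]
    # Half-byte permutation: output nibble i takes source nibble perm[i],
    # so the final s-box result needs no follow-up shuffle instruction.
    result = 0
    for i, src in enumerate(perm):
        result |= nibbles[src] << (4 * i)
    return result
```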
Finding a smooth formulation of the support vector machine is important. This paper studies a smoothing support vector machine (SVM) based on a quarter penalty function. We transform the SVM optimization problem into an unconstrained, nonsmooth optimization problem via the quarter penalty function. Then we define a once-differentiable function that approximately smooths the penalty function, obtaining an unconstrained, smooth optimization problem. Through error analysis, an approximate solution of the SVM can be obtained by solving the approximately smooth penalty optimization problem without constraints. Numerical experiments show that our smoothing SVM is efficient.
{"title":"A Smoothing Support Vector Machine Based on Quarter Penalty Function","authors":"M. Jiang, Z. Meng, Gengui Zhou","doi":"10.1109/CIS.2007.92","DOIUrl":"https://doi.org/10.1109/CIS.2007.92","url":null,"abstract":"It is very important to find out a smoothing support vec- tor machine. This paper studies a smoothing support vec- tor machine (SVM) by using quarter penalty function. We introduce the optimization problem of SVM with an uncon- strained and nonsmooth optimization problem via quarter penalty function. Then, we define a one-order differentiable function to approximately smooth the penalty function, and get an unconstrained and smooth optimization problem. By error analysis, we may obtain approximate solution of SVM by solving its approximately smooth penalty optimization problem without constraints. The numerical experiment shows that our smoothing SVM is efficient.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125556499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
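The overall recipe — replace the nonsmooth penalty with a once-differentiable approximation and minimize the resulting unconstrained objective by gradient descent — can be sketched as follows. The smoothed function below is a generic Huber-style stand-in, not the paper's quarter penalty.

```python
def smoothed_hinge(t, eps=0.1):
    # Once-differentiable approximation to the nonsmooth max(0, t);
    # a stand-in for the paper's smoothed quarter penalty.
    if t <= 0.0:
        return 0.0
    if t >= eps:
        return t - eps / 2.0
    return t * t / (2.0 * eps)

def train_svm(X, y, C=1.0, eps=0.1, lr=0.01, iters=2000):
    # Minimize 0.5*||w||^2 + C * sum p(1 - y_i*(w.x_i + b)) by plain
    # gradient descent on the unconstrained smooth objective.
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    def dpen(t):  # derivative of smoothed_hinge
        if t <= 0.0:
            return 0.0
        return 1.0 if t >= eps else t / eps
    for _ in range(iters):
        gw, gb = list(w), 0.0  # gradient of 0.5*||w||^2 is w
        for xi, yi in zip(X, y):
            m = 1.0 - yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = C * dpen(m)
            for j in range(d):
                gw[j] -= g * yi * xi[j]
            gb -= g * yi
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b
```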
Unreliable failure detectors have been an important abstraction for building dependable distributed applications over asynchronous distributed systems subject to faults. Their implementations are commonly based on timeouts to ensure algorithm termination. However, for systems built on the Internet it is hard to estimate this timeout value due to traffic variations. To increase performance, self-tuned failure detectors dynamically adapt their timeouts to the observed communication delay plus a safety margin. In this paper, we propose a new implementation of a failure detector. It is a variant of the heartbeat failure detector which is adaptable and can support scalable applications. In this implementation we dissociate two aspects: a basic estimation of the expected arrival date, to provide a short detection time, and an adaptation of the quality of service. The latter is based on two principles: an adaptation layer and a heuristic that adapts the sending period of "I am alive" messages.
{"title":"Implementation and Performance Evaluation of an Adaptable Failure Detector for Distributed System","authors":"Jing-li Zhou, Guang Yang, Lijun Dong, Gang Liu","doi":"10.1109/CIS.2007.61","DOIUrl":"https://doi.org/10.1109/CIS.2007.61","url":null,"abstract":"Unreliable failure detectors have been an important abstraction to build dependable distributed applications over asynchronous distributed systems subject to faults. Their implementations are commonly based on timeouts to ensure algorithm termination. However, for systems built on the Internet, it is hard to estimate this time value due to traffic variations. In order to increase the performance, self-tuned failure detectors dynamically adapt their timeouts to the communication delay behavior added of a safety margin. In this paper, we propose a new implementation of a failure detector. This implementation is a variant of the heartbeat failure detector which is adaptable and can support scalable applications. In this implementation we dissociate two aspects: a basic estimation of the expected arrival date to provide a short detection time, and an adaptation of the quality of service. The latter is based on two principles: an adaptation layer and a heuristic to adapt the sending period of \"I am alive\" messages.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"34 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114036565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
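The estimation layer can be sketched with an illustrative estimator: the next heartbeat's expected arrival is taken as the running mean of past inter-arrival times, plus a safety margin proportional to their standard deviation. The QoS adaptation layer and the sending-period heuristic are not reproduced here.

```python
class AdaptiveTimeout:
    # Assumes at least two heartbeats have been observed before
    # next_deadline() is queried.
    def __init__(self, margin_factor=2.0):
        self.arrivals = []
        self.margin_factor = margin_factor

    def heartbeat(self, t):
        # Record the arrival time of an "I am alive" message.
        self.arrivals.append(t)

    def next_deadline(self):
        # Expected arrival = last arrival + mean gap; the safety margin
        # grows with the variability of past gaps.
        gaps = [b - a for a, b in zip(self.arrivals, self.arrivals[1:])]
        mean = sum(gaps) / len(gaps)
        var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
        return self.arrivals[-1] + mean + self.margin_factor * var ** 0.5

    def suspect(self, now):
        # The monitored process is suspected once the deadline passes.
        return now > self.next_deadline()
```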
Because of the complicated interaction of the sludge compost components, the compost quality evaluation system exhibits non-linearity and uncertainty. According to the physical circumstances of sludge compost, a compost quality evaluation modeling method based on a wavelet neural network is presented. We reduce the number of wavelet basis functions by analyzing the sparsity of the sample data, and train the network with a gradient-descent learning algorithm. We select the indexes of sludge compost quality and take high-temperature duration, degradation rate, nitrogen content, average oxygen concentration, and maturity degree as the evaluation parameters. With the strong self-learning ability, function approximation capability, and fast convergence rate of the wavelet neural network, the modeling method can evaluate compost quality by learning the index information of sludge compost quality. The experimental results show that this method is feasible and effective.
{"title":"The Study of Compost Quality Evaluation Modeling Method Based on Wavelet Neural Network for Sewage Treatment","authors":"Jingwen Tian, Meijuan Gao, Yanxia Liu, Hao Zhou","doi":"10.1109/CIS.2007.122","DOIUrl":"https://doi.org/10.1109/CIS.2007.122","url":null,"abstract":"Because of the complicated interaction of the sludge compost components, it makes the compost quality evaluation system appear the non-linearity and uncertainty. According to the physical circumstances of sludge compost, a compost quality evaluation modeling method based on wavelet neural network is presented. We adopt a method of reduce the number of the wavelet basic function by analysis the sparse property of sample data, and use the learning algorithm based on gradient descent to train network. We select the index of sludge compost quality and take the high temperature duration, degradation rate, nitrogen content, average oxygen concentration and maturity degree as the evaluation parameters. With the ability of strong self-learning and function approach and fast convergence rate of wavelet neural network, the modeling method can truly evaluate the compost quality by learning the index information of sludge compost quality. The experimental results show that this method is feasible and effective.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114503972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
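For orientation, the forward pass of a wavelet neural network of this kind can be sketched as below: each hidden node applies a dilated, translated wavelet to a projection of the input, and the output is a weighted sum. The Morlet basis and all parameter names are assumptions; the gradient-descent training loop from the abstract is omitted.

```python
import math

def morlet(t):
    # A common wavelet basis choice (the paper does not fix one here).
    return math.cos(1.75 * t) * math.exp(-t * t / 2.0)

def wnn_forward(x, units, v, b0):
    # units: (direction w, translation b, dilation a) per hidden node;
    # v: output weights; b0: output bias. Names are illustrative.
    out = b0
    for (w, b, a), vj in zip(units, v):
        z = (sum(wi * xi for wi, xi in zip(w, x)) - b) / a
        out += vj * morlet(z)
    return out
```

With five inputs (the evaluation parameters above) and one output (the quality score), training would adjust `units`, `v`, and `b0` by gradient descent on the prediction error.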
In this paper, a new method to estimate the intrinsic dimensionality of a high dimensional dataset is proposed. Based on a neighborhood graph, our method calculates non-negative weight coefficients for each data point from its neighbors, and the number of dominant positive weights among the reconstruction coefficients is regarded as a faithful guide to the intrinsic dimensionality of the dataset. The proposed method requires no parametric assumptions about the data distribution and is easy to implement in the general framework of manifold learning. Experimental results on several synthesized datasets and real datasets show the facility of our method.
{"title":"Intrinsic Dimensionality Estimation with Neighborhood Convex Hull","authors":"Chun-Guang Li, Jun Guo, Xiangfei Nie","doi":"10.1109/CIS.2007.104","DOIUrl":"https://doi.org/10.1109/CIS.2007.104","url":null,"abstract":"In this paper, a new method to estimate the intrinsic dimensionality of high dimensional dataset is proposed. Based on neighborhood graph, our method calculates the non-negative weight coefficients from its neighbors for each data point and the numbers of those dominant positive weights in reconstructing coefficients are regarded as a faithful guide to the intrinsic dimensionality of dataset. The proposed method requires no parametric assumption on data distribution and is easy to implement in the general framework of manifold learning. Experimental results on several synthesized datasets and real datasets have shown the facility of our method.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121872724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
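The reconstruction step can be sketched with a small projected-gradient least-squares fit under non-negativity; the solver and the dominance threshold below are illustrative assumptions, not the paper's exact procedure.

```python
def nonneg_weights(x, neighbors, iters=500, lr=0.01):
    # Reconstruct x from its neighbors under non-negative weights
    # (a convex-hull-style fit) via projected gradient descent.
    k, d = len(neighbors), len(x)
    w = [1.0 / k] * k
    for _ in range(iters):
        # Residual of the current reconstruction.
        r = [sum(w[j] * neighbors[j][c] for j in range(k)) - x[c]
             for c in range(d)]
        for j in range(k):
            g = sum(r[c] * neighbors[j][c] for c in range(d))
            w[j] = max(0.0, w[j] - lr * g)  # project back to w >= 0
    return w

def estimate_local_dim(x, neighbors, thresh=0.05):
    # The count of dominant positive weights is the local guide to
    # intrinsic dimensionality described in the abstract.
    return sum(1 for wj in nonneg_weights(x, neighbors) if wj > thresh)
```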
Aimed at cluster stability and load balancing, a novel cluster-based routing algorithm is proposed in this paper. To maintain cluster stability, the speed and energy of mobile nodes, rather than identity or connectivity, are taken as the basis for cluster-head election. All nodes are encouraged to share the role of cluster head, so as to balance the traffic load of the network and avoid failures caused by certain nodes exhausting their energy. Based on the clustering, a backbone network composed of cluster heads, gateways, and compound gateways is constructed, which reduces the complexity of maintaining routing and topology information and simplifies the routing process in large hierarchical ad hoc networks. Simulation results show that, compared with the lowest-ID and highest-connectivity algorithms, it performs better in network life duration, energy consumption, and signaling overhead.
{"title":"A Novel Cluster-Based Routing Algorithm in Ad Hoc Networks","authors":"Dongni Li","doi":"10.1109/CIS.2007.26","DOIUrl":"https://doi.org/10.1109/CIS.2007.26","url":null,"abstract":"Aimed at the stability of clusters and load balancing, a novel cluster-based routing algorithm is proposed in this paper. In order to maintain the stability of clusters, speed and energy of mobile nodes, but not the identity and connectivity, are taken as the basis of cluster-head election. Try to make all the nodes share the role of cluster-head, so as to balance the traffic loads of the network, and to avoid invalidity caused by certain nodes exhausting energy. Based on clustering, the backbone network composed by cluster-heads, gateways, and compound gateways is constructed, which reduces the complexity of maintaining routing information and topology information, and simplifies the routing process in large hierarchical ad hoc networks. Simulation results show that compared to the lowest ID and largest connectivity algorithms, it has better performance on network life duration, energy consumption, and signaling overhead.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128520130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
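An election rule in the spirit of the abstract — favor slow-moving, high-energy nodes over identity or connectivity — can be sketched as below. The scoring function is an assumption; the paper states only that speed and energy drive the election.

```python
def elect_cluster_head(nodes):
    # nodes: list of (node_id, speed, residual_energy).
    # Hypothetical score: high residual energy and low mobility win,
    # so the elected head is stable and unlikely to die of exhaustion.
    def score(node):
        _, speed, energy = node
        return energy / (1.0 + speed)
    return max(nodes, key=score)[0]
```

Re-running the election as speeds and energies change is what rotates the cluster-head role across nodes and balances the load.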
The experiment design of PIGA (pendulous integrating gyro accelerometer) tests on a centrifuge is studied. Building on the identification method for high-order PIGA coefficients on a precision centrifuge with a counter-rotating platform, which isolates the rotary motion caused by the centrifuge arm and thus improves the PIGA test environment, the method of D-optimal design is used in the data processing for separating the error model coefficients so as to optimize the test plans. The relation between the factor values taken in the testing procedures and the estimation accuracy is discussed according to the D-criterion by way of simulation analysis. The simulation results show that by optimizing the factor values in a test plan, the calibration accuracy can be greatly improved.
{"title":"Study on PIGA Test Method on Centrifuge","authors":"Yong-hui Qiao, Yu Liu, Bao-ku Su","doi":"10.1109/CIS.2007.234","DOIUrl":"https://doi.org/10.1109/CIS.2007.234","url":null,"abstract":"The experiment design of PIGA test on centrifuge has been studied. Based on the identification method for high-order coefficients of PIGA (pendulous integrating gyro accelerometer) on precision centrifuge with counter-rotating platform, which can isolate the rotary movement caused by centrifuge arm so as to improve the environment of PIGA test on centrifuge, the method of the D-optimal designs is used in the data processing for separating the error model coefficients to optimize the test plans. The relation between the different values of the factors taken in the testing procedures and the estimated accuracy is discussed according to the D-criterion by way of simulation analysis. The results of the simulation analysis show that by optimizing the values of the factors in a test plan, the calibrating accuracy can be greatly improved.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124594692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
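The D-criterion comparison of candidate test plans can be illustrated numerically: build the information matrix X^T X for an assumed polynomial error model at the chosen factor levels and compare determinants — the plan with the larger determinant gives better-conditioned coefficient estimates. The quadratic model form is an assumption; the paper's actual PIGA error model has more terms.

```python
def det(m):
    # Gaussian-elimination determinant with partial pivoting,
    # for a small square matrix.
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def d_criterion(levels, order=2):
    # Design matrix for a polynomial model evaluated at the factor
    # levels of the test plan; D-criterion = det(X^T X).
    X = [[x ** k for k in range(order + 1)] for x in levels]
    p = order + 1
    xtx = [[sum(X[r][i] * X[r][j] for r in range(len(X)))
            for j in range(p)] for i in range(p)]
    return det(xtx)
```

Spreading the factor levels over the feasible range (e.g. endpoints plus center) scores far better under the D-criterion than clustering them, which is the qualitative conclusion the simulation analysis reports.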