Credit risk is the primary source of risk for financial institutions. The support vector machine (SVM) is a good classifier for binary classification problems, and its learning results are strongly robust. In our application, we tune the penalty parameters with a grid-search method to achieve better generalization performance. In this paper, rough-set attribute reduction is applied as a preprocessor to delete redundant attributes, and a default-prediction model for housing mortgage loans is then established using an SVM. Its classification performance is better than that of several other classification algorithms.
{"title":"Defaults Assessment of Mortgage Loan with Rough Set and SVM","authors":"Bo Wang, Yongkui Liu, Yanyou Hao, Shuang Liu","doi":"10.1109/CIS.2007.159","DOIUrl":"https://doi.org/10.1109/CIS.2007.159","url":null,"abstract":"Credit risk is the primary source of risk to financial institutions. Support vector machine (SVM) is a good classifier to solve binary classification problem. The learning results of SVM possess stronger robustness. We adjust these penalty parameters to achieve better generalization performances with using grid-search method in our application. In this paper the attribute reduction of rough set has been applied as preprocessor so that we can delete redundant attributes, then default prediction model of the housing mortgage loan is established by using SVM. Classification performance is better than some other classification algorithms.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"19 4-5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123694489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extracting video keyframes facilitates the browsing and retrieval of video content. However, since the "keyframe" is a subjective concept involving vision and psychology, it is difficult to describe with low-level video features. In this paper, we propose a keyframe-extraction method based on visual attention and affective models. Concretely, film elements crucial to human attention, such as characters, lighting, and camera motion, are fused into a visual attention model, and the film is segmented into scenes according to a short-term memory model. The "scene importance" is then computed from the affective arousal, which determines the audience's excitement in the 2D emotion space. Finally, scene keyframes are extracted according to the attention model and the scene importance. Experimental results indicate that the keyframes extracted by our approach are consistent with human perception and should facilitate further semantic analysis.
{"title":"Extraction of Semantic Keyframes Based on Visual Attention and Affective Models","authors":"Zhicheng Zhao, A. Cai","doi":"10.1109/CIS.2007.9","DOIUrl":"https://doi.org/10.1109/CIS.2007.9","url":null,"abstract":"The Extraction of video keyframe is convenient for browsing and retrieving of video content. However, since the \"keyframe\" is a subjective concept which involves in vision and psychology, it is difficult to be described by low-level features of video. In this paper, we propose a method of keyframe extraction based on visual attention and affective models. To be concrete, film elements such as character, lighting and camera motion, crucial to human attention, are fused into a visual attention model, and the film is segmented into scenes according to a short-time memory model. The \"scene importance\" is then computed by using the affective arousal which determines audience's excitability in the 2D emotion space. Finally, according to the attention model and the scene importance, scene keyframes are extracted. Experimental results indicate that keyframes extracted by our approach are coincident with human perception, and would be in favor of further semantic analysis.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"413 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122803419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, a class of post-processing approaches has been proposed to improve the recognition performance of LDA in face recognition. An in-depth analysis, however, has not yet been presented to explain the effectiveness of the post-processing approach. In this paper, we investigate the rationale of the post-processing approach and demonstrate its interrelationship with the image Euclidean distance (IMED) method. We then use the FERET face and PolyU palmprint databases to evaluate the post-processed LDA method. Experimental results confirm the effectiveness of the post-processing approach and reveal its relation to IMED.
{"title":"Theoretical Investigation on Post-Processed LDA for Face and Palmprint Recognition","authors":"Jian-jun Hao, W. Zuo, Kuanquan Wang","doi":"10.1109/CIS.2007.106","DOIUrl":"https://doi.org/10.1109/CIS.2007.106","url":null,"abstract":"Recently, a class of post-processing approaches has been proposed to improve the recognition performance of LDA in face recognition. In-depth analysis, however, has not been presented to reveal the effectiveness of the post-processing approach. In this paper, we investigate the rationale of the post-processing approach, and demonstrate the interrelationship of the post-processing approach and the image Euclidean distance method (IMED). We then use the FERET face and the PolyU palmprint databases to evaluate the post-processed LDA method. Experimental results indicate the effectiveness of the post-processing approach and reveal its relation to IMED.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131303884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet worm is a menace to the security of Internet users, and detecting and defending against worms has become an important research topic in Internet security. This paper proposes a robust estimation method for evaluating the worm infection rate. First, the robust estimator of the worm infection rate is derived from the robust maximum-likelihood estimation principle; the corresponding elements of the equivalent weight matrix, constructed from the residuals and some chosen weight functions, are then given; the error influence functions of the robust estimator and the least-squares estimator are analyzed; finally, a simulated example is carried out. The results show that the robust estimation is effective and reliable in resisting the bad influence of outlying scan data on the estimated worm infection rate, with fast computational convergence.
{"title":"A Robust Estimator for Evaluating Internet Worm Infection Rate","authors":"Y. Deng, Guanzhong Dai, Shuxin Chen","doi":"10.1109/CIS.2007.116","DOIUrl":"https://doi.org/10.1109/CIS.2007.116","url":null,"abstract":"The Internet worm is a menace for the security of the Internet users. To detect and protect the Internet worm becomes an important research topic in the field of Internet security. A robust estimation method for evaluating worm infection rate is proposed in this paper. The robust estimator of worm infection rate is derived based on the robust maximum likelihood estimation principle at first; The corresponding elements of the equivalent weight matrix constructed by the residuals and some chosen weight functions are given; The error influence functions related to the robust estimator and the least squares estimator are respectively analyzed; At last, a simulated example is carried out. It is shown that the robust estimation is effective and reliable in resisting the bad influence of the outlying scan data on the estimated worm infection rate with high computation convergence speed.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116558962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although feature reduction based on classical rough set theory is an effective way to select informative genes, its classification accuracy is usually no higher than that of other tumor-related gene-selection and tumor-classification approaches, because gene expression values must be discretized before gene reduction, which causes information loss in tumor classification. Therefore, the neighborhood rough set model proposed by Hu Qing-Hua is introduced for tumor classification; it omits the discretization step, so no information is lost before gene reduction. Experiments on two well-known tumor datasets show that gene selection using the neighborhood rough set model clearly outperforms that using classical rough set theory, and the results also show that most genes in the selected subset not only yield higher accuracy but are also tumor-related.
{"title":"Gene Selection Using Neighborhood Rough Set from Gene Expression Profiles","authors":"Shulin Wang, Huowang Chen, Shutao Li","doi":"10.1109/CIS.2007.169","DOIUrl":"https://doi.org/10.1109/CIS.2007.169","url":null,"abstract":"Although adopting feature reduction in classic rough set theory to select informative genes is an effective method, its classification accuracy rate is usually not higher compared with other tumor-related gene selection and tumor classification approaches; for gene expression values must be discretized before gene reduction, which leads to information loss in tumor classification. Therefore, the neighborhood rough set model proposed by Hu Qing-Hua is introduced to tumor classification, which omits the discretization procedure, so no information loss occurs before gene reduction. Experiments on two well-known tumor datasets show that gene selection using neighborhood rough set model obviously outperforms using classic rough set theory and experiment results also prove that the most of the selected gene subset not only has higher accuracy rate but also are related to tumor.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129956779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Based on the Schnorr cryptosystem, this paper proposes a new forward-secure threshold signature scheme. It ensures that both the signing key and the signatures are forward-secure by efficiently hiding the current secret key in the signing phase and making effective use of the time parameter in the verification phase. The scheme has the new property that an attacker cannot forge any valid signature pertaining to the past, even if he has corrupted at least a threshold number of members and has obtained the current key. The scheme is also proven forward-secure, based on the hardness of factoring, in the random oracle model.
{"title":"A New Forward-Secure Threshold Signature Scheme Based on Schnorr Cryptosystem","authors":"Guosheng Cheng, Cuilan Yun","doi":"10.1109/CIS.2007.18","DOIUrl":"https://doi.org/10.1109/CIS.2007.18","url":null,"abstract":"Based on Schnorr cryptosystem, this paper proposes a new forward-secure threshold signature scheme. It ensures that both the signature's secret key and the signature are forward-secure through efficiently hiding the current secret key in the signature phase and using the time-parameter effectively in the verification phase. This scheme has the new property that it is infeasible for an attacker to forge any valid signature pertaining to the past even if he has corrupted up to more than or equal to the threshold members and has obtained the current key. It is also proven to be forward secure based on the hardness of factoring in the random oracle model.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132600777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because cross-domain file sharing is important and existing schemes have shortcomings, an efficient credential-based scheme is proposed in this paper. Its features are symmetric-key credentials, delegation without the intervention of any centralized administrator, and no need for traditional ACLs or for mapping remote group names to local identifiers. Symmetric-key credentials are flexible and computationally efficient, and they support useful revocation mechanisms. Dispensing with a centralized administrator eases management and administration overhead and avoids a single point of failure. Dispensing with traditional ACLs and with the mapping of remote group names to local identifiers reduces the burden on the storage server. The processes of cross-domain delegation, authentication, and revocation are discussed, and the security analysis indicates that the scheme is sound.
{"title":"An Efficient Credential-Based Scheme for Cross-Domain File Sharing","authors":"Lanxiang Chen, D. Feng","doi":"10.1109/CIS.2007.66","DOIUrl":"https://doi.org/10.1109/CIS.2007.66","url":null,"abstract":"As cross-domain file sharing is important and existing schemes have some shortcomings, an efficient credential-based scheme is proposed in this paper. Symmetric-key credential, delegation without intervention of any centralized administrator, no need for traditional ACL and mapping remote group names to local identifiers are the features of the scheme. Symmetric-key credential is flexible, computationally efficient and can provide some useful revocation means. The way of no centralized administrator can ease management and administration overheads, and also avoid center point failure. The features of no need for traditional ACL and mapping remote group names to local identifiers have the advantage of reducing the burden of storage server. The processes of cross- domain delegation, authentication and revocation are discussed. And the security analysis indicates that the scheme is good.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132745515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervised anomaly intrusion detection systems (IDSs) based on the support vector machine (SVM) classification technique have attracted much attention. In these systems, the characteristics of the kernels strongly influence the learning and prediction results of the IDS. However, selecting feasible parameters can be time-consuming as the number of parameters and the size of the dataset increase. In this paper, an immune evolutionary kernel-parameter-selection approach is proposed. Through the simulation of denial-of-service attacks in mobile ad hoc networks (MANETs), the resulting dataset is used to compare the prediction performance of different kernel types. The parameter-selection efficiency of the proposed approach is also compared with that of the differential evolution algorithm.
{"title":"A Parameter Selection Approach for Mixtures of Kernels Using Immune Evolutionary Algorithm and its Application to IDSs","authors":"Chun Yang, Haidong Yang, F. Deng","doi":"10.1109/CIS.2007.188","DOIUrl":"https://doi.org/10.1109/CIS.2007.188","url":null,"abstract":"Supervised anomaly intrusion detection systems (IDSs) based on Support Vector Machines (SVMs) classification technique have attracted much more attention today. In these systems, the characteristics of kernels have great in- fluence on learning and prediction results for IDSs. How- ever, selecting feasible parameters can be time-consuming as the number of parameters and the size of the dataset in- crease. In this paper, an immune evolutionary based ker- nel parameter selection approach is proposed. Through the simulation of the denial of service attacks in mobile ad-hoc networks (MANETs), the result dataset is used for compar- ing the prediction performance using different types of ker- nels. At the same time, the parameter selection efficiency of the proposed approach is also compared with the differen- tial evolution algorithm.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132863160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because approaches based on global features cannot achieve the expected query results, region-based image retrieval, which exploits local image information, has become the focus of recent research. The objects of interest generally occupy only a small part of an image, so region-based retrieval schemes must segment images into distinct object regions; however, accurate object segmentation is still beyond current computer vision techniques. Here, we propose a feasible image retrieval scheme based on hierarchical uniform segmentation, which avoids the complexity of object segmentation. First, the query image is segmented into equal-sized blocks at different hierarchical levels, with more blocks at higher levels. Then, according to the similarity metrics between these variously sized blocks and the segments of the candidate images, images containing the query objects can be retrieved, together with information about the scales and locations of the query objects in the retrieved images. Finally, the proposed scheme is tested on a database of 500 images; the retrieval accuracy reaches 78% at the optimal similarity threshold, which is comparable to that of region-based schemes.
{"title":"Image Retrieval with Simple Invariant Features Based Hierarchical Uniform Segmentation","authors":"Ming-xin Zhang, Zhaogan Lu, Junyi Shen","doi":"10.1109/CIS.2007.139","DOIUrl":"https://doi.org/10.1109/CIS.2007.139","url":null,"abstract":"According to local information of images, region- based image retrieval is the focus of recent research works, as the approaches based global features can not achieve the expectation querying results. The objects of interest generally occupy only one small part of images, so the image segmentation with different object regions must be conducted for the region-based image retrieval schemes. However, accurate object segmentation is still beyond current computer vision technique. Here, we proposed one feasible image retrieval scheme based the hierarchical uniform segmentations, which avoid the complexity of image segmentations. Firstly, the querying image is segmented into equal blocks at different hierarchical levels, and the more blocks with larger hierarchical levels. Then, according to the similar metrics of these different size blocks to the expectation image into segmentations, the images containing querying objects can be retrieved with information about scales and locations of query objects in retrieved images. Finally, the proposed image retrieval schemes are tested by experiments via database with 500 images, and the retrieval accuracy can achieve 78% for the optimal similar metric threshold, and is comparable to that of region-based schemes.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131095821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because existing connection methods, which mostly consider buyers' attributes, do not sufficiently account for the trust level of buyers and sellers, a connection method based on both buyers and sellers is proposed; it raises the weight of users' trust levels in computing the probability of a successful transaction. In this paper, we propose a concrete method for computing the attribute values used in the new connection method. We use an improved trust model based on multiple factors to compute users' creditability degrees, which improves the validity of these degrees. Both theory and experiments show that this method is more objective.
{"title":"A New Method Considering Creditability Degree and Connection of Buyers and Sellers in E-Broker System","authors":"Shaomin Zhang, Wei Wang","doi":"10.1109/CIS.2007.147","DOIUrl":"https://doi.org/10.1109/CIS.2007.147","url":null,"abstract":"In the situation that there is not enough consideration about the trust level of the buyers and sellers in the existing connection methods which considers buyers' attributes more, a connection method based on the buyers and sellers is proposed, which heightens the importance of the trust level of the users in the computing of the transaction success probability. In this paper, we propose a material computing method for the attribute value, which is used in the new connection method. We use an improved trust model based on multiple factors for computing the users' creditability degree, which heightens the validity of the user's creditability degree. Theory and experiments all show that this method is more objective.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134358082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}