A evolution model of computer virus based on immune genetic algorithm
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687464
Jianping Feng, Lihua Wu, Yu Zhang
In this paper, the life-like characteristics of computer viruses and their algorithmic characteristics are studied. An evolution model of the computer virus based on an immune genetic algorithm, drawing inspiration from artificial life, is proposed. A formal definition of the computer virus is introduced, and the evolution operators, comprising a selection operator, a crossover operator, a mutation operator and an immune operator, are presented. The analysis reveals that the computer virus is a possible carrier of the characteristics of biological evolution. Simulation experiments indicate that computer viruses have an enormous potential for self-propagation and self-evolution. Computer viruses thus exhibit characteristics of biological evolution, and the model offers a line of research for improving and enhancing anti-virus technology.
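The abstract names the four operators but gives no pseudocode. Purely as a minimal toy sketch of such an operator loop, assuming a bit-string genome, a toy fitness function, and one common reading of the immune operator (forcing diversification of individuals too close to known "antibody" patterns), none of which is taken from the paper:

```python
import random

GENOME_LEN = 16
POP_SIZE = 30

def fitness(g):                      # hypothetical objective: count of 1-bits
    return sum(g)

def select(pop):                     # selection operator: binary tournament
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):               # crossover operator: one-point
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

def mutate(g, rate=0.02):            # mutation operator: bit flips
    return [1 - b if random.random() < rate else b for b in g]

def immune(g, antibodies, threshold=4):
    # Immune operator (one interpretation, not the paper's definition):
    # heavily mutate individuals too similar to known "antibody" patterns.
    for ab in antibodies:
        if sum(x != y for x, y in zip(g, ab)) < threshold:
            return mutate(g, rate=0.5)
    return g

antibodies = [[0] * GENOME_LEN]      # toy stand-in for known signatures
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for gen in range(50):
    pop = [immune(mutate(crossover(select(pop), select(pop))), antibodies)
           for _ in range(POP_SIZE)]
print(max(map(fitness, pop)))
```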
{"title":"A evolution model of computer virus based on immune genetic algorithm","authors":"Jianping Feng, Lihua Wu, Yu Zhang","doi":"10.1109/PIC.2010.5687464","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687464","url":null,"abstract":"In this paper, the life characteristics of a computer virus and algorithm characteristics are studied. A evolution model of computer virus based on immune genetic algorithm, which draws inspirations from artificial life, is proposed. The formal definition of computer virus is introduced, and the evolution operators which include selection operator, crossover operator, mutation operator and immune operator are presented. It reveals that computer virus is a possible form of the characteristics of biological evolution. The simulation experiments were conducted and it indicates that computer viruses have enormously potential capability of self-propagation and self-evolution. Computer viruses have the characteristics of biological evolution, and the model can provides research thinking for anti-virus technology to improve and enhance.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122643727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Innovative CG content production through advanced APE
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687941
M. Doke, N. Hamaguchi, H. Kaneko, S. Inoue
We have been conducting research and development on a new system called TV for you (TV4U) for the production, distribution and viewing of video content on the Internet. With TV4U, ordinary users with no special skills can produce and publish video content based on real-time 3D computer graphics (CG), and anyone can view it easily. TV4U makes use of a mechanism we have devised, called the Automatic Production Engine (APE), which involves templates for automatically generating production direction for video content. With APE, video content can be created easily, even by ordinary users with no special skills, simply by writing a brief scenario description. So far, APE has only been capable of generating productions within the range of predefined templates already in the system, limiting the video content that can be produced. We have therefore devised a mechanism able to generate production direction according to the current conditions in the video content. Introducing this mechanism has made APE more sophisticated, allowing it to generate production direction for video content that was not possible with previous versions.
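APE's internals are not published in the abstract. As a purely hypothetical sketch of the core idea, choosing a production directive from the current conditions in the content rather than from one fixed template, with all field and directive names invented:

```python
# Hypothetical sketch: derive a production directive from the current
# scene conditions instead of a single predefined template.
def choose_direction(scene):
    # 'scene' is a dict of current conditions in the content (invented keys)
    if scene.get("speakers", 0) >= 2:
        return {"camera": "two-shot", "caption": "dialogue"}
    if scene.get("motion", 0.0) > 0.5:
        return {"camera": "tracking", "caption": "action"}
    return {"camera": "close-up", "caption": "narration"}

print(choose_direction({"speakers": 2, "motion": 0.1}))
```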
{"title":"Innovative CG content production through advanced APE","authors":"M. Doke, N. Hamaguchi, H. Kaneko, S. Inoue","doi":"10.1109/PIC.2010.5687941","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687941","url":null,"abstract":"We have been conducting research and development on a new system called TV for you (TV4U), for production, distribution and viewing of video content on the Internet. With TV4U, ordinary users with no special skills are able to produce and publish video content based on real-time 3D computer graphics (CG), and these can be viewed easily by anyone. TV4U makes use of a mechanism we have devised, called Automatic Production Engine (APE), which involve templates for automatically generating production direction for video content. Through introduction of APE, produced video content can be created easily, even by ordinary users with no special skills, by writing a simple description of a scenario. So far, APE have only been capable of generating productions within the range of predefined templates already in the system, limiting the video content that can be produced. As such, we have devised a mechanism able to generate production direction according to the current conditions in the video content. Introducing this mechanism into APE has made it more sophisticated, and has allowed APE to generate sophisticated production direction for video content that was not possible with previous versions of APE.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122664030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual image hash for tampering detection using Zernike moments
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687938
Yan Zhao, Weimin Wei
In this paper, a new image hashing method using Zernike moments is proposed. The method is based on the rotation invariance of the magnitudes and corrected phases of Zernike moments. First, the input image is divided into overlapping blocks. Zernike moments of these blocks are calculated, and each of the magnitudes and corrected phases is encoded into three bits to form the intermediate hash. Lastly, the final hash sequence is obtained by pseudo-randomly permuting the intermediate hash sequence. Similarity between hashes is measured with the Hamming distance. Experimental results show that the method is robust against most content-preserving attacks, while the Hamming distance between the hashes of two different images exceeds the threshold. The method can be used to detect image tampering and can locate the tampered region in the image.
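A minimal sketch of the pipeline (overlapping blocks, per-block features, 3-bit quantization, key-driven pseudo-random permutation, Hamming distance). The zernike_features helper is a stand-in, NOT a Zernike moment computation, and the quantization rule is assumed, as the abstract does not specify either:

```python
import numpy as np

def zernike_features(block):
    # Stand-in for the paper's per-block Zernike magnitudes/corrected phases;
    # any fixed-length feature vector works for illustrating the pipeline.
    return np.abs(np.fft.fft2(block)).ravel()[:8]

def image_hash(img, block=64, key=42):
    h, w = img.shape
    bits = []
    for y in range(0, h - block + 1, block // 2):      # overlapping blocks
        for x in range(0, w - block + 1, block // 2):
            f = zernike_features(img[y:y + block, x:x + block])
            q = np.clip(f / (f.max() + 1e-9) * 7, 0, 7).astype(int)
            for v in q:                                # 3 bits per value
                bits += [(v >> 2) & 1, (v >> 1) & 1, v & 1]
    bits = np.array(bits, dtype=np.uint8)
    perm = np.random.default_rng(key).permutation(bits.size)  # secret key
    return bits[perm]

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

a = np.random.default_rng(0).random((256, 256))
print(hamming(image_hash(a), image_hash(a)))   # 0 for identical images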
{"title":"Perceptual image hash for tampering detection using Zernike moments","authors":"Yan Zhao, Weimin Wei","doi":"10.1109/PIC.2010.5687938","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687938","url":null,"abstract":"In this paper, a new image hashing method using Zernike moments is proposed. This method is based on rotation invariance of magnitudes and corrected phases of Zernike moments. At first the input image is divided into overlapped blocks. Zernike moments of these blocks are calculated and then each of the amplitudes and phases of modified Zernike moments is then encoded into three bits to form the intermediate hash. Lastly, the final hash sequence is obtained by pseudo-randomly permuting the intermediate hash sequence. Similarity between hashes is measured with the Hamming distance. Experimental results show that this method is robust against most content-preserving attacks. The Hamming distance of Hashes between two different images is bigger than the threshold. This method can be used to detect tampering image, and can locate the tampered region in the image.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123869649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Certificate-based proxy signature
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687580
Jianneng Chen, Zhenjie Huang
Certificate-based public key cryptography was introduced to remove the certificates used in traditional public key cryptography to authenticate users' public keys, and to overcome the key escrow problem of identity-based public key cryptography. Proxy signature schemes allow proxy signers to sign messages on behalf of an original signer. Combining the concept of certificate-based signature with that of proxy signature, in this paper we present the notion of a certificate-based proxy signature based on bilinear pairings and propose a scheme that is secure assuming the hardness of the Computational Diffie-Hellman problem.
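The paper's pairing-based construction is not reproducible from the abstract. Purely to illustrate the delegation-by-warrant workflow common to proxy signatures, here is a sketch that substitutes Ed25519 (from the pyca/cryptography library) for the pairing-based primitives; it shows the structure of delegate/sign/verify, not the paper's scheme:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pairs for the original signer and the proxy (stand-ins for the
# paper's certificate-based, pairing-derived keys).
original = Ed25519PrivateKey.generate()
proxy = Ed25519PrivateKey.generate()

# Delegation: the original signer signs a warrant naming the proxy
# and the scope of the delegated signing rights.
warrant = b"proxy=proxy-pk; scope=invoices; expires=2011-01-01"
delegation = original.sign(warrant)

# Proxy signing: the proxy signs the message bound to the warrant.
message = b"pay 100 units"
proxy_sig = proxy.sign(warrant + message)

# Verification: check both the delegation and the proxy's signature
# (verify() raises InvalidSignature on failure).
original.public_key().verify(delegation, warrant)
proxy.public_key().verify(proxy_sig, warrant + message)
print("proxy signature verified under the warrant")
```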
{"title":"Certificate-based proxy signature","authors":"Jianneng Chen, Zhenjie Huang","doi":"10.1109/PIC.2010.5687580","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687580","url":null,"abstract":"Certificate-based public key cryptography was introduced to remove the use of certificate to ensure the authentication of the user's public key in the traditional cryptography and overcome the key escrow problem in the identity-based public key cryptography. The proxy signature schemes allow proxy signers to sign messages on behalf of an original signer. Combining the concept of certificate-based signature with the concept of proxy signature, in this paper, we present a notion of certificate-based proxy signature based on bilinear parings and proposed a scheme assuming the hardness of Computational Diffie-Hellman Problem.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124175429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach to analyse similarity of business process variants
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687872
Noor Mazlina Mahmod, Syafeeza Ahmad Radzi
This paper addresses the problem of managing variance in process instances for business process modeling. In business work practice, variance is a valuable source of organizational intellectual capital that needs to be captured and capitalized on, as it represents a preferred and successful work practice. It is therefore important to provide an effective method for analyzing the similarity between these variants, since doing so can improve organizational productivity and provide consistency. In this paper, we propose a systematic approach to dealing with the complexity of business process variants by analyzing their structural relationships and execution constructs in order to measure the degree of similarity between variants.
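The abstract does not give the measure itself. As a minimal sketch of combining structural similarity (shared activities) with execution-construct similarity (shared control-flow edges), assuming variants are represented as an activity set plus a set of directed edges and an assumed equal weighting:

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def variant_similarity(v1, v2, w=0.5):
    # v = (activities, edges); edges stand in for execution constructs
    node_sim = jaccard(v1[0], v2[0])        # structural relationship
    edge_sim = jaccard(v1[1], v2[1])        # execution order
    return w * node_sim + (1 - w) * edge_sim

base    = ({"A", "B", "C"}, {("A", "B"), ("B", "C")})
variant = ({"A", "B", "D"}, {("A", "B"), ("B", "D")})
print(variant_similarity(base, variant))   # 0.5*2/4 + 0.5*1/3 ≈ 0.42
```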
{"title":"An approach to analyse similarity of business process variants","authors":"Noor Mazlina Mahmod, Syafeeza Ahmad Radzi","doi":"10.1109/PIC.2010.5687872","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687872","url":null,"abstract":"This paper addresses the problem of managing variance in process instances for business process modeling. In business work practice, variance is a valuable source of organizational intellectual capital that needs to be captured and capitalized as it represents a preferred and successful work practice. Therefore, it is important to provide an effective method to analyze the similarity between these variants since it can bring benefits for organization productivity and provide consistency. Through this paper, we propose a systematic approach to deal with the complexity of business process variants by analyzing the structure relationship and the execution construct in order to measure the similarity degree of the variants.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126186143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The characterization of Wϕ-transitive rationality and acyclic rationality of fuzzy choice functions
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687454
Yonghua Hao, Xuzhu Wang, Caiping Wu, Na Xue
The objective of this paper is to extend Bandyopadhyay's results on rationality conditions for crisp choice functions. By fuzzifying the rationality conditions of the crisp case, we present a necessary and sufficient condition for acyclic rationality and a characterization theorem for Wϕ-transitive rationality under a strong De Morgan triple. As a result, some rationality characterizations of crisp choice functions are generalized.
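For readers outside the fuzzy-choice literature, the textbook notions behind these terms are sketched below; the paper's exact Wϕ-conditions may differ from these standard forms:

```latex
% Background sketch (standard forms; the paper's W_\phi variants may differ).
% Let R be a fuzzy preference relation on X and W a t-norm
% (e.g. W(a,b) = \min(a,b)). Then:
%   W-transitivity:  W(R(x,y), R(y,z)) \le R(x,z)  for all x, y, z in X
%   Acyclicity:      no strict-preference cycle x_1 P x_2 P ... P x_n P x_1
\[
  W\bigl(R(x,y),\,R(y,z)\bigr) \;\le\; R(x,z)
  \qquad\text{and}\qquad
  x \mathbin{P} y \iff R(x,y) > R(y,x).
\]
```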
{"title":"The characterization of Wϕ-transitive rationality and acyclic rationality of fuzzy choice functions","authors":"Yonghua Hao, Xuzhu Wang, Caiping Wu, Na Xue","doi":"10.1109/PIC.2010.5687454","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687454","url":null,"abstract":"The objective of the paper is to extend Bandyopadhyay's results on rationality conditions of crisp choice functions. By fuzzifying the rationality conditions in the crisp case, we present a necessary and sufficient condition for the acyclic rationality and a characterization theorem for the Wϕ-transitive rationality under a strong De Morgan triple. As a result, some rationality characterizations in the crisp choice functions are generalized.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126336747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A collaborative filtering recommendation algorithm based on improved similarity measure method
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687455
Y. Wu, Jianguo Zheng
Collaborative filtering is one of the most successful recommendation technologies in e-commerce recommender systems. With the development of e-commerce, the numbers of users and commodities grow rapidly, and the performance of traditional recommendation algorithms degrades. We therefore propose a new similarity measure that automatically generates a weighting factor to dynamically combine item-attribute similarity with rating similarity into a single item similarity, from which the nearest neighbors of an item are found and the item's rating is predicted for recommendation. Experimental results show that the algorithm improves the stability and precision of recommendation and alleviates the cold-start problem.
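The abstract omits the weighting rule. A minimal sketch of the idea, blending attribute similarity with rating similarity and trusting ratings more as co-rating evidence grows, with the shrinkage rule and its constant assumed rather than taken from the paper:

```python
import numpy as np

def cosine(u, v):
    n = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / n) if n else 0.0

def item_similarity(ratings_i, ratings_j, attrs_i, attrs_j, n_corated):
    # Assumed weighting rule: the more users co-rated both items, the
    # more weight rating similarity gets; cold items fall back to attributes.
    w = n_corated / (n_corated + 10.0)      # shrinkage constant assumed
    rating_sim = cosine(ratings_i, ratings_j)
    attr_sim = cosine(attrs_i, attrs_j)
    return w * rating_sim + (1 - w) * attr_sim

# Cold-start item: no co-ratings, so similarity comes from attributes alone.
print(item_similarity(np.zeros(5), np.zeros(5),
                      np.array([1., 0., 1.]), np.array([1., 0., 0.]),
                      n_corated=0))
```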
{"title":"A collaborative filtering recommendation algorithm based on improved similarity measure method","authors":"Y. Wu, Jianguo Zheng","doi":"10.1109/PIC.2010.5687455","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687455","url":null,"abstract":"Collaborative filtering recommendation algorithm is one of the most successful technologies in the e-commerce recommendation system. With the development of e-commerce, the magnitudes of users and commodities grow rapidly; the performance of traditional recommendation algorithm is getting worse. So propose a new similarity measure method, automatically generate weighting factor to combine dynamically item attribute similarity and score similarity, form a reasonable item similarity, which bring the nearest neighbors of item, and predict the item's rating to recommend. The experimental results show the algorithm enhance the steady and precision of recommendation, solve cold start issue.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130017980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving decision support systems with a High Stake Community Contributed Knowledge Base
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687405
Y. Naudet, T. Latour, Géraldine Vidou, Y. Djaghloul
In this paper, we present an approach to high-stakes decision making based on the processing of uncontrolled knowledge originating from e-communities. Knowledge bases fed by such communities can be a very rich source for diagnosis or decision-support systems. However, they bear inherent problems due in particular to the heterogeneity, vagueness, incompleteness, uncertainty, and varied origins of the knowledge. The High Stake Community Contributed Knowledge Base (HSCCKB) approach addresses the challenge of exploiting such heterogeneous data in high-stakes decisions. We propose a dedicated knowledge model as well as directions for evaluating the important knowledge aspects, and focus on the decisional architecture and process involved in HSCCKB-based decision-support systems.
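The paper's knowledge model is not reproduced in the abstract. As a hypothetical sketch of the kind of record a community-fed base must carry to track the aspects listed above (origin, vagueness, uncertainty, corroboration), with every field name invented:

```python
from dataclasses import dataclass, field

@dataclass
class CommunityFact:
    # Invented fields illustrating the knowledge aspects the paper lists:
    # origin/heterogeneity, uncertainty, vagueness, completeness.
    statement: str                 # the contributed knowledge itself
    origin: str                    # which e-community / contributor
    confidence: float = 0.5        # uncertainty estimate in [0, 1]
    vagueness: float = 0.0         # imprecision of the wording, in [0, 1]
    supporting_ids: list = field(default_factory=list)  # corroborating facts

fact = CommunityFact("symptom X co-occurs with fault Y",
                     origin="forum-A", confidence=0.7)
print(fact)
```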
{"title":"Improving decision support systems with a High Stake Community Contributed Knowledge Base","authors":"Y. Naudet, T. Latour, Géraldine Vidou, Y. Djaghloul","doi":"10.1109/PIC.2010.5687405","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687405","url":null,"abstract":"In this paper, we present an approach for high stake decision making based on the processing of uncontrolled knowledge originated from e-communities. Knowledge bases fed by such communities can provide a very rich source for a diagnosis or decision-support systems. However, they inherently bear inherent problems due in particular to knowledge heterogeneity, vagueness, completeness, uncertainty, and origins. In the High Stake Community Contributed Knowledge Base (HSCCKB) approach, the challenge of exploiting heterogeneous data in high stake decisions is addressed. We propose here a dedicated knowledge model as well as tracks for the evaluation of important knowledge aspects, and focus on the decisional architecture and process involved in HSCCKB-based decision support systems.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129618693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realistic real-time rendering for ocean waves on GPU
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5688013
Wenhui Zhang, Huan Zhou, L. Tang, X. Zhou
Creating and rendering a realistic ocean is one of the most daunting tasks in computer graphics. An efficient algorithm is used to render ocean waves by exploiting the parallelism and programmability of the GPU and the vertex-texture sampling introduced in Shader Model 3.0. The ocean model is optimized with level-of-detail (LOD) technology. Real-time wave simulation is achieved with a time-indexed animated texture: 250 frames of the ocean-surface height map are stored and used to displace the grid vertices. Illumination effects of the water, such as reflection, refraction and the Fresnel effect, are rendered. Experimental results show that the method meets the photorealism and real-time requirements (>60 fps) and can be applied to generate real-time water in virtual reality.
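The Fresnel term used to blend reflection and refraction is commonly approximated by Schlick's formula, F = F0 + (1 - F0)(1 - cos θ)^5. Below is a NumPy sketch as a CPU stand-in for the GPU shader, with the animated height map reduced to a simple wrapping frame lookup; the frame data and constants are placeholders, not the paper's:

```python
import numpy as np

def schlick_fresnel(cos_theta, f0=0.02):
    # Schlick's approximation; f0 ~ 0.02 is typical for water at normal incidence
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def water_color(reflect_rgb, refract_rgb, cos_theta):
    f = schlick_fresnel(cos_theta)
    return f * reflect_rgb + (1.0 - f) * refract_rgb   # Fresnel blend

# Animated height map: cycle through precomputed frames over time.
frames = np.random.default_rng(0).random((250, 64, 64))  # stand-in textures
def height_at(t, fps=25):
    return frames[int(t * fps) % len(frames)]            # wraps around

print(water_color(np.array([0.6, 0.7, 0.8]),
                  np.array([0.0, 0.2, 0.3]), cos_theta=0.9))
```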
Term rewriting for describing constrained policy graph and conflict detection
Pub Date: 2010-12-01 | DOI: 10.1109/PIC.2010.5687864
Nima Khairdoost, N. Ghahraman
A constrained policy graph (CPG) is a conceptual graph that provides a higher-level understanding than pure logic. In this model, access control policies (ACPs) can be described in constrained form according to the system concerned. Besides describing ACPs, the CPG model can combine policies and analyze them to detect possible conflicts arising from their combination. Term rewriting systems are practical systems used in different fields, including automatic theorem proving and the development of computational models. Term rewriting can help with the formal description and verification of ACPs and access control models. In this article, after explaining how policies are described, combined, and checked for conflicts in the CPG model, we express them using term-rewriting rules. These rules are an appropriate tool for the automatic analysis of policies and for detecting conflicts after their combination.
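As an illustration only (the paper's CPG rule set is not given in the abstract), a toy sketch of rewriting combined policies to a normal form and flagging permit/deny conflicts on the same (subject, action, object) triple; the single rewrite rule (group expansion) and the group table are invented:

```python
# Toy sketch: policies as (effect, subject, action, object) terms.
# "Rewriting" here is one invented normalization rule (group expansion)
# followed by a conflict check; real CPG rule sets are far richer.
GROUPS = {"staff": ["alice", "bob"]}        # invented group definition

def rewrite(policies):
    out = []
    for effect, subj, act, obj in policies:
        if subj in GROUPS:                  # rewrite rule: expand groups
            out += [(effect, m, act, obj) for m in GROUPS[subj]]
        else:
            out.append((effect, subj, act, obj))
    return out

def conflicts(policies):
    norm = rewrite(policies)
    return [(p, q) for p in norm for q in norm
            if p[1:] == q[1:] and p[0] == "permit" and q[0] == "deny"]

combined = [("permit", "staff", "read", "fileA"),
            ("deny", "alice", "read", "fileA")]
# Flags the permit/deny pair on (alice, read, fileA) after expansion.
print(conflicts(combined))
```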
{"title":"Term rewriting for describing constrained policy graph and conflict detection","authors":"Nima Khairdoost, N. Ghahraman","doi":"10.1109/PIC.2010.5687864","DOIUrl":"https://doi.org/10.1109/PIC.2010.5687864","url":null,"abstract":"Constrained policy graph (CPG) is an imaginative graph and is in a high level understanding in comparison with pure logic. In this model we can describe the policies in constrained form according to the related system. In addition to the ability of describing ACPs, CPG model is able to combine policies and analyze them in order to detect possible conflicts arising from ACPs combination. Term rewriting systems are practical systems used in different fields including automatic theorem proving and developing computational models. Using term rewriting can help us in formal description and verification of access control policies (ACPs) and models. In this article after expression of how policies are described, their combination and conflict detection in CPG model, we describe them using term rewriting rules. These rules are appropriate tools for the automatic analysis of policies and conflict detection after their combination.","PeriodicalId":142910,"journal":{"name":"2010 IEEE International Conference on Progress in Informatics and Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122434406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}