The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers. Focused crawlers are developed to collect web pages relevant to topics of interest from the Internet. The PageRank algorithm is widely used to rank web pages: it estimates a page's authority by taking the link structure of the Web into account. However, it assigns each outlink the same weight and is independent of topic, which leads to topic drift. In this paper, we propose an improved PageRank algorithm, called "T-PageRank", based on a "topical random surfer" model. Experiments show that a focused crawler using T-PageRank outperforms crawlers based on breadth-first search and standard PageRank.
{"title":"Improvement of PageRank for Focused Crawler","authors":"Fuyong Yuan, Chunxia Yin, Jian Liu","doi":"10.1109/SNPD.2007.458","DOIUrl":"https://doi.org/10.1109/SNPD.2007.458","url":null,"abstract":"The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers. Focused crawler is developed to collect relevant web pages of interested topics form the Internet. The PageRank algorithm is used in ranking web pages. It estimates the page 's authority by taking into account the link structure of the Web. However, it assigns each outlink the same weight and is independent of topics, resulting in topic-drift. In this paper, we proposed an improved PageRank algorithm, which we called \"T- PageRank\", and it based on \"topical random surfer\". The experiment in focused crawler using the T-PageRank has better performance than the Breath-first and PageRank algorithms.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127788722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
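One plausible reading of the "topical random surfer" idea above is standard power-iteration PageRank with both the teleport vector and the outlink weights biased by a per-page topic relevance score. The sketch below is illustrative only, not the authors' exact formulation; `topic_relevance` is an assumed input (e.g. from a text classifier).

```python
import numpy as np

def topical_pagerank(adj, topic_relevance, d=0.85, iters=50):
    """Topic-biased PageRank sketch: teleportation favors on-topic pages,
    and each outlink is weighted by its target's topic relevance
    (hypothetical interpretation of a "topical random surfer")."""
    adj = np.asarray(adj, float)
    rel = np.asarray(topic_relevance, float)
    n = adj.shape[0]
    t = rel / rel.sum()                       # topic-biased teleport vector
    w = adj * rel[np.newaxis, :]              # outlinks weighted by target relevance
    row_sums = w.sum(axis=1, keepdims=True)
    # normalize rows; dangling pages fall back to the teleport distribution
    w = np.divide(w, row_sums, out=np.tile(t, (n, 1)), where=row_sums > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) * t + d * (r @ w)
    return r
```

Because each row of `w` and the teleport vector both sum to one, the scores remain a probability distribution at every iteration.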
The design of cryptographic protocols, especially authentication protocols, remains error-prone even for experts in the area. Protocol engineering, a new notion introduced in this paper for cryptographic protocol design, is derived from ideas in software engineering. We present and illustrate protocol engineering principles in three groups: security requirements analysis principles, detailed protocol design principles, and provable security principles. Furthermore, we show that some of the well-known Abadi and Needham principles are ambiguous. This paper is useful in that it regards cryptographic protocol design as system engineering; it can thus efficiently expose implicit assumptions behind cryptographic protocol designs and offers operational principles for uncovering these subtleties. Although our principles are informal, they are practical, and we believe they will benefit other researchers.
{"title":"Protocol Engineering Principles for Cryptographic Protocols Design","authors":"Ling Dong, Kefei Chen, M. Wen, Yanfei Zheng","doi":"10.1109/SNPD.2007.441","DOIUrl":"https://doi.org/10.1109/SNPD.2007.441","url":null,"abstract":"Design of cryptographic protocols especially authentication protocols remains error-prone, even for experts in this area. Protocol engineering is a new notion introduced in this paper for cryptographic protocol design, which is derived from software engineering idea. We present and illustrate protocol engineering principles in three groups: cryptographic protocol security requirements analysis principles, detailed protocol design principles and provable security principles. Furthermore, we illustrate that some of the well-known Abadi and Needham's principles are ambiguous. This paper is useful in that it regards cryptographic protocol design as system engineering, hence it can efficiently indicate implicit assumptions behind cryptographic protocol design, and present operational principles on uncovering these subtleties. Although our principles are informal, but they are practical, and we believe that they will benefit other researchers.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133447851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To resolve the dilemma introduced by rate-distortion optimization (RDO) in H.264 rate control, a linear model for MAD prediction is commonly applied. However, this model fits image characteristics only locally and can be inefficient in some cases. When spatial correlation is low (i.e., the encoder is processing pictures with varying characteristics), the MAD estimated by the model does not reflect the actual complexity, resulting in an improper quantization parameter (QP) value and degraded image quality. In this paper, the mean square of the AC coefficients is introduced into the R-D model in place of MAD. Experiments show that the proposed rate control scheme performs well compared with JVT-G012.
{"title":"Improvements on MB-layer Rate Control Scheme for H.264 video Using complexity estimation","authors":"Ming Yin, Yun Xie, Fen Guo, Shuting Cai","doi":"10.1109/SNPD.2007.294","DOIUrl":"https://doi.org/10.1109/SNPD.2007.294","url":null,"abstract":"To solve the dilemma introduced by rate-distortion optimization (RDO) in H.264 rate control, the linear model of MAD prediction is applied. However, this model suits image characteristics locally but in some cases it could be inefficient. If there is a low spatial correlation (i.e., the encoder is processing pictures with varying characteristics), the MAD estimated by the model does not stand for the complexity resulting in the improper quantization parameter (QP) value and image quality degradation. In this paper, the mean square of AC coefficients is introduced into R-D model instead of MAD. Experiments show that the proposed rate control scheme performs well when compared with JlT-G012.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"203 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132227110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
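The two complexity measures contrasted in the abstract above can be sketched as follows: MAD between a macroblock and its prediction, versus the mean square of the block's AC (non-DC) DCT coefficients. This is an illustrative sketch using the orthonormal DCT-II matrix, not the paper's encoder code.

```python
import numpy as np

def mad(block, pred):
    """Mean absolute difference between a macroblock and its prediction:
    the quantity the linear prediction model tries to forecast."""
    return np.abs(block.astype(float) - pred.astype(float)).mean()

def ac_energy(block):
    """Mean square of the AC coefficients of the block's 2-D DCT, the
    alternative complexity measure proposed above (illustrative sketch)."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)                # orthonormal DCT-II matrix
    coeffs = C @ block.astype(float) @ C.T
    coeffs[0, 0] = 0.0                        # drop the DC term
    return (coeffs ** 2).mean()
```

A flat block has near-zero AC energy while a textured block does not, which is why AC energy tracks coding complexity even when the MAD predictor is stale.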
This paper presents an improved embedded zerotree wavelet (EZW) coding algorithm. Based on the characteristics of wavelet coefficients and the human visual system (HVS), we improve Shapiro's EZW algorithm. The improved algorithm pays more attention to image edges, because the human visual system is sensitive to distortion of edge information. Experimental results show that the improved algorithm outperforms EZW in reconstructed image quality, especially at low bit rates.
{"title":"Improved Image Coding Algorithm Based on Embedded Zerotree","authors":"Fuheng Liu, Xuhong Liu, Guijuan Kuang, Yi Xiu","doi":"10.1109/SNPD.2007.69","DOIUrl":"https://doi.org/10.1109/SNPD.2007.69","url":null,"abstract":"This paper presents an improved embedded zerotree wavelet (EZW) coding algorithm. According to the characteristic of coefficients and human visual system (HVS), an attempt is made to improve Shapiro's EZW algorithm. The improved algorithm pays more attention to the edge of one image because human visual system is sensitive to the distortion of image edge information. Experiments results show that the improved algorithm performs better than EZW in reconstruction image quality, especially in the case of low rate.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132346239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
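For context, the successive-approximation core of Shapiro's EZW can be sketched as repeated significance passes with a halving threshold; the zerotree symbol coding and the paper's HVS-based edge weighting are omitted here for brevity.

```python
import numpy as np

def ezw_significance_passes(coeffs, n_passes=4):
    """Skeleton of EZW's successive approximation: start from the largest
    power-of-two threshold, halve it each pass, and record which wavelet
    coefficients are significant at each threshold (sketch only)."""
    coeffs = np.asarray(coeffs)
    t = 2 ** int(np.floor(np.log2(np.abs(coeffs).max())))
    passes = []
    for _ in range(n_passes):
        passes.append(np.argwhere(np.abs(coeffs) >= t).tolist())
        t //= 2
        if t == 0:
            break
    return passes
```

Large coefficients (in natural images, typically edges) become significant in the earliest passes, which is the property the HVS-guided improvement exploits.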
Throughout the years, much research has been conducted on human behavior models that focus on individual intelligent human agents. Fewer multi-agent models have addressed group or crowd behavior from a psychological and sociological perspective. We have focused on incorporating crowd behavior models into control-force (police and military) simulations and have developed a real-time crowd simulation capable of generating multiple intelligent civilian agents that exhibit a variety of realistic individual and group behaviors at differing levels of fidelity. One important aspect of modeling realistic crowd behaviors is determining the physiological effects of weapons, both non-lethal and lethal, on humans. To this end, we present our categories of non-lethal weapons and the physiological effects that need to be represented. Additionally, this paper describes an injury model developed by the University of Pennsylvania and its integration into our Crowd Federate.
{"title":"Incorporating a PMF-Based Injury Model into a Multi-Agent Representation of Crowd Behavior","authors":"F. McKenzie, Herbie H. Piland, Min Song","doi":"10.1109/SNPD.2007.537","DOIUrl":"https://doi.org/10.1109/SNPD.2007.537","url":null,"abstract":"Throughout the years, much research has been conducted on human behavior models that focus on individual intelligent human agents. Fewer multi-agent based models have addressed group or crowd behavior from a psychological and sociological perspective. We have been focused on incorporating crowd behavior models into control force (police and military) simulations and have developed a real-time crowd simulation capable of generating multiple intelligent agent civilians that exhibit a variety of realistic individual and group behaviors at differing levels of fidelity. One important aspect of modeling realistic crowd behaviors is determining the physiological effects of weapons, both non-lethal and lethal alike, on humans. To this end, we present our categories of non-lethal weapons and their physiological effects that need to be represented. Additionally, this paper describes an injury model developed by the University of Pennsylvania and its integration into our Crowd Federate.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134215974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the problem of completing an information table, a new method is proposed. First, we define the discernible vector and its addition rule using the indiscernibility relation of rough set theory. Second, we scan the discernible vectors only once, applying the addition rule, to obtain the core attribute set and the important attributes. We then obtain a reduced attribute set by deleting redundant attributes. Finally, according to the dependence relation between condition and decision attributes, we select the important breakpoints and complete the information table under classification-quality constraints. An illustration and experimental results indicate that the method is effective and efficient.
{"title":"Rough Set Approach for Processing Information Table","authors":"E. Xu, Shaocheng Tong, Liangshan Shao, Baiqing Ye","doi":"10.1109/SNPD.2007.309","DOIUrl":"https://doi.org/10.1109/SNPD.2007.309","url":null,"abstract":"To deal with the problem of completing the information table, a new method was studied and proposed. First, define discernible vector and its addition rule by the indiscernible relation in rough set. Second, scan discernible vectors just only one time by the discernible vector addition rule in order to obtain the core attribute set and the important attributes. Then obtain a reduced attribute set by deleting redundant attributes. Finally, according to the dependence relation of condition and decision attributes, select the important breaking points,and complete the information table with the constraints of classification quality. The illustration and experiment results indicate that the method is effective and efficient.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133372721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
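The rough-set notions the abstract relies on can be made concrete with a small sketch: indiscernibility classes, consistency of condition attributes with the decision, and the core (attributes whose removal breaks consistency). The paper's discernible-vector scan is a faster route to the same core set; this is the textbook definition, not the authors' algorithm.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices into indiscernibility (equivalence) classes
    over the given attributes."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    return list(classes.values())

def is_consistent(rows, cond_attrs, decision):
    """True if every equivalence class over cond_attrs maps to a single
    decision value."""
    return all(len({rows[i][decision] for i in cls}) == 1
               for cls in partition(rows, cond_attrs))

def core_attributes(rows, cond_attrs, decision):
    """An attribute belongs to the core iff dropping it makes the table
    inconsistent (standard rough-set definition, used here as a reference
    for what the discernible-vector scan computes)."""
    return [a for a in cond_attrs
            if not is_consistent(rows, [b for b in cond_attrs if b != a],
                                 decision)]
```

For example, in a table where neither condition attribute alone determines the decision, both attributes end up in the core.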
A method called KFST (Foley-Sammon transform with kernels) is proposed, based on the FST (Foley-Sammon transform) and the kernel trick. Projections onto the directions derived by KFST can be used for class-specific feature extraction. The algorithm operates in a feature space induced by kernel functions, so it can construct a large class of nonlinear feature extractors: linear feature extraction in the feature space corresponds to nonlinear feature extraction in the input space. KFST is shown to reduce to a generalized eigenvalue problem. Finally, our method is applied to digit and image recognition problems, and the experimental results show that it is superior to existing methods in terms of spatial distribution and correct classification rate.
{"title":"Feature Extraction by Foley-Sammon Transform with Kernels","authors":"Zhenzhou Chen","doi":"10.1109/SNPD.2007.206","DOIUrl":"https://doi.org/10.1109/SNPD.2007.206","url":null,"abstract":"A method KFST (Foley-Sammon transform with kernels)is proposed which is based on FST (Foley-Sammon transform) and kernel tricks. The projectors onto the directions derived by KFST can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with kernel functions, hence it can be used to construct a large class of nonlinear feature extractors. Linear feature extraction in feature space corresponds to nonlinear feature extraction in input space. KFST is proven to correspond to a generalized eigenvalue problem. Lastly, our method is applied to digits and images recognition problems, and the experimental results show that present method is superior to the existing methods in term of space distribution and correct classification rate.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133799296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
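The generalized eigenvalue problem mentioned above is the same one that appears in the two-class kernel Fisher discriminant, which computes a single kernelized Foley-Sammon direction; KFST extends this to an orthogonal set of directions. The sketch below is illustrative (RBF kernel, regularized within-class matrix), not the paper's formulation.

```python
import numpy as np

def kernel_fisher_projection(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant sketch: solve the generalized
    eigenproblem M a = lambda N a in the kernel expansion coefficients and
    return the training points projected onto the leading direction."""
    X, y = np.asarray(X, float), np.asarray(y)
    K = np.exp(-gamma * np.sum((X[:, None] - X[None, :]) ** 2, axis=2))
    n = len(y)
    m = K.mean(axis=1)
    M = np.zeros((n, n))                     # between-class scatter (kernelized)
    N = reg * np.eye(n)                      # within-class scatter, regularized
    for c in np.unique(y):
        Kc = K[:, y == c]
        mc, nc = Kc.mean(axis=1), Kc.shape[1]
        M += nc * np.outer(mc - m, mc - m)
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    vals, vecs = np.linalg.eig(np.linalg.solve(N, M))
    alpha = np.real(vecs[:, np.argmax(np.real(vals))])
    return K @ alpha                         # 1-D projections of training points
```

On well-separated clusters the projection keeps same-class points close together and different-class points apart, which is the criterion Foley-Sammon directions optimize.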
In this paper, a multi-object motion-tracking method based on both region and feature tracking is proposed for real-time tracking in video surveillance systems. Moving objects are detected through foreground detection. We then extract three features from each moving object: centroid, area, and average luminance. Finally, a similarity function is applied for tracking. Experiments show that the method performs well under dynamic conditions for real-time tracking.
{"title":"A Multi-object Motion-tracking Method for Video Surveillance","authors":"Jiang Dan, Yu Yuan","doi":"10.1109/SNPD.2007.17","DOIUrl":"https://doi.org/10.1109/SNPD.2007.17","url":null,"abstract":"In this paper, a multi-object motion-tracking method based both on region and feature tracking is proposed for the purpose of real-time tracking in video surveillance system. Moving object is detected through foreground detection. Then we extract three features of each moving objects such as centroid, area, and average luminance. At last, the similarity function is applied to tracking. It is proved that the method has good performance under dynamic circumstances for real-time tracking.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133831950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
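A similarity function over the three features named above (centroid, area, average luminance) might be combined as a weighted score and used for greedy frame-to-frame matching. The weights, threshold, and dictionary keys below are illustrative assumptions, not the paper's values.

```python
def similarity(obj, cand, weights=(0.5, 0.3, 0.2)):
    """Weighted similarity over centroid distance, area ratio, and
    average-luminance difference (illustrative weighting)."""
    wp, wa, wl = weights
    dist = ((obj["cx"] - cand["cx"]) ** 2 + (obj["cy"] - cand["cy"]) ** 2) ** 0.5
    s_pos = 1.0 / (1.0 + dist)                             # closer is better
    s_area = min(obj["area"], cand["area"]) / max(obj["area"], cand["area"])
    s_lum = 1.0 - abs(obj["lum"] - cand["lum"]) / 255.0    # 8-bit luminance
    return wp * s_pos + wa * s_area + wl * s_lum

def match(tracked, detections, threshold=0.6):
    """Greedy matching: each tracked object claims its most similar unclaimed
    detection, provided the similarity clears the threshold."""
    assignments, free = {}, list(range(len(detections)))
    for tid, obj in tracked.items():
        if not free:
            break
        best = max(free, key=lambda j: similarity(obj, detections[j]))
        if similarity(obj, detections[best]) >= threshold:
            assignments[tid] = best
            free.remove(best)
    return assignments
```

Unmatched tracks or detections would then be handled as exits and new entries, respectively.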
The purpose of this paper is to implement parallel testing in a single-processor automatic test system and to improve test efficiency at lower test cost. The main factor affecting the efficiency of a test system is the performance of its parallel task scheduling algorithm. This paper puts forward a heuristic parallel task scheduling algorithm, scheduling-Q, that fits the characteristics of an automatic test system. Each test task uses a set of resources to test the units under test, so the multi-threading technique can be used to implement single-processor parallel testing. In such a system, some test tasks can be executed under different resource allocations, and scheduling-Q adapts well to this characteristic. It schedules test tasks according to each task's earliest starting time and the generalized resource loading. The generalized resource loading is embodied as the task resource-set loading based on the resource allocation mode and the task resource-set loading based on the task's starting time. Test resources with heavier loading have more opportunities to obtain tasks and are always busy, so resource loadings are balanced to a degree and the parallel performance of the test system is improved. In addition, the algorithm adopts a heuristic local-optimum search strategy, which markedly reduces its time complexity.
{"title":"A Task Scheduling Algorithm of Single Processor Parallel Test System","authors":"Jiajing Zhuo, Chen Meng, Minghu Zou","doi":"10.1109/SNPD.2007.383","DOIUrl":"https://doi.org/10.1109/SNPD.2007.383","url":null,"abstract":"The purpose of this paper is to implement parallel test in the single processor auto test system and to improve the test efficiency with a lower test cost. The main factor that impacts the test efficiency of test system is the performance of the parallel task scheduling algorithm. This paper puts forward a heuristic parallel task scheduling algorithm: scheduling-Q which can meet the characteristics of the auto test system. Every test tasks uses some resources to put test the units under test. So, we can use the multi-threading technique to implement single processor parallel test. In test system some test tasks can be executed with different resource allocations. The task scheduling algorithm: scheduling-Q adapts well to this characteristic. It schedules the test tasks according to the task's earliest starting time and the test generalized resource loading. The generalized resource loading is embodied as task resources set loading based on resources allocation mode and task resources set loading based on task's starting time. The test resources with bigger loading have more opportunities to obtain task and are always in a busy state. Thus resources loadings can be balanced to a degree. So the parallel performance of test system can be improved with the algorithm. In addition, the algorithm adopts the strategy of heuristic local optimum search. The time complexity of the algorithm is decreased obviously.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122947153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
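A greedy list scheduler in the spirit described above can be sketched as follows: assign each task the least-loaded resources so that start times stay early and loadings stay balanced. This is a simplified illustration; scheduling-Q's actual loading formulas and local-optimum search are defined in the paper.

```python
import heapq

def schedule(tasks, resources):
    """Greedy load-balancing scheduler sketch.
    tasks: list of (name, duration, needed_resource_count).
    Returns a plan of (name, start_time, resources_used)."""
    free_at = {r: 0.0 for r in resources}   # when each resource is next free
    plan = []
    for name, dur, need in sorted(tasks, key=lambda t: -t[1]):  # longest first
        # pick the `need` resources that free up earliest (balances loading)
        chosen = heapq.nsmallest(need, free_at, key=free_at.get)
        start = max(free_at[r] for r in chosen)  # earliest feasible start
        for r in chosen:
            free_at[r] = start + dur
        plan.append((name, start, chosen))
    return plan
```

Tasks needing a single resource run concurrently on different resources, while a task needing every resource must wait for all of them, exactly the contention a single-processor parallel test system has to manage.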
A domain-analysis-based process was developed to facilitate software reuse. The process starts by systematically analyzing common features and differences in the domain from a software-architecture perspective. The end result of domain analysis is a domain model with a collection of well-defined, well-developed packages and components ready for reuse. The advantages of domain analysis were demonstrated in the development of a warehouse management software system. Furthermore, a number of systems built on this domain model were successfully applied in various industries, including electronics, chemicals, and rubber. The results in this paper support the notion that domain analysis is an effective way to develop efficient component-based software systems with maximized code reuse, minimized code duplication, and enhanced software quality in a substantially reduced development timeframe.
{"title":"Domain-Analysis in Software Reuse - Application in Warehouse Management","authors":"You-xin Meng, Aiguang Young, Xianwei Wang, Kun Shao","doi":"10.1109/SNPD.2007.524","DOIUrl":"https://doi.org/10.1109/SNPD.2007.524","url":null,"abstract":"A domain-analysis based process was developed to facilitate software reuse. This process starts with analyzing domain common features/differences systematically based on software architectural considerations. The end result of domain-analysis is a domain model with a collection of well-defined and well-developed packages/components ready for reuse. The advantages of domain-analysis were demonstrated in warehouse management software system development process. Furthermore, a number of systems integrated with this domain model were successfully applied to various industries, such as electronics, chemical, and rubber industries. The results in this paper supported the notion that domain-analysis is an effective way to develop efficient component-based software system with maximized code reuse, minimized code duplication, and enhanced software quality in a substantially reduced development timeframe.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125239630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}