Zhitao Guan, Yuanda Cao, M. H. Durad, Liehuang Zhu
P2P systems suffer heavily from the free-riding problem, i.e., some participants consume more resources than they contribute. To deal with this problem efficiently, an incentive model based on peer credit is proposed. Each peer in this model is a credit entity. By introducing a reward method into the model, peers allocate resources according to the credits of requesting peers so as to maximize their own credits. A backtracking algorithm is suggested to solve the problem of maximizing credit rewards, and an attenuation method is introduced to avoid credit inflation. A scheme for applying the credit model in hybrid P2P networks is also elaborated. Experimental results show that the model effectively controls free riding and improves the efficiency of the system.
{"title":"An Efficient Hybrid P2P Incentive Scheme","authors":"Zhitao Guan, Yuanda Cao, M. H. Durad, Liehuang Zhu","doi":"10.1109/SNPD.2007.180","DOIUrl":"https://doi.org/10.1109/SNPD.2007.180","url":null,"abstract":"P2P systems have suffered a lot from free riding problem i.e. some participants consume more resources than they contribute. In order to deal with this problem efficiently, an incentive model based on peer credit has been proposed. Each peer in this model is a credit entity. Through introducing reward method to the model, peers will allocate resources according to the credits of requesting peers to maximize their own credits. A backtracking algorithm has been suggested to solve the issue of maximizing credit rewards. To avoid inflation of the credit, attenuation method is introduced. The scheme how to apply the credit model in hybrid P2P networks has also been elaborated in this work. The experimental results show that the model can effectively control free riding and improve efficiency of the system.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124302855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
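The two mechanisms named in the abstract, credit-proportional allocation and credit attenuation, can be sketched as follows. The proportional-sharing rule, the multiplicative decay form, and all names here are illustrative assumptions, not the paper's actual formulas:

```python
def allocate(capacity, requester_credits):
    """Split a peer's capacity among requesters in proportion to their credits
    (hypothetical sharing rule; the paper uses a backtracking algorithm)."""
    total = sum(requester_credits.values())
    if total == 0:
        # No credit anywhere: share equally so newcomers are not starved.
        share = capacity / len(requester_credits)
        return {p: share for p in requester_credits}
    return {p: capacity * c / total for p, c in requester_credits.items()}

def attenuate(credit, decay=0.9):
    """Decay a peer's credit each period so hoarded credit loses value,
    which counteracts credit inflation (assumed multiplicative form)."""
    return credit * decay
```

For example, `allocate(100.0, {"a": 30.0, "b": 10.0})` gives peer `a` three quarters of the capacity, and repeated `attenuate` calls erode idle balances.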
A tristate approach (TA) for image denoising is presented, aimed at the removal of salt-and-pepper noise. The novelty of this method is that it opens a new route in the field of image restoration. The tristate algorithm focuses on removing and restoring noisy speckles while avoiding the blurring and averaging of edges and noise-free pixels, in a way different from other known algorithms. Each noisy pixel is replaced by an estimated value: the weighted mean of the pixels neighboring the noisy pixel or of the four previously iterated pixels. This paper describes, analyzes, and compares several methods and results for removing noise from an image. The experiments were performed by adding salt-and-pepper noise to an original image.
{"title":"A Tristate Approach Based on Weighted Mean and Backward Iteration","authors":"Yanhua Ma, Chuanju Liu, Haiying Sun","doi":"10.1109/SNPD.2007.111","DOIUrl":"https://doi.org/10.1109/SNPD.2007.111","url":null,"abstract":"A tristate approach (TA) for image denoising processing is presented; the noise is aimed at the presence of pepper-and-salt noise. The newness of this method is that it develops a new route in the field of image restoration. The tristate approach algorithm focuses on the removal and restoration of the noisy speckles and avoids blurring and averaging edges and non-noise pixels in a way different from other known algorithms. Any noisy pixel is replaced by an estimated value. This value is the weighted mean of the pixels neighboring to the noisy pixel or the four iteration pixels got before it. This paper describes, analyzes and compares several methods and results of removing noise from an image. We have performed the experiments by adding Salt-and-Pepper in an original image.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114849553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
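The core idea, leave noise-free pixels untouched and replace impulses with a mean of trusted neighbors, can be sketched as below. This simplified version uses an unweighted mean of the 8-neighborhood and treats exactly 0/255 as impulses; the paper's weighting scheme and backward-iteration pixels are not reproduced here:

```python
def denoise_pixel(img, i, j, lo=0, hi=255):
    """If img[i][j] is an impulse (salt or pepper), replace it with the mean
    of its non-impulse 8-neighbours; otherwise leave it untouched."""
    v = img[i][j]
    if v not in (lo, hi):
        return v  # non-noise pixels pass through, so edges are not blurred
    neigh = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < len(img) and 0 <= nj < len(img[0]):
                n = img[ni][nj]
                if n not in (lo, hi):  # only trust noise-free neighbours
                    neigh.append(n)
    return sum(neigh) / len(neigh) if neigh else v
```

The tristate decision is visible in the control flow: pass through, estimate from neighbors, or (when every neighbor is also an impulse) keep the original value.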
Because of the complexity of data sets with mixed attributes, few traditional clustering algorithms are suitable for this kind of data, and their clustering quality is poor. K-prototypes clustering is one of the most commonly used data-mining methods for such data. Borrowing ideas from multiple-classifier combination, this paper uses k-prototypes as the base clustering algorithm to design a multi-level clustering ensemble algorithm that adaptively selects attributes for re-clustering. Comparison experiments on the Adult data set from the UCI machine learning repository show very competitive results, and the proposed method is suitable for data editing.
{"title":"A New Supervised Clustering Algorithm for Data Set with Mixed Attributes","authors":"Shijin Li, Yuelong Zhu, Jing Liu, Xiaohu Zhang","doi":"10.1109/SNPD.2007.360","DOIUrl":"https://doi.org/10.1109/SNPD.2007.360","url":null,"abstract":"Because of the complexity of data set with mixed attributes, the traditional clustering algorithms appropriate for this kind of dataset are few and the effect of clustering is not good. K-prototype clustering is one of the most commonly used methods in data mining for this kind of data. We borrow the ideas from the multiple classifiers combing technology, use k- prototype as the basis clustering algorithm to design a multi-level clustering ensemble algorithm in this paper, which adoptively selects attributes for re-clustering. Comparison experiments on Adult data set from UCI machine learning data repository show very competitive results and the proposed method is suitable for data editing.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116450631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
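What makes k-prototypes suitable for mixed attributes is its dissimilarity measure: squared Euclidean distance on numeric attributes plus a weighted simple-matching count on categorical ones. A minimal sketch of that standard measure (function and parameter names are ours):

```python
def kproto_distance(x, y, num_idx, cat_idx, gamma=1.0):
    """k-prototypes dissimilarity between records x and y: squared Euclidean
    distance over numeric attribute indices plus gamma times the number of
    mismatched categorical attributes."""
    num = sum((x[i] - y[i]) ** 2 for i in num_idx)
    cat = sum(1 for i in cat_idx if x[i] != y[i])
    return num + gamma * cat
```

The weight `gamma` balances the two attribute types; tuning it per level is one place an ensemble like the paper's could adapt.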
As the size and complexity of cluster systems grow, failure rates accelerate dramatically. To reduce the damage caused by failures, it is desirable to identify potential failures ahead of their occurrence. In this paper, we survey the state of the art in failure prediction for cluster systems. The characteristics of failures in cluster systems are addressed, and some statistical results are shown. We explore ways of collecting and preprocessing data for failure prediction, and suggest a procedure for preprocessing the records in automatically generated log files. The main ideas of five prediction methods (statistic-based thresholds, time-series analysis, rule-based classification, Bayesian network models, and semi-Markov process models) are analyzed in turn. In addition, concerning accuracy and practicality, we present five metrics for evaluating failure prediction techniques and compare the five techniques against these metrics.
{"title":"A Survey on Failure Prediction of Large-Scale Server Clusters","authors":"Zhenghua Xue, Xiaoshe Dong, Siyuan Ma, W. Dong","doi":"10.1109/SNPD.2007.284","DOIUrl":"https://doi.org/10.1109/SNPD.2007.284","url":null,"abstract":"As the size and complexity of cluster systems grows, failure rates accelerate dramatically. To reduce the disaster caused by failures, it is desirable to identify the potential failures ahead of their occurrence. In this paper, we survey the state of the art in failure prediction of cluster systems. The characteristic of failures in cluster systems are addressed, and some statistic results are shown. We explore the ways of the collection and preprocessing of data for failure prediction, and suggest a procedure for preprocessing the records in automatically generated log files. Focused on the main idea of five prediction methods, including statistic based threshold, time series analysis, rule-based classification, Bayesian network models and semi-Markov process models, are analyzed respectively. In addition, concerning the accuracy and practicality, we present five metrics for evaluating the failure prediction techniques and compare the five techniques with the five metrics.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122040964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
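The abstract does not list the survey's five metrics, but failure-prediction evaluation in this literature commonly rests on precision, recall, and their F-measure over predicted-versus-actual failure events. A sketch of those standard definitions (not necessarily the paper's exact metric set):

```python
def precision_recall(tp, fp, fn):
    """Precision, recall, and F-measure from counts of true-positive,
    false-positive, and false-negative failure predictions."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```

A predictor that raises 10 alarms, 8 of them correct, against 16 actual failures scores precision 0.8 but recall only 0.5, which is why both must be reported together.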
A token is a kind of symbol with certain controlling powers. A special token-ring structure is designed according to the characteristics of the token; it consists of a controlling token and a sub-token. The controlling token carries control information such as the clustering center, while the sub-token searches the data and marks qualified records. Based on this token-ring structure, a new clustering algorithm is designed that can modify the class structure dynamically during clustering. Theoretical and practical analysis shows that the algorithm performs well on several indexes, such as scalability, accuracy, and speed.
{"title":"A New Clustering Algorithm Based on Token Ring","authors":"Yongquan Liang, Jiancong Fan, Zhongying Zhao","doi":"10.1109/SNPD.2007.340","DOIUrl":"https://doi.org/10.1109/SNPD.2007.340","url":null,"abstract":"Token is a kind of symbol. It has some controlling powers. A special token-ring structure is designed according to the characteristics of the token. The token-ring structure consists of a controlling token and a sub-token. The controlling token includes some controlling information such as clustering center. The sub-token is designed to search for data and mark the qualified data. Based on the token-ring structure, a new clustering algorithm is designed. The algorithm can modify class structure dynamically in the process of clustering. The theoretical and practical analysis showed that the algorithm had good performances in some indexes, such as scalability, accuracy and velocity.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122137229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For a large website built on a Web server cluster, how to organize and distribute web documents is a challenging problem. In this paper, we propose a strategy for distributing web documents in a web server cluster whose aim is to reduce the system's average response time. The strategy uses a queuing model to analyze the cluster system and translates the document distribution problem into a 0-1 integer programming problem. For this kind of 0-1 integer programming problem, we propose a chaotic searching algorithm. The algorithm lets many isolated chaotic variables search along their own trajectories, so the corresponding 0-1 distribution matrix built from these variables can visit every possible distribution; given enough time, it can therefore find the global optimum. Simulation tests show that the chaotic searching algorithm can find the globally optimal solution.
{"title":"Documents Distribution Strategy Based on Queuing Model and Chaotic Searching Algorithm in Web Server Cluster","authors":"Zhi Xiong, Chengcheng Guo","doi":"10.1109/SNPD.2007.419","DOIUrl":"https://doi.org/10.1109/SNPD.2007.419","url":null,"abstract":"For a large website adopting Web server cluster, how to organize and distribute web documents is a challenging problem. In this paper, we propose a strategy to distribute web documents in web server cluster, whose aim is to reduce system 's average response time. The strategy uses queuing model to analyze cluster system, and translates the document distribution problem into a 0-1 integer programming problem. Aimed at such kind of 0-1 integer programming problem, we propose a chaotic searching algorithm to solve it. The chaotic searching algorithm lets many isolated chaotic variables search in their tracks, so the corresponding 0-1 distribution matrix built by these variables can experience every possible distribution, thereby it can find the global optimal solution in enough long time. Simulation tests show that the chaotic searching algorithm can find the global optimal solution.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116909853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
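The abstract's idea of driving isolated chaotic variables and reading off a 0-1 assignment can be sketched with the logistic map, a common choice for chaotic search. The thresholding rule, the seeding scheme, and all names are assumptions; the paper's actual mapping from variables to the distribution matrix is not specified in the abstract:

```python
def chaotic_search(cost, n, steps=200, seed=0.37):
    """Drive n logistic-map variables, threshold each at 0.5 to form a 0-1
    assignment, and keep the assignment with the lowest cost seen so far."""
    # Distinct starting points in (0,1); x' = 4x(1-x) is chaotic there.
    xs = [(seed * (k + 1)) % 1.0 for k in range(n)]
    best, best_cost = None, float("inf")
    for _ in range(steps):
        xs = [4.0 * x * (1.0 - x) for x in xs]
        bits = [1 if x > 0.5 else 0 for x in xs]
        c = cost(bits)
        if c < best_cost:
            best, best_cost = bits, c
    return best, best_cost
```

Because the trajectories are ergodic over (0,1), the induced 0-1 vectors keep visiting new assignments, which is the property the paper relies on for eventually reaching the global optimum.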
Slicing is one of the major operations in on-line analytical processing, which plays an important role in decision support applications. Based on the data cube, this paper proposes a method that extracts the inner rules of movement by mining the maximum singular values of the slices. Algebraic theory proves the method feasible, and numerical experiments demonstrate that it is efficient.
{"title":"Slices Mining Based on Singular Value","authors":"Lijie Zhang, Haili Yin, Hui Liu","doi":"10.1109/SNPD.2007.268","DOIUrl":"https://doi.org/10.1109/SNPD.2007.268","url":null,"abstract":"Slice is one of the major operations in on-line analysis processing which has played an important role in the application of decision support. Based on data cube, by mining the maximum singular value of the slices, a method was proposed in this paper to extract the inner rules of movement. Algebraic theories proved that it is feasible. And the numerical experiment also demonstrated that it is efficient.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124760707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
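The quantity mined per slice, the maximum singular value, can be estimated without a full SVD. A minimal sketch using power iteration on A^T A (our own implementation, not the paper's algorithm):

```python
def max_singular_value(A, iters=100):
    """Estimate the largest singular value of a matrix (list of rows) by
    power iteration on A^T A; sufficient for ranking data-cube slices by
    their dominant singular value."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][k] * Av[i] for i in range(m)) for k in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            return 0.0  # zero matrix (or v landed in the null space)
        v = [x / norm for x in w]  # converges to the top right singular vector
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return sum(x * x for x in Av) ** 0.5  # ||A v|| for unit v
```

In practice one would use a library SVD; the sketch only shows what "maximum singular value of a slice" computes.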
Minimum cross entropy thresholding (MCET) has proven to be an efficient method for bilevel image segmentation. However, the method becomes computationally intensive when extended to multilevel thresholding. This paper first employs a recursive programming technique that reduces the cost of computing the MCET fitness function by an order of magnitude. Then, a quantum particle swarm optimization (QPSO) algorithm is proposed to search for near-optimal MCET thresholds. Experimental results show that the proposed QPSO-based algorithm achieves good segmentation at lower computational cost.
{"title":"Multilevel Minimum Cross Entropy Threshold Selection Based on Quantum Particle Swarm Optimization","authors":"Yong Zhao, Z. Fang, Kanwei Wang, Hui Pang","doi":"10.1109/SNPD.2007.85","DOIUrl":"https://doi.org/10.1109/SNPD.2007.85","url":null,"abstract":"The minimum cross entropy thresholding (MCET) has been proven as an efficient method in image segmentation for bilevel thresholding. However, this method is computationally intensive when extended to multilevel thresholding. This paper first employs a recursive programming technique which can reduce an order of magnitude for computing the MCET fitness function. Then, a quantum particle swarm optimization (QPSO) algorithm is proposed for searching the near- optimal MCET thresholds. The experimental results show that the proposed QPSO-based algorithm can get ideal segmentation result with less computation cost.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129488181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
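The bilevel MCET objective being minimized can be sketched from the usual Li-and-Lee cross-entropy criterion; this exhaustive version is only a baseline, without the paper's recursive speed-up or QPSO search, and the details here are our reconstruction:

```python
import math

def mcet_cost(hist, t):
    """Cross-entropy criterion for threshold t on a grayscale histogram:
    D(t) = sum_{1<=g<t} g*h(g)*log(g/mu1) + sum_{g>=t} g*h(g)*log(g/mu2),
    where mu1, mu2 are the class mean gray levels (level 0 is skipped so
    the logarithms stay defined)."""
    lo = [(g, h) for g, h in enumerate(hist) if 1 <= g < t and h > 0]
    hi = [(g, h) for g, h in enumerate(hist) if g >= max(t, 1) and h > 0]

    def side(part):
        if not part:
            return 0.0
        mu = sum(g * h for g, h in part) / sum(h for g, h in part)
        return sum(g * h * math.log(g / mu) for g, h in part)

    return side(lo) + side(hi)

def best_threshold(hist):
    """Exhaustive bilevel search over all thresholds; evaluating this cost
    for every threshold tuple is what becomes expensive at multiple levels."""
    return min(range(1, len(hist)), key=lambda t: mcet_cost(hist, t))
```

For k thresholds the search space grows combinatorially, which is why the paper replaces exhaustion with QPSO.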
Embedded digital instrumentation (EDI) software systems are becoming diversified and more complex, with variable requirements. Component-based frameworks (CBF) built on object-oriented (OO) technologies provide better reuse. This paper presents a CBF for EDI software systems and demonstrates that it is convenient for reusing components and constructing an EDI application. The system architecture and framework design of a multimedia instrument are presented as an example. To achieve a maintainable, flexible, and extensible design, design patterns are employed in component and framework development; the strategy, observer, command, and composite patterns are discussed and implemented in examples.
{"title":"A Component-based Framework for Embedded Digital Instrumentation Software with Design Patterns","authors":"Xia Yixing, Chen Yao-wu","doi":"10.1109/SNPD.2007.9","DOIUrl":"https://doi.org/10.1109/SNPD.2007.9","url":null,"abstract":"The embedded digital instrumentations (EDI) software systems become diversified and more complex with variable requirements. Component-based frameworks (CBF) which are built on object-oriented (OO) technologies provide a better reuse. A CBF for EDI software systems is presented in this paper, and it will be demonstrated that it is convenient to reuse the components and to construct an EDI application. The system architecture and framework design of multimedia instrumentations will be presented as an example. For a maintainable, flexible and extensible design, design patterns are employed in the components and framework development; strategy, observer, command and composite patterns are discussed and implemented in examples.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129508882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
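Of the patterns listed, the strategy pattern is the one that most directly enables swapping instrument behavior without modifying framework components. A minimal sketch in Python (the paper's components would be in an embedded OO language; all class names here are hypothetical):

```python
class DisplayStrategy:
    """Strategy interface: how an instrument component renders a reading."""
    def render(self, value):
        raise NotImplementedError

class NumericDisplay(DisplayStrategy):
    def render(self, value):
        return f"{value:.2f}"  # plain numeric readout

class BarDisplay(DisplayStrategy):
    def render(self, value):
        return "#" * int(value)  # crude bar-graph readout

class Instrument:
    """Context: the rendering strategy is injected and can be swapped at
    run time, so the framework component needs no modification."""
    def __init__(self, strategy):
        self.strategy = strategy

    def show(self, value):
        return self.strategy.render(value)
```

A new display type is added by writing one more `DisplayStrategy` subclass, which is exactly the kind of reuse-through-extension a component framework aims for.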
Generalization of building polygon maps is a challenging problem worldwide. This paper discusses an automatic generalization algorithm for urban building polygons in a GIS environment. The main research focuses on rectangular adjustment of building polygons, simplification of local concave and convex parts of large polygons, local exaggeration of small polygons, and polygon cluster aggregation. In the practical implementation, we sort the building polygons by shape and characteristics and simplify each class with a different method. Experiments show that this algorithm produces a reasonable building polygon simplification while preserving the morphological features of the blocks well.
{"title":"Research on Building Polygon Map Generalization Algorithm","authors":"Zhong Xie, Zi Ye, Liang Wu","doi":"10.1109/SNPD.2007.414","DOIUrl":"https://doi.org/10.1109/SNPD.2007.414","url":null,"abstract":"Generalization of the building polygon maps is a challenging problem within the world range. The paper discusses the automatic generalization algorithm of urban building polygon in GIS environment. The main research focuses on the rectangular adjustment of building polygon, partial concave and convex simplification of the large polygon, partial exaggeration of the small polygon and polygon cluster aggregation. In the practical development, we sort the different building polygon by its shape and characteristic, and simplify it with different methods. The experiment proves that the building polygon simplification products a reasonable result through this algorithm and the morph feature of block has been well preserved.","PeriodicalId":197058,"journal":{"name":"Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128163861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}