Title: A Distributed-Processing System for Accelerating Biological Research Using Data-Staging
Authors: Y. Kido, S. Seno, S. Date, Y. Takenaka, H. Matsuda
DOI: 10.2197/IPSJDC.4.250 (Ipsj Digital Courier, journal article, 2008-03-15)

Abstract: The number of biological databases has been increasing rapidly as a result of progress in biotechnology. As the amount and heterogeneity of biological data increase, it becomes more difficult to manage the data in a few centralized databases. Moreover, the number of sites storing these databases is growing, and their geographic distribution is becoming wider. In addition, biological research tends to require a large amount of computational resources, i.e., a large number of computing nodes, so computational demand has been increasing with the rapid progress of biological research. Methods that enable computing nodes to use such widely distributed database sites effectively are therefore desired. In this paper, we propose a method for providing data from the database sites to computing nodes. Since it is difficult to decide in advance which program will run on a node and which data will be requested as its inputs, we introduce the notion of “data-staging” in the proposed method. Data-staging dynamically searches the database sites for the input data and transfers them to the node where the program runs. We have developed a prototype system with data-staging using grid middleware. The effectiveness of the prototype system is demonstrated by measuring the execution time of a similarity search of several hundred gene sequences against 527 prokaryotic genomes.
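The staging step described above can be sketched as a lookup-then-transfer routine. This is a hypothetical illustration only: the site catalogs, host names, and `fetch` callback are made-up stand-ins, not the paper's actual grid-middleware API.

```python
# Hypothetical sketch of the data-staging idea: before a job runs on a
# compute node, the required input is located among distributed database
# sites and copied to local storage on that node.

def stage_data(dataset_id, sites, fetch):
    """Search the sites for dataset_id and fetch it to local storage."""
    for site in sites:
        if dataset_id in site["catalog"]:
            return fetch(site["host"], dataset_id)
    raise LookupError(f"{dataset_id} not found on any site")

if __name__ == "__main__":
    sites = [
        {"host": "siteA", "catalog": {"genome_001"}},
        {"host": "siteB", "catalog": {"genome_002", "genome_003"}},
    ]
    # Stand-in for a real transfer mechanism (e.g., one provided by grid middleware).
    fetch = lambda host, ds: f"/local/cache/{ds} (from {host})"
    print(stage_data("genome_002", sites, fetch))
```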
Title: A Generalized Hidden Markov Model Approach to Transmembrane Region Prediction with Poisson Distribution as State Duration Probabilities
Authors: T. Kaburagi, Takashi Matsumoto
DOI: 10.2197/IPSJDC.4.193 (Ipsj Digital Courier, journal article, 2008-03-15)

Abstract: We present a novel algorithm to predict transmembrane regions from a primary amino acid sequence. Previous studies have shown that the Hidden Markov Model (HMM) is one of the most powerful tools for predicting transmembrane regions; however, a conceptual drawback of the standard HMM is that the state duration, i.e., the duration for which the hidden dynamics remains in a particular state, follows a geometric distribution. Real data, however, do not always exhibit such a geometric distribution. The proposed algorithm utilizes a Generalized Hidden Markov Model (GHMM), an extension of the HMM, to cope with this problem. In the GHMM, the state duration probability can be any discrete distribution, including a geometric distribution. The proposed algorithm employs a state duration probability based on a Poisson distribution. We consider the two-dimensional vector trajectory of the hydropathy index and the charge associated with each amino acid, instead of the 20-letter symbol sequences. A Monte Carlo method (the Forward/Backward sampling method) is also adopted for the transmembrane region prediction step. Prediction accuracies on publicly available data sets show that the proposed algorithm yields reasonably good results when compared against some existing algorithms.
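The modeling change at the heart of the abstract can be made concrete by contrasting the two duration distributions. This is a minimal sketch, not the paper's model: the self-loop probability and mean duration below are illustrative assumptions.

```python
# In a standard HMM, a state's self-transition implies a geometric duration
# distribution, which always peaks at d = 1. A GHMM can instead use any
# discrete duration distribution; here, a Poisson, which peaks near its mean.

import math

def geometric_duration(d, p_stay):
    """P(duration = d) under a standard HMM self-loop with probability p_stay."""
    return (p_stay ** (d - 1)) * (1.0 - p_stay)

def poisson_duration(d, lam):
    """P(duration = d) under a Poisson(lam) state duration model."""
    return math.exp(-lam) * lam ** d / math.factorial(d)

if __name__ == "__main__":
    # A transmembrane helix is typically around 21 residues long; a Poisson
    # duration can peak there, while a geometric one always decays from d = 1.
    durations = range(1, 40)
    g = [geometric_duration(d, 0.95) for d in durations]
    p = [poisson_duration(d, 21) for d in durations]
    print("geometric mode:", 1 + g.index(max(g)))  # always d = 1
    print("poisson mode:  ", 1 + p.index(max(p)))  # near the mean
```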
Title: Maintaining Multiple Populations with Different Diversities for Evolutionary Optimization Based on Probability Models
Authors: Takayuki Higo, K. Takadama
DOI: 10.2197/IPSJDC.4.268 (Ipsj Digital Courier, journal article, 2008-03-15)

Abstract: This paper proposes a novel method, Hierarchical Importance Sampling (HIS), that can be used instead of population convergence in evolutionary optimization based on probability models (EOPM), such as estimation of distribution algorithms and cross-entropy methods. In HIS, multiple populations with different diversities are maintained simultaneously, and the probability model of one population is built through importance sampling by mixing with the other populations. This mechanism allows populations to escape from local optima. Experimental comparisons reveal that HIS outperforms general EOPM.
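The core ingredient HIS relies on can be illustrated in isolation: samples drawn under one population's probability model (the proposal) are reweighted to estimate statistics under another population's model (the target). The Gaussian models and parameters below are illustrative assumptions, not the paper's actual HIS construction, which is more elaborate.

```python
# Self-normalized importance sampling across two population models: a broad,
# diverse population serves as the proposal, a narrower one as the target.

import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_weighted_mean(samples, target, proposal):
    """Estimate E_target[x] from samples drawn under the proposal density."""
    weights = [target(x) / proposal(x) for x in samples]
    return sum(w * x for w, x in zip(weights, samples)) / sum(weights)

if __name__ == "__main__":
    random.seed(0)
    proposal = lambda x: normal_pdf(x, 0.0, 3.0)   # diverse population
    target = lambda x: normal_pdf(x, 1.0, 1.0)     # concentrated population
    xs = [random.gauss(0.0, 3.0) for _ in range(20000)]
    est = importance_weighted_mean(xs, target, proposal)
    print(f"estimated target mean: {est:.2f}")  # close to the target mean 1.0
```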
Title: Exploring Factors Effecting the Continuance of Purchasing Behavior in Internet Shopping: Extrinsic Benefits and Intrinsic Benefits
Authors: K. Atchariyachanvanich, H. Okada, N. Sonehara
DOI: 10.2197/IPSJDC.4.91 (Ipsj Digital Courier, journal article, 2008-02-15)

Abstract: Previous research has examined how extrinsic and intrinsic factors influence customers to shop online. However, the impact of these factors on customer retention in Internet shopping has not been examined. This study is one of the few attempts to investigate the perceived benefit factors affecting customers' continued purchasing of items through the Internet. A multiple regression analysis of an online questionnaire completed by 1,111 online customers shows that extrinsic benefits, measured in terms of time and money savings, social adjustment, and self-enhancement, as well as intrinsic benefits, measured in terms of pleasure, novelty, and fashion involvement, have strong effects on the continuance of purchasing. Our findings indicate that customer retention in Internet shopping can be promoted by guaranteeing not only extrinsic benefits but also intrinsic benefits. This study discusses relevant techniques for providing those benefits to customers and guidelines for future research.
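The analysis style used in the study, multiple regression of a continuance score on perceived-benefit factors, can be sketched with ordinary least squares. The data below are synthetic stand-ins; the paper's actual questionnaire items, scales, and coefficients are not reproduced here.

```python
# Ordinary least squares via the normal equations, in pure Python.

import random

def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS coefficients (intercept first) for y regressed on the columns of X."""
    Z = [[1.0] + row for row in X]
    k = len(Z[0])
    XtX = [[sum(z[i] * z[j] for z in Z) for j in range(k)] for i in range(k)]
    Xty = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(k)]
    return solve(XtX, Xty)

if __name__ == "__main__":
    random.seed(1)
    n = 1111  # same sample size as the questionnaire
    X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
    # Synthetic "true" effects: savings 0.5, social adjustment 0.3, pleasure 0.7.
    y = [0.5 * a + 0.3 * b + 0.7 * c + random.gauss(0, 0.5) for a, b, c in X]
    print([round(v, 2) for v in ols(X, y)])
```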
Title: Merging String Sequences by Longest Common Prefixes
Authors: Waihong Ng, K. Kakehi
DOI: 10.2197/IPSJDC.4.69 (Ipsj Digital Courier, journal article, 2008-02-15)

Abstract: We present LCP Merge, a novel algorithm for merging two ordered sequences of strings. LCP Merge substitutes string comparisons with integer comparisons whenever possible, reducing the number of character-wise comparisons as well as the number of key accesses by utilizing the longest common prefixes (LCPs) between the strings. As one application of LCP Merge, we built a string merge sort, which we call LCP Merge sort, by replacing the merging step of recursive merge sort with LCP Merge. When sorting strings, the computational complexity of recursive merge sort tends to be greater than O(n lg n) because string comparisons are generally not constant-time and depend on the properties of the strings. LCP Merge sort, however, improves recursive merge sort to the extent that its computational complexity remains O(n lg n) on average. We performed a number of experiments comparing LCP Merge sort with other string sorting algorithms to evaluate its practical performance, and the results show that LCP Merge sort is efficient even on real-world data.
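The merging idea can be sketched in a simplified, self-contained form: while merging two sorted runs, track how many leading characters each head shares with the last string output. When those two LCP values differ, the head with the longer shared prefix is necessarily the smaller string, so it can be emitted after an integer comparison alone. One simplification to note: in the paper the intra-run LCPs come from the recursive merge sort structure, whereas this sketch recomputes them directly.

```python
def lcp(x, y, start=0):
    """Longest common prefix length of x and y, scanning from `start`."""
    i, n = start, min(len(x), len(y))
    while i < n and x[i] == y[i]:
        i += 1
    return i

def lcp_merge(a, b):
    """Merge two sorted lists of strings, using LCPs to skip comparisons."""
    out = []
    ia = ib = la = lb = 0  # la/lb: LCP of a[ia]/b[ib] with the last output
    while ia < len(a) and ib < len(b):
        if la != lb:
            take_a = la > lb            # decided by integer comparison only
            l = min(la, lb)             # LCP of the two heads
        else:
            l = lcp(a[ia], b[ib], la)   # compare characters from the offset
            take_a = a[ia][l:] <= b[ib][l:]
        if take_a:
            out.append(a[ia]); ia += 1
            lb = l                      # LCP of b's head with the new output
            la = lcp(a[ia], out[-1]) if ia < len(a) else 0
        else:
            out.append(b[ib]); ib += 1
            la = l
            lb = lcp(b[ib], out[-1]) if ib < len(b) else 0
    out.extend(a[ia:])
    out.extend(b[ib:])
    return out

if __name__ == "__main__":
    print(lcp_merge(["apple", "applet", "banana"], ["apex", "apply", "bandana"]))
```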
Title: d-ACTM/VT: A Distributed Virtual AC Tree Detection Method
Authors: N. Kawaguchi, H. Shigeno, Ken-ichi Okada
DOI: 10.2197/IPSJDC.4.79 (Ipsj Digital Courier, journal article, 2008-02-15)

Abstract: In this paper, we propose d-ACTM/VT, a network-based worm detection method that effectively detects hit-list worms using distributed virtual AC tree detection. d-ACTM was previously proposed to detect a class of hit-list worms called Silent worms in a distributed manner. d-ACTM detects the existence of worms by detecting tree structures whose edges are infection connections. Some undetected infection connections, however, can divide the tree structures into small trees and degrade the detection performance. To address this problem, d-ACTM/VT aggregates the divided trees, in a distributed manner, into a tree called a Virtual AC tree, and utilizes the tree size for detection. Simulation results show that d-ACTM/VT reduces the number of hosts infected before detection by 20% compared to d-ACTM.
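The tree-size criterion can be illustrated with a simplified, centralized stand-in: suspected infection connections arrive as edges, connected fragments are aggregated (here with union-find rather than the paper's distributed protocol), and an alarm is raised once an aggregated tree grows past a threshold. The hosts, edges, and threshold are illustrative assumptions.

```python
class UnionFind:
    """Union-find with union by size, tracking component sizes."""
    def __init__(self):
        self.parent, self.size = {}, {}

    def find(self, x):
        self.parent.setdefault(x, x)
        self.size.setdefault(x, 1)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return self.size[rx]
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]
        return self.size[rx]

def detect(edges, threshold):
    """Return the index of the edge at which some aggregated tree reaches threshold."""
    uf = UnionFind()
    for i, (src, dst) in enumerate(edges):
        if uf.union(src, dst) >= threshold:
            return i
    return None

if __name__ == "__main__":
    # Two small fragments joined by a later-observed connection.
    edges = [("h1", "h2"), ("h1", "h3"), ("h4", "h5"), ("h3", "h4")]
    print(detect(edges, threshold=5))  # each fragment alone stays below 5
```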
Title: Web Resource Categorization by Detecting Potential Relations
Authors: Minghua Pei, Kotaro Nakayama, T. Hara, S. Nishio
DOI: 10.2197/IPSJDC.4.103 (Ipsj Digital Courier, journal article, 2008-02-15)

Abstract: Since the Semantic Web is increasing in the size and variety of its resources, it is difficult for users to find the information they really need. Therefore, an efficient and precise method that does not require explicit specifications of Web resources is necessary. In this paper, we propose a novel approach that integrates four processes for Web resource categorization. The processes extract both the explicit relations obtained from ontologies in the traditional way and the potential relations inferred from existing ontologies, by addressing new challenges such as extracting important class names, using WordNet relations, and detecting the methods used to describe Web resources. We evaluated the effectiveness of the approach by applying the categorization method to a Semantic Web search system, and confirmed that the proposed method achieves a notable improvement in categorizing valuable Web resources based on incomplete ontologies.
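One ingredient mentioned above, using lexical relations to bridge incomplete ontologies, can be shown in a toy form: category labels are expanded with WordNet-style synonyms so that a resource whose extracted class names do not literally match a category label can still be placed. The synonym table and class names below are made-up stand-ins, not the paper's actual data.

```python
# Tiny stand-in for a WordNet synonym lookup.
SYNONYMS = {
    "automobile": {"car", "auto", "vehicle"},
    "publication": {"article", "paper", "book"},
}

def categorize(class_names, categories=SYNONYMS):
    """Return category labels whose label or synonyms overlap the class names."""
    names = {n.lower() for n in class_names}
    return sorted(
        label for label, syns in categories.items()
        if label in names or names & syns
    )

if __name__ == "__main__":
    print(categorize(["Car", "Engine"]))       # matched via a synonym
    print(categorize(["Article", "Author"]))
```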
Title: Design and Implementation of an Inter-Device Authentication Framework Guaranteeing Explicit Ownership
Authors: Manabu Hirano, T. Okuda, S. Yamaguchi
DOI: 10.2197/IPSJDC.4.114 (Ipsj Digital Courier, journal article, 2008-02-06)

Abstract: Future networks everywhere will be connected to innumerable Internet-ready home appliances. A device accepting connections over a network must be able to verify the identity of a connecting device in order to prevent device spoofing and other malicious actions. In this paper, we propose a security mechanism for inter-device communication. We state the importance of a mechanism that distinguishes and binds a device's identity and its ownership information in order to realize practical inter-device authentication. In many conventional authentication systems, the relationship between the device's identity and the ownership information is not considered. Therefore, we propose a novel inter-device authentication framework that guarantees this relationship. Our prototype implementation employs a smart card to securely maintain the device's identity, the ownership information, and the access control rules. Our framework efficiently achieves secure inter-device authentication based on the device's identity, and authorization based on the ownership information related to the device. We also show how to apply our smart card system for inter-device authentication to existing standard security protocols.
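The identity/ownership binding the paper argues for can be sketched schematically: a device record couples an authenticated identity with ownership information, and an access decision consults both (identity for authentication, owner for authorization). The field names and rule format are assumptions for illustration; in the paper's prototype such records live on a smart card.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceRecord:
    device_id: str   # authenticated identity (e.g., bound to a key pair)
    owner: str       # ownership information bound to that identity

def authorize(record, registry, rules):
    """Authenticate the claimed identity/ownership binding, then authorize by owner."""
    if registry.get(record.device_id) != record.owner:
        return False  # unknown device, or identity/ownership mismatch
    return record.owner in rules.get("allowed_owners", set())

if __name__ == "__main__":
    registry = {"tv-001": "alice", "hvac-007": "bob"}
    rules = {"allowed_owners": {"alice"}}
    print(authorize(DeviceRecord("tv-001", "alice"), registry, rules))    # True
    print(authorize(DeviceRecord("tv-001", "mallory"), registry, rules))  # False
```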
Title: Automatic Construction of Program Transformation Templates
Authors: Yuki Chiba, Takahito Aoto, Y. Toyama
DOI: 10.2197/IPSJDC.4.44 (Ipsj Digital Courier, journal article, 2008-01-15)

Abstract: Program transformation by templates (Huet and Lang, 1978) is a technique to improve the efficiency of programs. In this technique, programs are transformed according to a given program transformation template. To enhance the variety of program transformation, it is important to introduce new transformation templates. To our knowledge, however, few works discuss the construction of transformation templates. Chiba, et al. (2006) proposed a framework of program transformation by template, based on term rewriting, with automated verification of its correctness. Based on this framework, we propose a method that automatically constructs transformation templates from similar program transformations. The key idea of our method is second-order generalization, an extension of Plotkin's first-order generalization (1969). We give a second-order generalization algorithm and prove its soundness. We then report on an implementation of the generalization procedure and an experiment on the construction of transformation templates.
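For background, the first-order generalization the paper extends (Plotkin's anti-unification) fits in a few lines: two terms are generalized by keeping common function symbols and replacing each mismatching subterm pair with a shared fresh variable. Terms here are tuples `("symbol", subterm, ...)` with strings as leaves; this sketch is only the classical first-order algorithm, not the paper's second-order one.

```python
def lgg(s, t, table=None, counter=None):
    """Least general generalization (anti-unification) of terms s and t."""
    table = {} if table is None else table          # mismatch pair -> variable
    counter = counter if counter is not None else [0]
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same function symbol and arity: generalize argument-wise.
        return (s[0],) + tuple(lgg(a, b, table, counter) for a, b in zip(s[1:], t[1:]))
    if (s, t) not in table:                         # reuse the variable for a repeated pair
        table[(s, t)] = f"X{counter[0]}"
        counter[0] += 1
    return table[(s, t)]

if __name__ == "__main__":
    # lgg(f(a, g(a)), f(b, g(b))) = f(X0, g(X0)): the repeated mismatch a/b
    # is abstracted by a single shared variable.
    print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))
```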
Title: Evaluation of the Integration Effect of Content Location and Request Routing in Content Distribution Networks
Authors: H. Miura, M. Yamamoto
DOI: 10.2197/IPSJDC.4.1 (Ipsj Digital Courier, journal article, 2008-01-15)

Abstract: Recently, content distribution networks (CDNs) have been highlighted as a new network paradigm that can improve the latency of Web access. In CDNs, the content location strategy and request routing techniques are important technical issues. In general, both should be used in an integrated manner, but CDN performance when both technologies are applied has not been evaluated in detail. In this paper, we investigate the effect of integrating these techniques. For request routing, we focus on a technique that applies active network technology, Active Anycast, which improves both network delay and server processing delay. For content distribution, we propose a new strategy, Popularity-Probability, whose aim corresponds with that of Active Anycast. Performance evaluation results show that the integration of Active Anycast and Popularity-Probability maintains stable delay characteristics.
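A popularity-driven content location rule can be sketched as follows. Note the hedge: the paper does not spell out the Popularity-Probability strategy here, so this proportional allocation of replica slots is an illustrative assumption, not the authors' actual algorithm.

```python
def allocate_replicas(popularity, total_replicas):
    """Split total_replicas across items in proportion to request probability."""
    total = sum(popularity.values())
    raw = {k: total_replicas * p / total for k, p in popularity.items()}
    alloc = {k: max(1, int(v)) for k, v in raw.items()}  # at least one copy each
    # Hand any remaining slots to the items with the largest fractional parts.
    leftover = total_replicas - sum(alloc.values())
    for k in sorted(raw, key=lambda k: raw[k] - int(raw[k]), reverse=True):
        if leftover <= 0:
            break
        alloc[k] += 1
        leftover -= 1
    return alloc

if __name__ == "__main__":
    # Zipf-like popularity over four items, ten replica slots in the CDN.
    popularity = {"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}
    print(allocate_replicas(popularity, 10))
```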