This paper presents an improvement to Maekawa's distributed mutual exclusion algorithm. The number of messages required by the improved algorithm is in the range 3M to 5M per critical section invocation, where M is the number of intersection nodes in the system. This improvement introduces no additional overhead over the existing Maekawa's algorithm, which requires 3K to 5K messages per critical section invocation, where K is the number of nodes in the voting district (M ≤ K). The reduction in the number of messages is achieved by restricting the communication of a node that wants to execute its critical section to the intersection nodes of its voting district, without modifying the basic structure of the algorithm. The improvement preserves all the advantages of the original Maekawa's algorithm.
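Maekawa's voting districts can be illustrated with the classic √N × √N grid construction; the sketch below is our own illustration of the standard quorum scheme (not the paper's improved variant), showing why any two districts intersect and why the district size is K = 2√N − 1:

```python
import math

def grid_quorums(n):
    """Maekawa-style voting districts: arrange n nodes (n a perfect
    square) in a sqrt(n) x sqrt(n) grid; node i's district is its row
    plus its column, so any two districts share at least one node."""
    side = int(math.isqrt(n))
    assert side * side == n, "n must be a perfect square"
    quorums = []
    for i in range(n):
        r, c = divmod(i, side)
        row = {r * side + j for j in range(side)}
        col = {j * side + c for j in range(side)}
        quorums.append(row | col)
    return quorums

qs = grid_quorums(16)
K = len(qs[0])  # district size: 2 * sqrt(16) - 1 = 7
# pairwise intersection is what guarantees mutual exclusion
intersecting = all(qs[a] & qs[b] for a in range(16) for b in range(16))
```

In the best case a node exchanges REQUEST/REPLY/RELEASE with every district member, i.e. 3K messages; the paper's point is that talking only to the M intersection nodes lowers this to 3M.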
{"title":"An Improved Algorithm for Distributed Mutual Exclusion by Restricted Message Exchange in Voting Districts","authors":"A. Kumar, Pradhan Bagur Umesh","doi":"10.1109/ICIT.2008.51","DOIUrl":"https://doi.org/10.1109/ICIT.2008.51","url":null,"abstract":"This paper presents an improvement to the Maekawa¿s distributed mutual exclusion algorithm. The number of messages required by the improvised algorithm is in the range 3 M to 5 M per critical section invocation where M is the number of Intersection nodes in the system. This improvement does not introduce any additional overheads over the existing Maekawa¿s algorithm which requires 3 K to 5 K number of messages per critical section invocation, where K is the number of nodes in the voting district (M ¿ K). This reduction in number of messages is achieved by restricting the communication of any node which wants to execute critical section with the Intersection nodes of the voting district, without causing any modification of the basic structure of the algorithm. This improvisation preserves all the advantages of the original Maekawa¿s algorithm.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"290 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125889544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigate the basics of tensor based hypertext representation and perform experiments with this novel hypertext representation model. Most documents have an inherent hierarchical structure that makes multidimensional representations, such as those offered by tensor objects, desirable. We focus on the advantages of the Tensor Space Model, in which documents are represented using second-order tensors. We exploit the local structure and neighborhood recommendation encapsulated by the proposed representation. We define a distance metric on the tensor space of hypertext documents, which generalizes the distance metric defined on the vector space model. Our results provide evidence that the tensor based model is considerably more effective for clustering and classification of hypertext documents than the traditional vector based model.
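As a minimal illustration of the idea (our own, with hypothetical term counts): a second-order tensor is a matrix, and the Frobenius norm yields a distance that reduces to the usual vector-space Euclidean distance when the tensor has a single row:

```python
import math

def frobenius_distance(A, B):
    """Distance between two second-order tensors (matrices); for a
    1 x n tensor this is exactly the Euclidean distance of the
    vector space model."""
    return math.sqrt(sum((a - b) ** 2
                         for row_a, row_b in zip(A, B)
                         for a, b in zip(row_a, row_b)))

# hypothetical 2 x 3 term-frequency tensors:
# rows = structural units (e.g. title, body), columns = terms
d1 = [[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]]
d2 = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
dist = frobenius_distance(d1, d2)  # sqrt(0+1+4 + 0+0+1) = sqrt(6)
```

The row dimension is where the hierarchical structure of a hypertext document enters; the vector space model flattens it away.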
{"title":"Tensor Space Model for Hypertext Representation","authors":"S. Saha, C. A. Murthy, S. Pal","doi":"10.1109/ICIT.2008.13","DOIUrl":"https://doi.org/10.1109/ICIT.2008.13","url":null,"abstract":"We investigate the basics of tensor based hypertext representation and perform experiments this novel hypertext representation model. Most documents have an inherent hierarchical structure that render the desirable use of multidimensional representations such as those offered by tensor objects. We focus on the advantages of Tensor Space Model, in which documents are represented using second-order tensors. We exploit the local-structure and neighborhood recommendation encapsulated by the proposed representation. We define the distance metric on tensor space of hypertext documents, which is a generalization of distance metric defined on vector space model. Our results provide evidence that tensor based model is very efficient for clustering and classification of hypertext documents compared to traditional vector based model.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114847495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For an on-demand routing protocol in mobile ad hoc networks, the parameters that ought to be optimized include end-to-end delay and routing overhead. These parameters are directly related to broken links and route discoveries, or in other words to the staleness of the route cache. To minimize the need for route discovery, the cache has to be checked periodically, stale entries have to be removed, and valid entries added. In this paper, a novel approach to this route cache problem is presented: a control packet called the smart packet traverses the network and collects network information. The resulting scheme is termed Smart Packet based Dynamic Source Routing (DSR-SP). Once the information is collected, each node updates its route cache with it. The simulation results show that invalid cache entries decrease considerably with this packet for low and medium density networks at higher mobility.
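A hypothetical illustration of the cache-refresh step: once a smart packet has reported the currently live links, a node can purge every cached source route that uses a link no longer present. All names and data structures here are illustrative, not taken from the paper:

```python
def refresh_cache(cache, live_links):
    """Drop cached source routes that traverse a link absent from the
    link set collected by the smart packet (links are undirected)."""
    def valid(route):
        return all((a, b) in live_links or (b, a) in live_links
                   for a, b in zip(route, route[1:]))
    return {dst: route for dst, route in cache.items() if valid(route)}

# hypothetical cache at node S; the B-D link has since broken
cache = {'D': ['S', 'A', 'B', 'D'], 'E': ['S', 'C', 'E']}
live_links = {('S', 'A'), ('A', 'B'), ('S', 'C'), ('C', 'E')}
fresh = refresh_cache(cache, live_links)  # route to D is purged
```

This is the stale-entry removal half; the smart packet's collected routes would analogously be merged in as new entries.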
{"title":"Route Cache Optimization Mechanism Using Smart Packets for On-demand Routing Procotol in MANET","authors":"N. Ashokraj, C. Arun, K. Murugan","doi":"10.1109/ICIT.2008.35","DOIUrl":"https://doi.org/10.1109/ICIT.2008.35","url":null,"abstract":"Normally, for an on-demand routing protocol in mobile ad hoc networks, the parameters that ought to be optimized include end-to-end delay and routing overhead. These parameters have a direct relationship with broken links and route discoveries, or in other words the staleness of the cache. In order to minimize the need for such a route discovery, the cache has to be constantly checked from time to time, stale cache entries has to be removed, and proper entries be added. In this paper, a novel approach to reduce this route cache problem is addressed. Hence, a control packet called the smart packet is used to traverse the network and collect the network information. This process is termed as Smart Packet based Dynamic Source Routing (DSR-SP). Once the information is collected, each node updates its route cache with the collected information. Based on the simulations results obtained, it is observed that invalid cache entries have considerably decreased with the advent of this packet, for low and medium density networks, at higher mobility.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129351034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advent of VLSI technology, the demand for higher processing power has increased to a large extent. A study of parallel computer interconnection topologies has been made, covering various interconnection networks and emphasizing cube based topologies in particular. This paper proposes a new cube based topology called the folded dualcube, with better features, such as reduced diameter, lower cost, and improved broadcast time, than its parent topologies, the folded hypercube and the dualcube. Two routing algorithms, one-to-one and one-to-all broadcast, are proposed for the new network.
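The diameter benefit of folding can be checked directly on one of the two parent topologies. This is a sketch of the standard folded hypercube construction (an n-cube plus a complement link at every node, which brings the diameter down from n to ⌈n/2⌉), not of the paper's folded dualcube itself:

```python
from collections import deque

def folded_hypercube_diameter(n):
    """BFS eccentricity of node 0 in the folded hypercube FQ_n; the
    graph is vertex-transitive, so this equals the diameter."""
    size = 1 << n

    def neighbors(v):
        yield v ^ (size - 1)        # the extra complement (fold) link
        for b in range(n):
            yield v ^ (1 << b)      # ordinary hypercube links

    dist = {0: 0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in neighbors(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())
```

For n = 4 this returns 2, versus diameter 4 for the plain 4-cube.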
{"title":"Folded Dualcube: A New Interconnection Topology for Parallel Systems","authors":"Nibdita Adhikari, C. Tripathy","doi":"10.1109/ICIT.2008.49","DOIUrl":"https://doi.org/10.1109/ICIT.2008.49","url":null,"abstract":"With the advent of VLSI technology, the demand for higher processing has increased to a large extent. Study of parallel computer interconnection topology has been made along with the various interconnection networks emphasizing the cube based topologies in particular. This paper proposes a new cube based topology called the Folded dualcube with better features such as reduced diameter, cost and improved broadcast time in comparison to its parent topologies: viz: Folded hypercube and Dualcube. Two separate routing algorithms one-to-one and one-to-all broadcast have been proposed for the new network.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132823482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A software based hybrid test vector compression technique for testing system-on-chip integrated circuits using an embedded processor core was previously discussed by the authors. In this approach, a software program is loaded into the on-chip processor memory along with the compressed test data sets. To minimize on-chip storage as well as testing time, the test data volume is first reduced by compaction in a hybrid manner before being downloaded into the processor. The proposed method utilizes a set of adaptive coding techniques to realize lossless compression. The compaction program need not be loaded into the embedded processor, as only decompression of the test data is required for the automatic test equipment. The scheme requires minimal hardware overhead, and the on-chip embedded processor can be reused for normal operation once testing is complete. Extending this prior work, this paper reports further results based on the use of Lempel-Ziv-Welch coding together with the Burrows-Wheeler transformation, and demonstrates the feasibility of the methodology with simulation results on ISCAS 85 combinational and ISCAS 89 full-scan sequential benchmark circuits.
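A toy version of the Lempel-Ziv-Welch stage over a binary test-vector stream, written by us for illustration (the paper's scheme additionally layers the Burrows-Wheeler transform and other adaptive coders). Lossless round-trip is the property that matters for test data:

```python
def lzw_compress(bits):
    """LZW over a '0'/'1' string: grow a phrase dictionary on the fly
    and emit one code per longest known phrase."""
    table = {'0': 0, '1': 1}
    out, cur = [], ''
    for b in bits:
        if cur + b in table:
            cur += b
        else:
            out.append(table[cur])
            table[cur + b] = len(table)
            cur = b
    if cur:
        out.append(table[cur])
    return out

def lzw_decompress(codes):
    """Rebuild the same dictionary from the code stream alone."""
    table = {0: '0', 1: '1'}
    prev = table[codes[0]]
    out = [prev]
    for c in codes[1:]:
        entry = table[c] if c in table else prev + prev[0]
        out.append(entry)
        table[len(table)] = prev + entry[0]
        prev = entry
    return ''.join(out)
```

On the highly repetitive bit streams typical of compacted test sets, the code stream is shorter than the input even in this unoptimized form.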
{"title":"A Novel Technique for Input Vector Compression in System-on-Chip Testing","authors":"S. Biswas, Sunil R. Das, M. Assaf","doi":"10.1109/ICIT.2008.47","DOIUrl":"https://doi.org/10.1109/ICIT.2008.47","url":null,"abstract":"A software based hybrid test vector compression technique for testing system-on-chip integrated circuits using an embedded processor core was previously discussed by the authors. In this approach, a software program is loaded into the on-chip processor memory along with the compressed test data sets. To minimize on-chip storage besides testing time, the test data volume is first reduced by compaction in a hybrid manner before downloading into the processor. The proposed method utilizes a set of adaptive coding techniques for realizing lossless compression. The compaction program need not be loaded into the embedded processor, as only the decompression of test data is required for the automatic test equipment. The developed scheme necessitates minimal hardware overhead, while the on-chip embedded processor can be reused for normal operation on completion of testing. 
As an extension of this prior work, this paper reports further results on studies of the problem based on the use of Limpel-Ziv-Walsh coding besides Burrows-Wheeler transformation and demonstrates the feasibility of the suggested methodology with simulation results on ISCAS 85 combinational and ISCAS 89 full-scan sequential benchmark circuits.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127906860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data grids provide geographically distributed resources for large-scale data-intensive applications that generate and share large data sets. Replication has been used to reduce access latencies and ensure fault tolerance for such large scale data sharing. In spite of the advantages offered, replication results in consistency problems if applications are allowed to modify the data in an uncontrolled manner. In this paper a synchronous replica consistency protocol with notification and response is proposed. It keeps the replicas coherent in an effective manner, increases performance, and achieves optimal bandwidth utilization for file replication. The protocol is simulated in Java. It has been compared with other consistency protocols, and the experimental results show that the proposed scheme offers better replication time while still maintaining consistency among replicas.
{"title":"Synchronous Replica Consistency Protocol with Notification and Response","authors":"S. Sathya, K. N. Seshu","doi":"10.1109/ICIT.2008.50","DOIUrl":"https://doi.org/10.1109/ICIT.2008.50","url":null,"abstract":"Data grids provide geographically distributed resources for large-scale data-intensive applications that generate and share large data sets. Replication has been used to reduce the access latencies and ensure fault tolerance for such large scale data sharing. In spite of the advantages offered, Replication results in consistency problems if the applications are allowed to modify the data in an uncontrolled manner. In this paper a synchronous replica consistency protocol with notification and response is proposed. It deals with keeping the replicas coherent in an effective manner. It can increase the performance and will achieve optimal bandwidth utilization for file replications. The protocol is simulated in Java. It has been compared with other consistency protocols and the experimental results shows that the proposed scheme offers better performance in terms of replication time still maintaining the consistency among replicas.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125354302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an efficient technique for slicing web applications. First we construct the system dependence graph for a web application, and then perform backward slicing on that graph with respect to a given slicing criterion. We use Java Server Pages for the web application.
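The backward-slicing step reduces to reverse reachability over dependence edges. A minimal sketch with a hypothetical dependence map (the paper's full JSP-aware system dependence graph construction is far richer):

```python
from collections import deque

def backward_slice(deps, criterion):
    """Backward slice: every node the slicing criterion transitively
    depends on.  `deps` maps a node to the nodes it data- or
    control-depends on; the slice is found by reverse reachability."""
    sliced, queue = {criterion}, deque([criterion])
    while queue:
        for pred in deps.get(queue.popleft(), ()):
            if pred not in sliced:
                sliced.add(pred)
                queue.append(pred)
    return sliced

# hypothetical dependence edges for a tiny scriptlet
deps = {'print(total)': ['total=total+x', 'loop'],
        'total=total+x': ['total=0', 'x=read()'],
        'loop': ['n=read()']}
slice_set = backward_slice(deps, 'print(total)')
```

Statements not in `slice_set` cannot affect the criterion and can be discarded by downstream analyses.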
{"title":"Slicing Java Server Pages Application","authors":"M. Sahu, D. Mohapatra","doi":"10.1109/ICIT.2008.15","DOIUrl":"https://doi.org/10.1109/ICIT.2008.15","url":null,"abstract":"We propose an efficient technique for slicing web applications. First we construct the system dependence graph for a web application and then perform backward slicing on that graph corresponding to a given slicing criterion. We use Java server pages for the web application.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132468311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social insects such as ants and bees deposit pheromone (a type of chemical) to communicate between members of their community. Pheromone that causes clumping behavior in a species and brings individuals into closer proximity is called aggregation pheromone. This article presents a new algorithm (called APC) for pattern classification based on the aggregation pheromone property found in the natural behavior of real ants. Each data pattern is considered an ant, and the training patterns (ants) form several groups or colonies, one per class present in the data set. A new (test pattern) ant moves in the direction where the average aggregation pheromone density at its location, formed by each colony of ants, is highest, and hence it eventually joins that colony. Thus each test ant finally joins a particular colony. The proposed algorithm is evaluated on a number of benchmark data sets in terms of classification accuracy, and the results are compared with other state-of-the-art techniques. The experimental results show the potential of the proposed algorithm.
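The decision rule can be sketched under one common assumption, namely that each training ant emits pheromone that decays as a Gaussian of distance. The kernel choice and the spread parameter `delta` are our illustrative assumptions, not necessarily the paper's:

```python
import math

def classify(test, colonies, delta=1.0):
    """APC-style rule: the test ant joins the colony with the highest
    average pheromone density at its own location, where each training
    ant contributes exp(-d^2 / (2*delta^2))."""
    def density(colony):
        return sum(math.exp(-sum((t - a) ** 2 for t, a in zip(test, ant))
                            / (2 * delta ** 2))
                   for ant in colony) / len(colony)
    return max(colonies, key=lambda label: density(colonies[label]))

# hypothetical 2-D training colonies, one per class
colonies = {'A': [(0.0, 0.0), (0.2, 0.1)],
            'B': [(3.0, 3.0), (3.1, 2.9)]}
label = classify((0.1, 0.1), colonies)
```

Under this reading, APC behaves like a kernel density classifier with per-class averaged densities.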
{"title":"Aggregation Pheromone Density Based Classification","authors":"A. Halder, Susmita K. Ghosh, Ashish Ghosh","doi":"10.1109/ICIT.2008.27","DOIUrl":"https://doi.org/10.1109/ICIT.2008.27","url":null,"abstract":"Social insects like ants, bees deposit pheromone (a type of chemical) in order to communicate between the members of their community. Pheromone, that causes clumping behavior in a species and brings individuals into a closer proximity, is called aggregation pheromone. This article presents a new algorithm (called, APC) for pattern classification based on the property of aggregation pheromone found in natural behavior of real ants. Here each data pattern is considered as an ant, and the training patterns (ants) form several groups or colonies depending on the number of classes present in the data set. A new (test pattern) ant will move along the direction where average aggregation pheromone density (at the location of the new ant) formed due to each colony of ants is higher and hence eventually it will join that colony. Thus each individual test ant will finally join a particular colony. The proposed algorithm is evaluated with a number of benchmark data sets in terms of classification accuracy. Results are compared with other state of the art techniques. Experimental results show the potentiality of the proposed algorithm.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121627087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identity-based (ID-based) public key cryptosystems give an efficient alternative for key management compared to certificate based public key settings. A proxy signature is a method by which an entity delegates signing capability to other participants so that they can sign on its behalf within a given context. In this paper, we propose a new ID-based proxy signature which is more efficient than existing schemes. We then extend our study by developing a proxy blind signature and a proxy partial blind signature using the above proxy signing key. We also briefly analyze the security of the new schemes.
{"title":"An Efficient ID Based Proxy Signature, Proxy Blind Signature and Proxy Partial Blind Signature","authors":"B. Majhi, Deepak Kumar Sahu, R. Subudhi","doi":"10.1109/ICIT.2008.38","DOIUrl":"https://doi.org/10.1109/ICIT.2008.38","url":null,"abstract":"Identity-based (ID based) public key cryptosystem gives an efficient alternative for key management as compared to certificate based public key settings. A proxy signature is a method for an entity to delegate signing capabilities to other participants so that they can sign on behalf of the entity with in a given context. In this paper, we have proposed a new ID-based proxy signature which is more efficient than. Then we have extended our study in developing a blind -signature and partial blind signature using the above proxy signing key. We also have analyzed security of our new scheme briefly.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127135128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pairwise sequence alignment forms the basis of numerous other applications in bioinformatics. The quality of an alignment is gauged by statistical significance rather than by alignment score alone. Therefore, accurate estimation of the statistical significance of a pairwise alignment is an important problem in sequence comparison. Recently, it was shown that pairwise statistical significance does better in practice than database statistical significance, and also provides quicker individual pairwise estimates of statistical significance without requiring a time-consuming database search. Under an evolutionary model, a substitution matrix can be derived from a rate matrix and a fixed distance. Although commonly used substitution matrices like BLOSUM62 were not originally derived from a rate matrix under an evolutionary model, the corresponding rate matrices can be back-calculated, and many researchers have derived different rate matrices using different methods and data. In this paper, we show that pairwise statistical significance using rate matrices with a sequence-pair-specific distance performs significantly better than using a fixed distance. Pairwise statistical significance using substitution matrices with sequence-pair-specific distances also outperforms the database statistical significance reported by BLAST.
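Statistical significance of a local alignment score is conventionally computed from an extreme-value (Gumbel) model. The sketch below uses the standard Karlin-Altschul form with illustrative parameter values roughly in the range tabulated for BLOSUM62; the paper's contribution lies in how the underlying substitution matrix (and hence these parameters) is adapted per sequence pair, which this sketch does not do:

```python
import math

def pairwise_pvalue(score, m, n, K=0.041, lam=0.267):
    """Karlin-Altschul estimate P(S >= score) = 1 - exp(-E) with
    E = K * m * n * exp(-lam * score), for sequences of lengths m, n.
    K and lam are illustrative constants, not fitted values."""
    expected_hits = K * m * n * math.exp(-lam * score)
    return 1.0 - math.exp(-expected_hits)

# a higher score for the same pair of 300-residue sequences
# is strictly more significant (smaller p-value)
p_low = pairwise_pvalue(50, 300, 300)
p_high = pairwise_pvalue(100, 300, 300)
```

Database statistical significance instead folds the whole database size into the search space, which is the comparison point in the last sentence above.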
{"title":"Pairwise Statistical Significance of Local Sequence Alignment Using Substitution Matrices with Sequence-Pair-Specific Distance","authors":"Ankit Agrawal, Xiaoqiu Huang","doi":"10.1109/ICIT.2008.63","DOIUrl":"https://doi.org/10.1109/ICIT.2008.63","url":null,"abstract":"Pairwise sequence alignment forms the basis of numerous other applications in bioinformatics. The quality of an alignment is gauged by statistical significance rather than by alignment score alone. Therefore, accurate estimation of statistical significance of a pairwise alignment is an important problem in sequence comparison. Recently, it was shown that pairwise statistical significance does better in practice than database statistical significance, and also provides quicker individual pairwise estimates of statistical significance without having to perform time-consuming database search. Under an evolutionary model, a substitution matrix can be derived using a rate matrix and a fixed distance. Although the commonly used substitution matrices like BLOSUM62, etc. were not originally derived from a rate matrix under an evolutionary model, the corresponding rate matrices can be back calculated. Many researchers have derived different rate matrices using different methods and data. In this paper, we show that pairwise statistical significance using rate matrices with sequence-pair-specific distance performs significantly better compared to using a fixed distance. 
Pairwise statistical significance using sequence-pair-specific distanced substitution matrices also outperforms database statistical significance reported by BLAST.","PeriodicalId":184201,"journal":{"name":"2008 International Conference on Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129821679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}