Medical research has recently undergone a paradigm shift from in vivo to in silico experimentation, a development that relies largely on bioinformatics. The key to the future commercialization of sequence data from the Human Genome Project will be to develop informatics technology that transforms these data into information useful for diagnosis and therapy. In this new wave of revolution and commercialization, our traditional system of Ayurveda needs renovation. Therefore, modern Ayurveda and Siddha practitioners need training in Ayurinformatics to keep pace with the modern system of medicine.
{"title":"Ayurinformatics- the application of bioinformatics in the ayurvedic system of medicine","authors":"L. S. Rath","doi":"10.1109/ICIT.2006.32","DOIUrl":"https://doi.org/10.1109/ICIT.2006.32","url":null,"abstract":"Presently medical research has experienced a paradigm shift from in vivo to in silico experimentation, a development that relies mostly upon bioinformatics. The key to future commercialization of sequence data from Human Genome Project will be to develop informatics technology that transforms this data into information that is useful for diagnosis and therapy. In this new wave of revolution and commercialization our traditional system of Ayurveda needs renovation. Therefore, modern Ayurveda and Siddha practitioners need training in Ayurinformatics to keep pace with the modern system of medicine.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122477256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a novel particle swarm optimisation (PSO) algorithm based on the concept of particle age. The effective fitness of a particle depends on both its function value and its age. A newly generated particle has an age of zero, and in every iteration the age of each individual is increased by one. In this paper, a trapezoidal aging function is considered. The model aims to emulate natural swarm systems more faithfully. The effectiveness of the concept is demonstrated through cluster analysis. Results show that the model provides enhanced performance and maintains greater diversity in the swarm, thereby allowing the particles to robustly track a changing environment.
{"title":"Particles with Age for Data Clustering","authors":"Satchidananda Dehuri, Ashish Ghosh, R. Mall","doi":"10.1109/ICIT.2006.69","DOIUrl":"https://doi.org/10.1109/ICIT.2006.69","url":null,"abstract":"This paper proposes a novel particle swarm optimisation (PSO) algorithm using the concept of age of particles. Effective fitness of a particle depends both on its functional value and age. Age of a newly generated particle is taken as zero, and in every iteration age of each individual is increased by one. In this paper, a trapezoidal aging function is considered. The model aims to emulate natural swarm system in a more natural way. The effectiveness of this concept is demonstrated by cluster analysis. Results show that the model provides enhanced performance and maintains more diversity in the swarm and thereby allows the particles to be robust to trace the changing environment.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123409239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new breed of medical information systems is required to address unresolved and newly emerging needs in the healthcare environment more effectively. This paper attempts to define those needs and to describe the design and implementation of a solution. To rapidly develop a prototype solution that is both sustainable with minimal resources and adaptable to future (perhaps unforeseen) needs, certain design principles are proposed. A prototype application was developed using a high-level ("abstract") object-oriented approach that permits ongoing modification and adaptation at minimal cost. Standards for user interaction, data modeling, and data coding and retrieval were used to create a robust Web-based application. Production uses as an autopsy information system, a biospecimen management system, and a clinical trial management system have ensued.
{"title":"Designing the \"Next Generation\" of Medical Information Systems","authors":"Rajesh C. Dash, Christopher B. Hubbard, S. Robboy, A. Proia","doi":"10.1109/ICIT.2006.45","DOIUrl":"https://doi.org/10.1109/ICIT.2006.45","url":null,"abstract":"A new breed of medical information systems is required to more effectively address unresolved and newly emerging needs in the healthcare environment. This paper attempts to define those needs and describe the design and implementation of a solution. In order to rapidly develop a prototype solution that is both sustainable with minimal resources and adaptable to future (perhaps unforeseen) needs, certain design principles are proposed. A prototype application was developed using a high level (\"abstract\") object oriented approach that permits ongoing modification and adaptation at minimal cost. Standards for user interaction, data modeling, and data coding and retrieval were utilized to create a robust Web-based application. Production uses as an autopsy information system, biospecimen management system, and clinical trial management system have ensued.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127381161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering is an important technique in mining time series, and the key is to define the similarity or dissimilarity between data. LB_Keogh, one of the existing time-series distance measures, provides tighter lower bounding than the Euclidean distance and dynamic time warping (DTW); however, it is an asymmetrical distance measure and has limitations in clustering. To solve this problem, we present a symmetrical boundary distance measure called LB_HUST and prove that it provides a tighter lower bound than LB_Keogh. We apply LB_HUST to cluster time series, updating the boundary of a cluster whenever a new time series is added to it. Experiments show that the method exceeds approaches based on the Euclidean distance and DTW in terms of accuracy.
{"title":"LB HUST: A Symmetrical Boundary Distance for Clustering Time Series","authors":"Li Junkui, Wang Yuanzhen, Li Xinping","doi":"10.1109/ICIT.2006.63","DOIUrl":"https://doi.org/10.1109/ICIT.2006.63","url":null,"abstract":"Clustering is an important technology in mining time series, and the key is to define the similarity or dissimilarity between data. One of existing time series distance measures LB_Keogh, is tighter lower bounding than Euclidean and dynamic time warping (DTW), however, it is an asymmetrical distance measure, and has its limitation in clustering.To solve the problem, we present a symmetrical boundary distance measure called LB_HUST, and prove that it is tighter lower bounding than LB_Keogh. We apply LB_HUST to cluster time series, and update the boundary of the cluster when a new time series is added into the cluster. The experiments show that the method exceeds the approaches based on Euclidean and DTW in terms of accuracy.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126913928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper first presents two sequential algorithms, following a new approach, for finding the largest and the second largest elements in a list of n elements, assuming n is a multiple of 3. Another contribution of this paper is an EREW PRAM (exclusive-read, exclusive-write parallel random access machine) version of this sequential approach, which runs in O(log n) time using only n/3 processors. The problem is useful for ranking the winner and the runner-up of any event among many individuals.
{"title":"An efficient parallel algorithm for finding the largest and the second largest elements from a list of elements","authors":"B. K. Sarkar, Shahid Jamal, Bhagirath Kumar","doi":"10.1109/ICIT.2006.27","DOIUrl":"https://doi.org/10.1109/ICIT.2006.27","url":null,"abstract":"This paper first presents two sequential algorithms following a new approach for finding the largest and the second largest elements from a list of n elements assuming n is multiple of 3. Another contribution of this paper is an EREW P-RAM(exclusive read and exclusive write-parallel random access machine) version of this sequential approach, runs in time O(logn) using only n/3 processors. This problem is useful to rank the winner and the runner in any event from many individuals.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114729661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Naive Bayes classifier has gained wide popularity as a probability-based classification method despite its assumption that attributes are conditionally independent given the class label. This paper studies discretization techniques for improving the classification accuracy of Naive Bayes on medical datasets. Our experimental results suggest that, on average, the Naive Bayes classifier with minimum description length (MDL) discretization is the best performer compared with popular variants of Naive Bayes as well as some popular non-Naive-Bayes statistical classifiers.
{"title":"A comparative analysis of discretization methods for Medical Datamining with Naive Bayesian classifier","authors":"Ranjit Abraham, J. B. Simha, S. S. Iyengar","doi":"10.1109/ICIT.2006.5","DOIUrl":"https://doi.org/10.1109/ICIT.2006.5","url":null,"abstract":"Naive Bayes classifier has gained wide popularity as a probability-based classification method despite its assumption that attributes are conditionally mutually independent given the class label. This paper makes a study into discretization techniques to improve the classification accuracy of Naive Bayes with respect to medical datasets. Our experimental results suggest that on an average, with minimum description length (MDL) discretization the Naive Bayes classifier seems to be the best performer compared to popular variants of Naive Bayes as well as some popular non-Naive Bayes statistical classifiers.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123859327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In XML filtering systems, if the document structure is not known, a user may submit structurally invalid queries. The filtering system then wastes time evaluating these queries before it eventually finds that they are structurally invalid. In this paper we propose a technique to guide the user in formulating queries that are more likely to succeed.
{"title":"Cumulative Path Summary for Structurally Dynamic XML Documents","authors":"R. Gururaj, P. S. Kumar","doi":"10.1109/ICIT.2006.41","DOIUrl":"https://doi.org/10.1109/ICIT.2006.41","url":null,"abstract":"In XML filtering systems, if the structure is not known, user may input structurally invalid queries. The filtering system wastes its precious time in evaluating these invalid queries before it eventually finds that the queries are structurally invalid. In this paper we propose a technique to guide the user in formulating queries, which are more likely to succeed.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116647175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent advances in sensor and instrumentation technology have led to large amounts of time-series data being recorded in day-to-day life. Over the past decade, knowledge and data mining research has taken up the task of mining the hidden patterns in these huge collections of time-series data. In this paper we propose methodologies to extract hidden knowledge from a time series through an unsupervised approach using the novel WaveSim transform. This recently introduced transform is a new perspective on the wavelet transform, defined with pattern analysis and recognition in mind. Time-series data mining is broadly classified into whole-series mining and subsequence mining. We propose a hierarchical, tree-based approach for subsequence mining in a time series using a modified WaveSim transform called the Adaptive WaveSim transform. The technique is illustrated through a set of experimental results that are expected to open up a wide arena for future work.
{"title":"WaveSim and Adaptive WaveSim Transform for Subsequence Time-Series Clustering","authors":"R. P. Kumar, P. Nagabhushan, A. Chouakria","doi":"10.1109/ICIT.2006.93","DOIUrl":"https://doi.org/10.1109/ICIT.2006.93","url":null,"abstract":"Recent days advancement in sensor and instrumentation technology has seen a large amount of time series data being recorded in our day-to-day life. Knowledge and data mining research has taken up the responsibility of mining the hidden patterns in these huge collection of time series data during the past decade. In this paper we propose methodologies to extract hidden knowledge in a time series data through an unsupervised approach by using the novel WaveSim transform. This recently introduced transform is a novel perspective of wavelet transform and it is defined by keeping pattern analysis and recognition in mind. Time series data mining has been classified broadly into whole series mining and subsequence series mining. We propose a hierarchical tree based approach for subsequence mining in a time series using a modified WaveSim transform called Adaptive WaveSim transform. The technique has been illustrated through a set of experimentation results which is expected to open up a wide arena for future work.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122903802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Results of association studies using individual single nucleotide polymorphisms (SNPs) or SNP haplotypes have been inconsistent. Possible reasons include poor experimental design, generalization of results from a single population, or inappropriate choice of markers. Here we explore the possibility that the sequence context of a SNP may be responsible for its poor association with the phenotype. An analysis of the Human_MitBASE data helped in predicting associations between SNP haplotypes and disease phenotypes. A novel computational tool, E-MIDAS, was developed to automate this analysis. Based on our results, we propose omitting from association studies those SNPs in CpG dinucleotides that have a mutation-predisposing flank and those present at sites of recurrent mutation.
{"title":"Evaluating the Association of Mitochondrial SNP Haplotypes with Disease Phenotypes using a Novel in silico Tool E-MIDAS","authors":"Anshu Bhardwaj, S. Tiwari","doi":"10.1109/ICIT.2006.53","DOIUrl":"https://doi.org/10.1109/ICIT.2006.53","url":null,"abstract":"Results of association studies using individual single nucleotide polymorphisms (SNPs) or SNP-haplotypes have been inconsistent. Possible reasons could be attributed to poor experimental design, generalization of results from a single population or inappropriate choice of markers. Here we explore the possibility that the sequence context of a SNP may be responsible for its poor association with the phenotype. An analysis of the Human_MitBASE data helped in the prediction of association between SNP haplotypes with disease phenotypes. A novel computational tool E-MIDAS was developed to automate this analysis. Based on our results, we propose omission of SNPs in CpG dinucleotides which have a mutation predisposing flank and those present at sites of recurrent mutation, from association studies.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127046477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grid computing has emerged as a core computing technology in many distributed applications initiated by different organizations all over the world. It aims to enable large-scale resource sharing. Security is a large and challenging issue, as a grid service may impact a number of other services and resources distributed over wide-area networks. The dynamic and multi-institutional nature of grid applications introduces challenging security issues that require new technical approaches. This paper proposes, at an abstract level, a framework for handling security issues in grids. The framework is capable of addressing both generic and application-specific security issues of grid services.
{"title":"Design of a Framework for Handling Security Issues in Grids","authors":"Sarbjeet Singh, S. Bawa","doi":"10.1109/ICIT.2006.43","DOIUrl":"https://doi.org/10.1109/ICIT.2006.43","url":null,"abstract":"Grid Computing has emerged as core computing technology in many distributed applications initiated by different organizations all over the world. It aims to enable large scale resource sharing. Security is a big and challenging issue as a grid service may impact number of other services/resources distributed over wide area networks. The dynamic and multi- institutional nature of grid applications introduces challenging security issues that require new technical approaches. This paper proposes, at an abstract level, a framework for handling security issues in grids. The framework is capable of addressing both, generic as well as application specific security issues of grid services.","PeriodicalId":161120,"journal":{"name":"9th International Conference on Information Technology (ICIT'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114073861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}