Common Object Request Broker Architecture (CORBA) is an enabling technology that allows heterogeneous applications to work together over networks. However, CORBA component implementations suffer from high interaction complexity inside the component, which seriously degrades the component's independence and thus its reusability. In this paper, we present an adapter for implementing reusable CORBA components in COBOL. The adapter isolates, encapsulates, and manages a component's interactions outside the component. The use of adapters increases the reusability of components and also simplifies their integration into an application. In addition, for organizations using an open-source implementation of CORBA, the work discussed in this paper helps them improve their CORBA middleware to support COBOL interoperability and reuse.
{"title":"An adapter to promote reusability of CORBA components in COBOL","authors":"C. Chiang","doi":"10.1109/ITCC.2005.64","DOIUrl":"https://doi.org/10.1109/ITCC.2005.64","url":null,"abstract":"Common Object Request Broker Architecture (CORBA) is an enabling technology that supports heterogeneous applications to work together over networks. However, the implementation of CORBA components suffers from high interaction complexities in the component that seriously degrades the component independence for reuse. In this paper, we are presenting an adapter for implementing CORBA components in COBOL for reuse. The adapter is used to isolate, encapsulate, and manage a component's interactions outside the component. The use of adapters increases the reusability of components and also simplifies the integration of the components to an application. In addition, for organizations using an open-source implementation of CORBA, the work discussed in this paper helps them improve their CORBA middleware implementations to support COBOL interoperability and reuse.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128585450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
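The adapter idea described above, keeping all middleware-facing interactions outside the component so the business logic stays independently reusable, can be sketched as follows. This is a minimal illustration of the isolation principle, not the paper's COBOL/CORBA implementation; the `PayrollComponent` and `ComponentAdapter` names and the `registry` parameter are hypothetical.

```python
class PayrollComponent:
    """Pure business logic with no knowledge of the middleware."""

    def gross_pay(self, hours, rate):
        return hours * rate


class ComponentAdapter:
    """Isolates, encapsulates, and manages the component's external
    interactions, so the component itself stays independent and reusable."""

    def __init__(self, component, registry):
        self.component = component
        self.registry = registry  # stands in for ORB naming/lookup concerns

    def invoke(self, operation, *args):
        # All dispatch and interaction management lives here,
        # never inside the component being reused.
        method = getattr(self.component, operation)
        return method(*args)


adapter = ComponentAdapter(PayrollComponent(), registry={})
print(adapter.invoke("gross_pay", 40, 12.5))  # → 500.0
```

Swapping the middleware then means changing only the adapter, while the component is reused unchanged.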
Resource brokers on the grid consult a number of distributed information services to select the best data source and/or computational resource for a user's requirements. This consultation task increases the design complexity of resource brokers. This work is the first attempt to unify the distributed grid information services in one framework. The grid query service (GQS) is composed of information services layered on OGSA-DAI grid services, which are generic data access and integration services. Grid resource brokers can therefore consult a single service, the GQS, to obtain indexed information about grid resources.
{"title":"A unified grid query service for grid resource brokers","authors":"Leena Al-Hussaini, A. Wendelborn, P. Coddington","doi":"10.1109/ITCC.2005.52","DOIUrl":"https://doi.org/10.1109/ITCC.2005.52","url":null,"abstract":"Resource brokers on the grid consult a number of distributed information services to select the best data source and/or computational resource based on user requirements. The consultation task increases the design complexity of Resource Brokers. This work is the first attempt to unify the distributed grid information services in one framework. The grid query service (GQS) is composed of information services layered on OGSA-DAI grid services, which are generic data access and integration services. Grid resource brokers can consult just one service, the GQS service, to obtain indexed information about grid resources.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128661855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
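The unification described above is essentially a facade: the broker asks one service, which fans the query out to the underlying information services and returns an indexed view. A minimal sketch, with hypothetical names (`GridQueryService`, `cpu_info`); the real GQS is layered on OGSA-DAI services rather than plain callables:

```python
class GridQueryService:
    """Single consultation point that aggregates several distributed
    information services behind one query interface."""

    def __init__(self, services):
        # name -> callable returning a list of resource records
        self.services = services

    def query(self, predicate):
        # Consult every underlying service and return one indexed view,
        # so the broker never talks to the services individually.
        results = {}
        for name, service in self.services.items():
            results[name] = [r for r in service() if predicate(r)]
        return results


cpu_info = lambda: [{"host": "nodeA", "cpus": 8}, {"host": "nodeB", "cpus": 2}]
gqs = GridQueryService({"cpu": cpu_info})
print(gqs.query(lambda r: r["cpus"] >= 4))  # → {'cpu': [{'host': 'nodeA', 'cpus': 8}]}
```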
These days, billions of Web pages are created with HTML or other markup languages. Compared to traditional text-based documents, they share few uniform structures and exhibit a wide variety of authoring styles. However, users usually focus on the particular section of a page that presents the information most relevant to their interest. Web document classification therefore needs to group and filter pages based on their contents and relevant information. Much research on Web mining reports on mining Web structure and extracting information from Web contents. However, that work has focused on detecting tables that convey specific data, not tables used as a mechanism for structuring the layout of Web pages. Case models of tables can be constructed based on structure abstraction. Furthermore, Ripple Down Rules (RDR) are used to implement knowledge organization and construction, because they support simple rule maintenance based on cases and local validation.
{"title":"Elimination of redundant information for Web data mining","authors":"S. Taib, Soon-ja Yeom, B. Kang","doi":"10.1109/ITCC.2005.143","DOIUrl":"https://doi.org/10.1109/ITCC.2005.143","url":null,"abstract":"These days, billions of Web pages are created with HTML or other markup languages. They only have a few uniform structures and contain various authoring styles compared to traditional text-based documents. However, users usually focus on a particular section of the page that presents the most relevant information to their interest. Therefore, Web documents classification needs to group and filter the pages based on their contents and relevant information. Many researches on Web mining report on mining Web structure and extracting information from Web contents. However, they have focused on detecting tables that convey specific data, not the tables that are used as a mechanism for structuring the layout of Web pages. Case modeling of tables can be constructed based on structure abstraction. Furthermore, Ripple Down Rules (RDR) is used to implement knowledge organization and construction, because it supports a simple rule maintenance based on case and local validation.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128734673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
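Ripple Down Rules, mentioned above as the knowledge-maintenance mechanism, organize rules in a binary tree: a satisfied rule tentatively fires and control descends its exception branch, an unsatisfied rule passes control to its alternative branch, and the deepest satisfied conclusion wins. A minimal single-classification sketch; the toy rules about `<th>` cells are an invented example, not the paper's actual knowledge base:

```python
class RDRNode:
    """One rule in a Ripple Down Rules tree."""

    def __init__(self, cond, conclusion, if_true=None, if_false=None):
        self.cond, self.conclusion = cond, conclusion
        self.if_true, self.if_false = if_true, if_false  # exception / if-not branches


def classify(node, case, default=None):
    """Single-classification RDR inference: the deepest satisfied rule's
    conclusion wins; an unrefuted exception overrides its parent."""
    last = default
    while node is not None:
        if node.cond(case):
            last = node.conclusion
            node = node.if_true   # look for a refining exception
        else:
            node = node.if_false  # try the alternative rule
    return last


# Toy knowledge base: tables with <th> cells are "data" tables,
# except very small ones, which are treated as layout tables.
root = RDRNode(lambda t: t["has_th"], "data",
               if_true=RDRNode(lambda t: t["cells"] < 4, "layout"))

print(classify(root, {"has_th": True, "cells": 12}, default="layout"))  # → data
```

Maintenance stays local: a misclassified case is fixed by attaching one new exception node, without revalidating the rest of the tree.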
S. Argamon, Nazli Goharian, D. Grossman, O. Frieder, N. Raju
The authors describe progress in extending the undergraduate computer science (CS) curriculum to include a deep understanding of techniques for information and knowledge management systems (IKMS). In a novel five-course sequence, students build and work with techniques for data mining, information retrieval, and text analysis, and develop a large-scale IKMS project. Courses are taught in a hands-on lab setting where students use tools they have built, performing experiments that could extend the field; undergraduates thus gain firsthand experience of performing CS research using scientific methods. In addition, a rigorous set of evaluation criteria developed in the Psychology Institute was used to evaluate how well students learn with these approaches. Ultimately, the authors believe that this specialization warrants inclusion as an option in the standard undergraduate CS curriculum.
{"title":"A specialization in information and knowledge management systems for the undergraduate computer science curriculum","authors":"S. Argamon, Nazli Goharian, D. Grossman, O. Frieder, N. Raju","doi":"10.1109/ITCC.2005.39","DOIUrl":"https://doi.org/10.1109/ITCC.2005.39","url":null,"abstract":"The authors described the progress extending the undergraduate computer science (CS) curriculum to include a deep understanding of techniques for information and knowledge management systems (IKMS). In a novel five-course sequence, students build and work with techniques for data mining, information retrieval, and text analysis, and develop a large-scale IKMS project. The authors taught in a hands-on lab setting where students use tools they have built, performing experiments that could extend the field. Hence undergraduates have firsthand knowledge of performing CS research using scientific methods. Second, a rigorous set of evaluation criteria developed in the Psychology Institute was utilized to evaluate how well students learn using our approaches. Ultimately, it is believed that this specialization warrants inclusion as an option in the standard undergraduate CS curriculum.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129596615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a service-based grid computing model which emphasizes that a grid is a special kind of computing system. By comparing this model with the traditional computing system model, we analyze their similarities and differences, which is important for related grid research. The proposed model is useful for future work in grid computing: it not only provides guidance for developing a grid system, but also provides a framework for theoretical grid research.
{"title":"Analyze grid from the perspective of a computing system","authors":"Guoshun Hao, Shilong Ma, Haoming Guo, Xiaolong Wu, Mei Yang, Yingtao Jiang","doi":"10.1109/ITCC.2005.85","DOIUrl":"https://doi.org/10.1109/ITCC.2005.85","url":null,"abstract":"This paper presents a service-based grid computing model which emphasizes that a grid is a special computing system. By comparing this model with the traditional computing system model, we analyze their similarities and differences, which is important for related grid study. The proposed model is very useful for future study in grid computing: it not only provides instructions for developing a grid system, but also provides a framework for the theoretical grid research.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127192510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authenticated group key agreement is important in many modern collaborative and distributed applications. Recently, identity-based authenticated group key agreement has attracted increasing research interest because of the simplicity of its public key management. These protocols typically provide authentication by signatures, which makes them less efficient: verifying the signatures creates additional computational overhead. In this paper, we propose an efficient authenticated group key agreement protocol by introducing a modified identity-based public key infrastructure with new system setup and key extraction algorithms. The security of our protocol rests on the discrete logarithm assumption. Moreover, our protocol requires only one round and is more efficient than all previously known ones, since it provides authentication without using signatures.
{"title":"ID-based one round authenticated group key agreement protocol with bilinear pairings","authors":"Yijuan Shi, Gongliang Chen, Jianhua Li","doi":"10.1109/ITCC.2005.169","DOIUrl":"https://doi.org/10.1109/ITCC.2005.169","url":null,"abstract":"Authenticated group key agreement is important in many modern collaborative and distributed applications. Recently, identity-based authenticated group key agreement has been increasingly researched because of the simplicity of a public key management. Basically, these protocols provide authentication by signatures. Hence they are less efficient for the verification of signatures creates additional computational overhead. In the paper, we propose an efficient authenticated group key agreement protocol by introducing a modified identity-based public key infrastructure with new system setup and key extraction algorithms. The security of our protocol is assured by the discrete logarithm assumption. Moreover, our protocol requires only one round and is more efficient than all previously known ones since it provides authentication without using signatures.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130534695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
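The security claim above rests on the discrete logarithm assumption. As a minimal illustration of that underlying primitive only, a toy two-party Diffie-Hellman exchange is shown below; the paper's actual protocol is a one-round, pairing-based group key agreement, which is not reproduced here, and the group parameters used are far too small for real deployments:

```python
import secrets

# Toy group: a Mersenne prime modulus and a small generator.
# Real protocols use standardized, much larger groups (or pairing groups).
p = (1 << 127) - 1
g = 3

# Each party picks a private exponent; recovering it from the public
# value g^x mod p is the discrete logarithm problem.
a = secrets.randbelow(p - 2) + 2
b = secrets.randbelow(p - 2) + 2
A, B = pow(g, a, p), pow(g, b, p)  # exchanged public values

# Both parties derive the same shared secret without revealing a or b.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
assert shared_a == shared_b
```

Group key agreement generalizes this so that n parties derive one common key; the protocol above achieves that in a single round while folding authentication into the key material instead of attaching signatures.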
This research describes a Linux/Unix-based middleware platform and GUI operating interface created to help developers overcome the difficulties associated with editing, upgrading, updating, and integrating existing software. As the systems used in organizations grow in size and complexity, developing and maintaining existing or new systems can be very costly and time-consuming; indeed, a system that is new today may itself become a burden on future development. We present methods for creating a route for data transmission between multiple platforms, and we use fewer than three APIs to connect different application software and communication protocols: for example, TCP/IP communicating with X.25, or TCP/IP communicating with SNA. The solution benefits application designers by reducing the effort and time needed to edit or develop a system. The major objective of this research is to provide a breakthrough technique for integrating systems that differ in operating system, application software, protocol, and programming language.
{"title":"Design of middleware platform to enhance abilities of application systems integration","authors":"Vincent Chang","doi":"10.1109/ITCC.2005.123","DOIUrl":"https://doi.org/10.1109/ITCC.2005.123","url":null,"abstract":"This research describes a Linux/Unix base middleware platform and GUI operating interface that has been created to help developer overcome the difficulties associated with existing software editing, upgrading, updating and integration. With growing in size and complexity of systems using in current organizations, the developing and maintaining existing or new systems can be really cost-and-time consuming. Even, the present new system might be a burden for the future new system development. We present methods for creating a route for data transmission between multiple platforms. Also, we use less than three API to connect with different application software and communication protocol. For example, TCP/IP communicates with X.25, TCP/IP communicates with SNA. The solution is benefit to an application designer to reduce the effort and time when editing or developing system. The major objective of this research is making breakthrough of providing technique for integrating different system while they consist of different OS, application software, protocol and programming language.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130768084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
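The routing idea above, one registry through which messages from one protocol stack are translated for another so that applications need only a single call, can be sketched as follows. This is an illustrative design sketch under assumed names (`MiddlewareRouter`, the `"tcpip"`/`"x25"` keys), not the platform's actual API:

```python
class MiddlewareRouter:
    """Registry of protocol bridges: a single place where one stack's
    messages are translated for another, so an application needs only
    one send() call regardless of the protocol pair involved."""

    def __init__(self):
        self.bridges = {}

    def register(self, src, dst, translate):
        # translate: callable mapping a src-format message to dst format
        self.bridges[(src, dst)] = translate

    def send(self, src, dst, message):
        translate = self.bridges.get((src, dst))
        if translate is None:
            raise LookupError(f"no bridge from {src} to {dst}")
        return translate(message)


router = MiddlewareRouter()
router.register("tcpip", "x25", lambda m: {"x25_payload": m})
print(router.send("tcpip", "x25", "hello"))  # → {'x25_payload': 'hello'}
```

Adding support for a new protocol pair is then one `register()` call rather than a change to every application.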
In the real world, thousands of time series coexist with other data, and large volumes of data are collected in time series form every day. A time series is a collection of observations recorded or measured over time, generally sequentially, on a regular or irregular basis. Time series arise in financial, economic, and scientific applications; typical examples are recordings of stock prices, bank transactions, the consumer price index, and electricity and telecommunication data. In theory, such data is processed, analyzed, disseminated, and presented; in practice, however, many institutions face difficult issues in organizing such a vast amount of data, so the need for data management tools has become more and more important. This paper addresses this issue by proposing a framework for Time Series Data Management (TSDM). The central abstractions of the proposed domain-specific framework are the notions of Business Sections, Groups of Time Series, and the Time Series itself. The framework integrates a minimum specification of the structural and functional aspects of time series data management.
{"title":"A time series data management framework","authors":"Abel Matus-Castillejos, R. Jentzsch","doi":"10.1109/ITCC.2005.45","DOIUrl":"https://doi.org/10.1109/ITCC.2005.45","url":null,"abstract":"In the real world there are thousands of time series data that coexists with other data. Every day tons of data is collected in the form of time series. Time series is a collection of observations that is recorded or measured over time on a regular or irregular basis generally sequentially. Time series arise in financial, economic, and scientific applications. Typical examples are the recording of different values of stock prices, bank transactions, consumer price index, electricity and telecommunication data, etc. In theory, such data is processed, analyzed, disseminated, and presented. However, many institutions are facing some difficult issues in organizing such a vast amount of data. Therefore, the need for data management tools has become more and more important. This paper addresses this issue by proposing a framework for Time Series Data Management (TSDM). The central abstraction for the proposed domain specific framework is the notion of Business Sections, Group of Time Series, and Time Series itself. The framework integrates minimum specification regarding structural and functional aspects for time series data management.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130907978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
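The three-level abstraction named above (Business Section containing Groups of Time Series containing individual Time Series) maps naturally onto nested data types. A minimal structural sketch; the class and field names are assumptions for illustration, not the framework's actual specification:

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class TimeSeries:
    """A named sequence of (timestamp, value) observations,
    regular or irregular."""
    name: str
    observations: List[Tuple[str, float]] = field(default_factory=list)

    def add(self, timestamp, value):
        self.observations.append((timestamp, value))


@dataclass
class TimeSeriesGroup:
    """A group of related time series, e.g. all price indices."""
    name: str
    series: List[TimeSeries] = field(default_factory=list)


@dataclass
class BusinessSection:
    """Top-level abstraction: a business domain owning several groups."""
    name: str
    groups: List[TimeSeriesGroup] = field(default_factory=list)


cpi = TimeSeries("consumer_price_index")
cpi.add("2005-01", 190.7)
economy = BusinessSection("economy", [TimeSeriesGroup("prices", [cpi])])
print(economy.groups[0].series[0].observations)  # → [('2005-01', 190.7)]
```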
An efficient Markov chain correlation based clustering method (MCC) is proposed for clustering gene expression data. The gene expression data is first normalized, and Markov chains (MC) are constructed from the dynamics of the gene expressions, so that the behavior of the genes at each step of the experiment can be taken into account. Based on the correlation of one-step Markov chain transition probabilities, an agglomerative method is employed to group series that behave similarly at each point. The proposed MCC clustering method has been applied to four gene expression datasets to obtain a number of clusters. The results show that the MCC method outperforms the commonly used K-means method and produces clusters that are more meaningful in terms of the similarity of the grouped genes. Another advantage of the proposed method over existing clustering methods is that knowledge of the number of groups is not required.
{"title":"Markov chain correlation based clustering of gene expression data","authors":"Youping Deng, Venkatachalam Chokalingam, Chaoyang Zhang","doi":"10.1109/ITCC.2005.189","DOIUrl":"https://doi.org/10.1109/ITCC.2005.189","url":null,"abstract":"An efficient Markov chain correlation based clustering method (MCC) has been proposed for clustering gene expression data. The gene expression data is first normalized and Markov chains (MC) are constructed from the dynamics of the gene expressions, in which the behavior of the genes at each step of the experiment can be taken into account. Based on the correlation of one-step Markov chain transition probabilities, an agglomerative method is employed to group the series that have similar behavior at each point. The proposed MCC clustering method has been applied to four gene expression datasets to obtain a number of clusters. The results show that the MCC method outperforms the commonly used K-means method and produces clusters that are more meaningful in terms of the similarity of the grouped genes. Another advantage of the proposed method over the existing clustering methods is that the knowledge of the group number is not required.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130265163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
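The MCC pipeline described above (normalize, estimate one-step transition probabilities, correlate the transition profiles, group agglomeratively without fixing the number of clusters) can be sketched as follows. This is a simplified reconstruction under assumptions: two discretization states, Pearson correlation, and a greedy threshold-based grouping that may differ from the paper's exact agglomerative scheme.

```python
def transitions(series, bins=2):
    """Discretize a series into `bins` states and return the flattened
    one-step Markov transition probability matrix."""
    lo, hi = min(series), max(series)
    states = [min(int((v - lo) / (hi - lo + 1e-9) * bins), bins - 1)
              for v in series]
    counts = [[0.0] * bins for _ in range(bins)]
    for s, t in zip(states, states[1:]):
        counts[s][t] += 1
    return [c / max(sum(row), 1) for row in counts for c in row]


def pearson(x, y):
    """Pearson correlation of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0


def mcc_cluster(profiles, threshold=0.9):
    """Greedy agglomerative grouping: a series joins the first cluster
    whose seed profile it correlates with above `threshold`; otherwise
    it seeds a new cluster, so the cluster count emerges from the data."""
    clusters = []
    for name, prof in profiles.items():
        for seed_prof, members in clusters:
            if pearson(prof, seed_prof) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((prof, [name]))
    return [members for _, members in clusters]


# g1 and g2 oscillate in step; g3 rises monotonically.
genes = {"g1": [0.1, 0.9, 0.2, 0.8],
         "g2": [0.0, 1.0, 0.1, 0.9],
         "g3": [0.1, 0.2, 0.3, 0.4]}
profiles = {g: transitions(v) for g, v in genes.items()}
print(mcc_cluster(profiles))  # → [['g1', 'g2'], ['g3']]
```

Because grouping stops when no profile clears the correlation threshold, the number of clusters never has to be supplied in advance, unlike K-means.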
The inference problem exists when users can infer sensitive data classified at higher security levels from knowledge of the data at their own level. It greatly compromises database security, especially in multilevel secure (MLS) databases, where both users and data are classified into different security levels. This paper presents an approach to dynamic control of the inference problem after all inference channels have been identified in a multilevel database; a set of key schemes is used for this purpose. We prove that these schemes are more efficient, in both space and time complexity, than previously proposed approaches.
{"title":"A dynamic method for handling the inference problem in multilevel secure databases","authors":"X. Chen, R. Wei","doi":"10.1109/ITCC.2005.7","DOIUrl":"https://doi.org/10.1109/ITCC.2005.7","url":null,"abstract":"The inference problem exits when users can infer sensitive data classified at higher security levels from the knowledge of data at their level by performing inference. Inference problems greatly compromise database security, especially in multilevel secure (MLS) databases where both users and data are classified into different security levels. This paper presents an approach of dynamic control over the inference problem after all inference channels have been identified in a multilevel database. A set of key schemes are used for this purpose. We prove that these schemes are more efficient, in both space and time complexity, than previously proposed approaches.","PeriodicalId":326887,"journal":{"name":"International Conference on Information Technology: Coding and Computing (ITCC'05) - Volume II","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126750257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
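The dynamic control described above assumes the inference channels are already identified and then blocks, at query time, any release that would complete one. A minimal sketch of that runtime check; the `InferenceMonitor` name and the per-user history mechanism are illustrative assumptions, and the paper's actual key schemes are not reproduced here:

```python
class InferenceMonitor:
    """Dynamically denies queries that would complete a known inference
    channel: a set of low-level items that together reveal a fact
    classified at a higher security level."""

    def __init__(self, channels):
        self.channels = [frozenset(c) for c in channels]
        self.history = {}  # user -> set of items already released

    def request(self, user, item):
        seen = self.history.setdefault(user, set())
        for channel in self.channels:
            # Deny if this item is the last missing piece of any channel.
            if item in channel and channel <= seen | {item}:
                return False
        seen.add(item)
        return True


# Knowing both salary_band and department lets a low user infer a salary.
m = InferenceMonitor([{"salary_band", "department"}])
assert m.request("alice", "salary_band")        # first piece alone is safe
assert not m.request("alice", "department")     # would complete the channel
assert m.request("bob", "department")           # different user, no history
```

Tracking history per user is what makes the control dynamic: the same query is allowed or denied depending on what that user has already learned.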