In order for ontologies to be broadly useful to the scientific community, they need to capture the knowledge and expertise of multiple experts and research groups. Consequently, the construction of such ontologies necessarily requires collaboration among individual experts or research groups. Support for such collaboration is largely lacking in existing ontology development environments. We describe some initial steps towards the development of a collaborative ontology development environment. Specifically, we describe an ontology editing tool, COB editor, which exploits the notion of modular ontologies (or ontology packages) to support sharing, reuse, and collaborative editing of partial-order (i.e., DAG-structured) ontologies. COB editor can engage diverse and relatively autonomous communities of biologists in the process of creating the ontologies needed for annotating, integrating, and analyzing diverse sources of 'omics' data.
{"title":"A Tool for Collaborative Construction of Large Biological Ontologies","authors":"Jie Bao, Zhilian Hu, Doina Caragea, J. Reecy, Vasant G Honavar","doi":"10.1109/DEXA.2006.20","DOIUrl":"https://doi.org/10.1109/DEXA.2006.20","url":null,"abstract":"In order for ontologies to be broadly useful to the scientific community, they need to capture knowledge and expertise of multiple experts and research groups. Consequently, the construction of such ontologies necessarily requires collaboration among individual experts or research groups. Support for such collaboration is largely lacking in existing ontology development environments. We describe some initial steps towards the development of a collaborative ontology development environment. Specifically, we describe an ontology editing tool COB editor which exploits the notion of modular ontologies (or ontology packages) to support sharing, reuse, and collaborative editing of partial order (i.e., DAG-structured) ontologies. COB editor can engage diverse and relatively autonomous communities of biologists in the process of creating the ontologies needed for annotating, integrating, and analyzing diverse sources of `omics' data","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132354235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The process of automatically extracting metadata from an experiment's dataset is an important stage in efficiently integrating that dataset with data available in public bioinformatics data sources. Metadata extracted from the experiment's dataset can be stored in databases and used to verify data extracted from other experiments' datasets. Moreover, the biologist can keep track of the dataset so that it can easily be retrieved later. The extracted metadata can be mined to discover useful knowledge, as well as integrated with other information using a domain ontology to reveal hidden relationships. An experiment's dataset may contain several kinds of metadata that can be used to add semantic value to linked data. This paper describes an approach for extracting metadata from an experiment's dataset. The resulting system has been used in a preliminary investigation of aging across species.
{"title":"Extracting Metadata from Biological Experimental Data","authors":"Badr Al-Daihani, W. A. Gray, P. Kille","doi":"10.1109/DEXA.2006.58","DOIUrl":"https://doi.org/10.1109/DEXA.2006.58","url":null,"abstract":"The process of automatically extracting metadata from an experiment's dataset is an important stage in efficiently integrating this dataset with data available in public bioinformatics data sources. Metadata extracted from the experiment's dataset can be stored in databases and used to verify data extracted from other experiments' datasets. Moreover, the biologist can keep track of the dataset so that it can be easily retrieved next time. The extracted metadata can be mined to discover useful knowledge as well as integrated with other information using domain ontology to reveal hidden relationships. The experiment's dataset may contain several kinds of metadata that can be used to add semantic value to linked data. This paper describes an approach for extracting metadata from an experiment's dataset. This system has been used in a preliminary investigation of aging across species","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122054665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data replication can drastically improve data accessibility in mobile ad hoc networks (MANETs). In this paper, we introduce our work on data replication in MANETs, focusing in particular on update management. We explain several research issues arising under both optimistic and pessimistic consistency management policies, and describe some prospects for future directions.
{"title":"Data Replication and Update Management in Mobile Ad Hoc Networks (Invited Paper)","authors":"T. Hara","doi":"10.1109/DEXA.2006.47","DOIUrl":"https://doi.org/10.1109/DEXA.2006.47","url":null,"abstract":"Data replication can drastically improve data accessibility in mobile ad hoc networks (MANETs). In this paper, we introduce our work that addresses data replication in MANETs, particularly focusing on update management. We explain a few research issues based on both optimistic and pessimistic consistency management policies. We also describe a few prospects for future directions","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121078759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe an implementation of an accrual failure detector, which we call the phi failure detector. The particularity of the phi failure detector is that it dynamically adjusts the scale on which the suspicion level is expressed to current network conditions. We ran an experiment on a LAN over a whole day and evaluated the behavior of our phi failure detector. We then discuss the parameters of the failure detector based on our experimental results.
{"title":"Performance Analysis of the varphi Failure Detector with its Tunable Parameters","authors":"Naohiro Hayashibara, M. Takizawa","doi":"10.1109/DEXA.2006.111","DOIUrl":"https://doi.org/10.1109/DEXA.2006.111","url":null,"abstract":"In this paper, we explain an implementation of an accrual failure detector, that we call the phi failure detector. The particularity of the phi failure detector is that it dynamically adjusts to current network conditions the scale on which the suspicion level is expressed. We have done the experiment in a LAN in a whole day and evaluated the behavior of our phi failure detector. Then we discuss on the parameters of the failure detector based on our experimental result","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122913074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In scientific and other domains, knowledge discovery is increasingly supported by service-oriented data mining grids. When access to such services is required anytime and anywhere, integrating mobile devices and wireless networks into grids is useful. However, mobile devices have limited capabilities, and movement causes frequent changes of context, such as location and, thus, network connectivity. In this paper, the integration of mobile devices as ubiquitous knowledge discovery clients is proposed. The major service classes in knowledge discovery workflow management are addressed, namely the monitoring and control of executing services. The feasibility of the approach is demonstrated by means of a .NET-based prototype implementation on PDAs for the knowledge discovery framework GridMiner.
{"title":"Mobility Extensions for Knowledge Discovery Workflows in Data Mining Grids","authors":"K. Hummel, Georg Bohs, P. Brezany, I. Janciak","doi":"10.1109/DEXA.2006.97","DOIUrl":"https://doi.org/10.1109/DEXA.2006.97","url":null,"abstract":"In scientific and other domains, knowledge discovery has started to be widely supported by service oriented data mining grids. When access to such services is required anytime at anyplace, the integration of mobile devices and wireless networks into grids is useful. However, mobile technologies exhibit limited capabilities and movement further cause frequent changes of context, like location and, thus, network connectivity. In this paper, the integration of mobile devices as ubiquitous knowledge discovery clients is proposed. The major service classes in knowledge discovery workflow management are addressed, those are, the monitoring and controlling of executing services. The feasibility of the approach is demonstrated by means of a .NET-based prototypical implementation on PDAs for the knowledge discovery framework GridMiner","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129285665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the prerequisites for realizing the semantic Web vision is matching techniques capable of handling the open, dynamic, and heterogeneous nature of semantic data in a feasible way. Currently this issue is not optimally resolved: the majority of existing approaches to ontology matching are (implicitly) restricted to processing particular classes of ontologies and are thus unable to guarantee a predictable result quality on arbitrary inputs. Drawing on the empirical findings of two case studies in ontology engineering, we argue that one way to cope with this situation is to design a matching strategy that strives to optimize the matching process whilst being aware of the inherent dependencies between algorithms and the types of ontologies they can process successfully. We introduce a matching framework that, given a set of ontologies to be matched described by ontology metadata, takes into account the capabilities of existing matching algorithms (matcher metadata) and suggests appropriate ones by applying a set of rules.
{"title":"A High-Level Architecture of a Metadata-based Ontology Matching Framework","authors":"Malgorzata Mochól, E. Simperl","doi":"10.1109/DEXA.2006.9","DOIUrl":"https://doi.org/10.1109/DEXA.2006.9","url":null,"abstract":"One of the pre-requisites for the realization of the semantic Web vision are matching techniques which are capable of handling the open, dynamic and heterogeneous nature of the semantic data in a feasible way. Currently this issue is not being optimally resolved; the majority of existing approaches to ontology matching are (implicitly) restricted to processing particular classes of ontologies and thus unable to guarantee a predictable result quality on arbitrary inputs. Accounting for the empirical findings of two case studies in ontology engineering, we argue that a possible solution to cope with this situation is to design a matching strategy which strives for an optimization of the matching process whilst being aware of the inherent dependencies between algorithms and the types of ontologies these are able to process successfully. We introduce a matching framework that, given a set of ontologies to be matched described by ontology metadata, takes into account the capabilities of existing matching algorithms (matcher metadata) and suggests, by using a set of rules, appropriate ones","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123767769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an overview of a mechanism for bridging the gaps between semantic Web data and services and existing network-based services that are not semantically annotated or do not meet the requirements of semantic Web-based applications. The semantic Web is a relatively new set of technologies that interoperate well with one another but often require mediation, translation, or wrapping to interoperate with existing network-based services. Seen as an extension of network-based services and the WWW, the semantic Web constitutes an expanding system in which integrating and developing services can require significant effort while still providing seamless service to users. New components in a system must interoperate with the existing components, and their use of protocols and shared data must be structurally and semantically equivalent. The new system must continue to meet the original system requirements as well as provide the new features or facilities. We propose a new model of network services using a knowledge-based approach that defines services and their data in terms of an ontology that can be shared with other components.
{"title":"Bridging the Gap between the SemanticWeb and Existing Network Services","authors":"Nickolas J. G. Falkner, P. Coddington, A. Wendelborn","doi":"10.1109/DEXA.2006.37","DOIUrl":"https://doi.org/10.1109/DEXA.2006.37","url":null,"abstract":"This paper presents an overview of a mechanism for bridging the gaps between the semantic Web data and services, and existing network-based services that are not semantically-annotated or do not meet the requirements of semantic Web-based applications. The semantic Web is a relatively new set of technologies that mutually interoperate well but often requires mediation, translation or wrapping to interoperate with existing network-based services. Seen as an extension of network-based services and the WWW, the semantic Web constitutes an expanding system that can require significant effort to integrate and develop services while still providing seamless service to users. New components in a system must interoperate with the existing components and their use of protocols and shared data must be structurally and semantically equivalent. The new system must continue to meet the original system requirements as well as providing the new features or facilities. We propose a new model of network services using a knowledge-based approach that defines services and their data in terms of an ontology that can be shared with other components","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"294 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121361865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, especially in Japan, the mislabeling of the geographical origin of agricultural products has become a serious problem. We therefore introduced a distributed system that identifies the geographical origin of produce from differences in its trace elements, i.e., elements present in very small quantities. Vegetables grown in farms absorb metals from the soil, and since the composition of trace metal elements differs between geographical locations, it can be used to identify the geographical origin of vegetables. In the proposed system, the trace element composition of vegetables is measured when they are shipped from a farm, and the data is stored in databases located in the farming districts. When a suspect vegetable is found in the food distribution channel, its trace element composition is measured and compared with the accumulated data by calculating correlation coefficients. The system can also be used to verify the geographical origin data recorded by a food traceability system. Because the relevant correlation coefficients are not known in advance, they must be calculated between the suspect vegetable and all of the data accumulated in the databases, which means the system does not scale as the amount of accumulated data grows. We therefore introduced a method that reduces the number of targets for correlation computation using a similarity-preserving hash (SPH), which gives similar outputs for similar inputs. This reduces the calculation time itself; however, the total computation time also includes picking out the target data from the database. We therefore introduce a method that accelerates this retrieval by grouping records according to their SPH values.
{"title":"Efficient Target Selection in Similarity Preserve Hash for Distributed Geographical Origin Identification System of Vegetables","authors":"N. Sato, M. Uehara, K. Shimomura, Hirobumi Yamamoto, K. Kamijo","doi":"10.1109/DEXA.2006.55","DOIUrl":"https://doi.org/10.1109/DEXA.2006.55","url":null,"abstract":"Recent years, especially in Japan, camouflaging geographical origin of agricultural products is a big problem. Therefore, we introduced a distributed system to identify their geographical origin using their differences of trace elements, or very small quantities of elements. Vegetables grown in farms absorb metals form the soil. Since compositions of trace metal elements differ from geographical places, this can be utilized to identify geographical origin of vegetables. In proposing system, trace element compositions of vegetables are measured when they are shipped from a farm, and the data is stored in databases which are located in farming districts. When a doubtful vegetable is found in food distribution channel, its trace element compositions are measured and compared by calculating correlation coefficients to ones accumulated in databases. This system can be used to verify geographical origin data by food traceability system. Because correlation coefficients are not known when they are once calculated, so correlation coefficients between all accumulated data in databases and doubtful vegetable. This means that proposing system is not scalable when the number of accumulated data is increased. Therefore, we introduced a method to reduce the number of target to calculate correlation coefficients using similarity preserve hash (SPH) which gives similar output for similar input. This could reduce time to calculation itself, however, computation time including picking out target data for calculation of correlation coefficients from database. Therefore, we introduce a method to accelerate picking up data form database by grouping value of SPH","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127213254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Common Object Request Broker Architecture (CORBA) specification originally did not include any support for fault tolerance; the fault-tolerant CORBA standard was added to address this issue. One drawback of the standard is that it does not provide fault tolerance in the case of network partitioning faults. The main contribution of this paper is the design of a fault-tolerant CORBA add-on for partitionable environments. In contrast to other solutions, our modular design separates replication and reconciliation policies from the basic replication mechanisms. This modularity allows the replication and reconciliation strategies to be modified easily.
{"title":"CORBA Replication Support for Fault-Tolerance in a Partitionable Distributed System","authors":"S. Beyer, F. D. Muñoz-Escoí, Pablo Galdámez","doi":"10.1109/DEXA.2006.44","DOIUrl":"https://doi.org/10.1109/DEXA.2006.44","url":null,"abstract":"The common object request broker architecture (CORBA) specification originally did not include any support for fault-tolerance. The fault-tolerant CORBA standard was added to address this issue. One drawback of the standard is that it does not include fault-tolerance in the case of network partitioning faults. The main contribution of this paper is the design of a fault-tolerance CORBA add-on for partitionable environments. In contrast to other solutions, our modular design separates replication and reconciliation policies from the basic replication mechanisms. This modularity allows the replication and reconciliation strategies to be modified easily","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125971571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Organisations may wish to use a standards-defined distributed system in a global sense but also require non-standard local behaviour. This reflects the production of, and desire to use, organisational knowledge developed over time; the efficient and effective management of this knowledge can be a deciding factor in an organisation's success or failure. Virtual organisations, whose members share a problem-solving purpose rather than a location-based or formal organisation, have no formal bodies to manage change requests and may be restricted in how they can apply their knowledge. These organisations are also the most likely to seek divergent local behaviour, since their locale is formed by the members' desire to solve a particular problem, and this problem-based approach may lead to user requirements that exist only in that virtual organisation. We describe a method for capturing and representing operational semantics so that global and local behaviour can coexist without operational impairment in either sphere. The approach applies equally well to virtual and traditional organisations, as both can use it to integrate their local knowledge and requirements into a much larger framework and, potentially, share these with their collaborators. We illustrate our approach with reference to our implementation of an ontologically enhanced domain name system (DNS) server.
{"title":"Capturing and Using the Operational Semantics of Large Distributed Systems: Sharing Common Application Requirements in Virtual Organisations","authors":"Nickolas J. G. Falkner, P. Coddington, A. Wendelborn","doi":"10.1109/DEXA.2006.152","DOIUrl":"https://doi.org/10.1109/DEXA.2006.152","url":null,"abstract":"Organisations may wish to use a standards-defined distributed system in a global sense but also have a requirement for non-standard local behaviour. This reflects the production of, and desire to use, organisational knowledge developed over time. The efficient and effective management of this knowledge can be a deciding factor in an organisation's success or failure. Virtual organisations, where members share a problem-solving purpose rather than a location-based or formal organisation, have no formal bodies to manage change requests and may be restricted in how they can apply their knowledge. These organisations are also the most likely to seek divergent local behaviour since their locale is formed by the members' desire to solve a particular problem and this problem-based approach may lead to user requirements that exist only in that virtual organisation. We describe a method for capturing and representing operational semantics so that global and local behaviour can co-exist without leading to operational impairment in either sphere. The approach applies to virtual and traditional organisations equally well as both entities can use it to integrate their local knowledge and requirements into a much larger framework and, potentially, share these with their collaborators. We illustrate our approach with reference to our implementation of an ontologically enhanced domain name system (DNS) server","PeriodicalId":282986,"journal":{"name":"17th International Workshop on Database and Expert Systems Applications (DEXA'06)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127882261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}