Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583074
Omar Gonzalez-Padilla, F. Corchado, H. Unger
RFID systems generate large volumes of data about the location of people and objects; this information is filtered by a middleware layer and sent to upper-level applications so they can detect events happening in the environment and react appropriately. This approach has two drawbacks: first, developers invest time programming how to analyze the data; second, network resources can be wasted unnecessarily when the middleware sends data that is irrelevant to the application. To overcome these drawbacks, we present an approach in which applications define composite events of interest through an XML-based language, and the filtered information is analyzed by a new layer that notifies applications only when interesting events occur. We present our language, RFID-CEDL, for defining interesting events using RFID data, and we describe the mechanism used to recognize such events. As a demonstration of our approach, we present examples for a hospital environment.
Title: RFID composite event definition and detection (2008 IEEE International Conference on Information Reuse and Integration)
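The abstract does not give RFID-CEDL's actual syntax or detection mechanism, but the idea of a composite event over RFID readings can be sketched. The snippet below is purely illustrative (the `Reading` fields, the hospital-style tag and reader names, and the "seen at A, then at B within T seconds" pattern are all assumptions, not the paper's design):

```python
from dataclasses import dataclass

@dataclass
class Reading:
    tag: str      # RFID tag id
    reader: str   # antenna / location id
    time: float   # seconds since some epoch

def detect_sequence(readings, tag, first, then, within):
    """Return True if `tag` is seen at reader `first` and later at
    reader `then` no more than `within` seconds apart -- a composite
    event such as 'patient left room A and entered room B'."""
    t_first = None
    for r in sorted(readings, key=lambda r: r.time):
        if r.tag != tag:
            continue
        if r.reader == first:
            t_first = r.time  # remember the latest sighting at `first`
        elif r.reader == then and t_first is not None:
            if r.time - t_first <= within:
                return True
    return False
```

For example, `detect_sequence([Reading("patient-42", "roomA", 0.0), Reading("patient-42", "roomB", 5.0)], "patient-42", "roomA", "roomB", 10.0)` fires, while a 2-second window does not.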
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583052
Jianmei Guo, Yinglin Wang
Knowledge flow exists in almost every collaborative teamwork environment and now attracts much attention in the knowledge management field. Since knowledge flow often takes place in a specific context, context should be considered when modeling knowledge flow. However, previous models of knowledge flow lack deep studies of context modeling, which makes them insufficient for real applications. This paper puts forward a conceptual framework of context-based knowledge flow, in which context is viewed as an indispensable element of knowledge flow, concerning how knowledge items are created, transformed, propagated and applied. A TPK context model is then proposed, based on three spaces: the task space, the process space and the knowledge space. Multi-dimensional structures for the three spaces are suggested. The proposed model reflects the main features of the context of knowledge flow. Based on the model, context-aware knowledge reuse flow is detailed through a case study in the product design domain. A system implementing the model has also been developed and applied in enterprises.
Title: Context modeling for knowledge flow
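The abstract names the three TPK spaces but not their dimensions. A minimal data-structure sketch of the idea, with entirely hypothetical dimension names, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A knowledge-flow context as a point in three spaces. The keys
    used inside each space here are illustrative only; the paper's
    actual multi-dimensional structures are not given in the abstract."""
    task: dict       # task space, e.g. {"goal": ..., "deadline": ...}
    process: dict    # process space, e.g. {"stage": ..., "role": ...}
    knowledge: dict  # knowledge space, e.g. {"domain": ..., "type": ...}

def matches(ctx: Context, query: dict) -> bool:
    """True if every queried dimension agrees with the context --
    the kind of test a context-aware knowledge reuse step might make."""
    merged = {**ctx.task, **ctx.process, **ctx.knowledge}
    return all(merged.get(k) == v for k, v in query.items())
```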
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583029
Erik Wilde, Yiming Liu
Much of the Web’s success rests with its role in enabling information reuse and integration across various boundaries. Hyperlinked Web resources represent a rich information tapestry of content and context, instrumental in effective knowledge sharing and further knowledge development. However, the Web’s simple linking model has become increasingly inadequate for effective content discovery and reuse. At the same time, rigorous but heavyweight solutions such as the Semantic Web have yet to garner critical mass in adoption. This paper analyzes the relative strengths and shortcomings of existing linked data approaches. It proposes a novel, lightweight architecture for the modeling, aggregation, retrieval, management, and sharing of contextual information for Web resources, based on established standards and designed to encourage more efficient and robust information reuse on the Web.
Title: Lightweight linked data
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583059
Li Zhao, Qing Li, Guoqing Wu
In a secure workflow system, it is imperative to specify which users (or roles) are authorized to execute which tasks. Users may also be able to delegate their right to execute a task to others. Much research has been carried out in the area of delegation. In this paper, we provide a new formal method for representing delegation constraints in secure workflow systems, adding a trust relationship and propagation parameters to make delegation propagation more effective. We also analyze consistency after the delegation constraints are injected into secure workflow systems.
Title: Injecting formulized delegation constraints into secure workflow systems
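The paper's formal method is not spelled out in the abstract, but the two ingredients it names (a trust relationship and propagation parameters) suggest a check of roughly this shape. All names, thresholds, and the pairwise trust encoding below are hypothetical:

```python
def may_delegate(trust, delegator, delegatee, depth,
                 min_trust=0.5, max_depth=2):
    """Illustrative delegation check: allow delegation only if the
    delegator trusts the delegatee at or above `min_trust`, and the
    delegation chain has not already reached `max_depth` hops
    (the propagation parameter).

    `trust` maps (delegator, delegatee) pairs to a score in [0, 1]."""
    if depth >= max_depth:
        return False  # propagation limit exhausted
    return trust.get((delegator, delegatee), 0.0) >= min_trust
```

A real system would also have to re-check workflow-level constraints (e.g. separation of duty) for consistency after each delegation, which is the analysis the paper refers to.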
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583022
Shuxin Zhao, E. Chang, T. Dillon
This paper presents a novel approach for extracting knowledge from web-based application source code to supplement and assist ontology development from database schemas. The structure of web-based application source code is defined in order to distinguish the different kinds of knowledge within the source code that are relevant to ontology development. The connections between the relevant parts of web application source code and the backend database schema, in their various forms, are explicitly specified in detail. A knowledge processing and integration model for extracting and integrating the knowledge embedded in the source code for ontology development is then proposed.
Title: Knowledge extraction from web-based application source code: An approach to database reverse engineering for ontology development
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583036
Allan Chan, Nancy Situ, K. Wong, K. Kianmehr, R. Alhajj
This paper describes the design and implementation of a fuzzy nested querying system for XML databases. The research steps involved are outlined and examined. We integrated different aspects of fuzziness, web and database technology into a prototype that covers the intended scope of a demonstration of fuzzy nested querying of databases. The prototype provides an easy-to-use graphical interface that allows users to apply fuzziness to their XML searches. The goal is to provide insight into creating more intuitive ways of searching and using XML databases, especially for naive users.
Title: Fuzzy querying of nested XML
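The abstract does not describe the prototype's query language, but the core idea of a fuzzy XML query can be sketched with the standard library: attach a membership function to a vague term (here a hypothetical "cheap" with made-up breakpoints) and rank matching elements by membership. The XML shape and all names are assumptions for the demo:

```python
import xml.etree.ElementTree as ET

def cheap(price, full=50.0, zero=100.0):
    """Trapezoidal membership for the fuzzy term 'cheap' (illustrative
    breakpoints): 1.0 up to `full`, falling linearly to 0.0 at `zero`."""
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

def fuzzy_select(xml_text, threshold=0.5):
    """Return (name, membership) pairs for <item> elements whose
    <price> is 'cheap' to at least `threshold`, best match first.
    Assumes every <item> carries a <name> and a numeric <price>."""
    root = ET.fromstring(xml_text)
    hits = []
    for item in root.iter("item"):
        mu = cheap(float(item.findtext("price")))
        if mu >= threshold:
            hits.append((item.findtext("name"), mu))
    return sorted(hits, key=lambda h: -h[1])
```

Against a catalog with prices 40, 75 and 120, this keeps the first two items with memberships 1.0 and 0.5 and drops the third, which is the kind of graded answer a crisp `price < 50` predicate cannot give.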
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583020
Chunying Zhou, Huajun Chen, Tong Yu
The proliferation of online social websites has resulted in the accumulation of a large volume of real-world data capturing social networks in diverse application domains. However, these social networks are usually separated from each other, creating isolated islands of data that impede complex analyses requiring comprehensive data stored across several social networks. In this paper, we first present a social network mashup approach that uses Semantic Web technology to integrate heterogeneous social networks while retaining richer semantics. Secondly, we propose a statistical learning approach that learns a Probabilistic Semantic Model (PSM) from the semantic structures of social networks. This framework can utilize the accumulated and integrated data without losing semantics. Lastly, our approach is evaluated on a real-life application that combines LinkedIn and DBLP to predict collaborative colleague relations.
Title: Social network mashup: Ontology-based social network integration for statistic learning
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583046
Xiaoyuan Su, T. Khoshgoftaar, Xingquan Zhu
We propose VCI (voting on classifications from imputed learning sets) predictors, which generate multiple incomplete learning sets from a complete dataset by randomly deleting values with a small MCAR (missing completely at random) missing ratio, and then apply an imputation technique to fill in the missing values before giving the imputed data to a machine learner. The final prediction of a class is the result of voting on the classifications from the imputed learning sets. Our empirical results show that VCI predictors significantly improve the classification performance on complete data, and perform better than Bagging predictors on binary class data.
Title: VCI predictors: Voting on classifications from imputed learning sets
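The VCI pipeline the abstract describes (MCAR deletion, imputation, base learner, vote) is concrete enough to sketch end to end. The sketch below uses mean imputation and a nearest-centroid base learner purely as stand-ins; the paper's actual imputation technique and machine learner are not specified in the abstract:

```python
import random
from collections import Counter

def mcar_delete(rows, ratio, rng):
    """Copy `rows`, deleting each value independently with probability
    `ratio` (missing completely at random)."""
    return [[None if rng.random() < ratio else v for v in row] for row in rows]

def mean_impute(rows):
    """Fill each missing value with its column mean (stand-in imputer)."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             max(1, sum(v is not None for v in c)) for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

def centroid_classify(train_x, train_y, x):
    """Toy base learner: assign `x` to the nearest class centroid."""
    cents = {}
    for label in set(train_y):
        pts = [r for r, y in zip(train_x, train_y) if y == label]
        cents[label] = [sum(c) / len(c) for c in zip(*pts)]
    return min(cents, key=lambda l: sum((a - b) ** 2 for a, b in zip(cents[l], x)))

def vci_predict(train_x, train_y, x, n_sets=5, ratio=0.1, seed=0):
    """VCI: build `n_sets` MCAR-degraded copies of the learning set,
    impute each, train the base learner on each, and vote on `x`."""
    rng = random.Random(seed)
    votes = [centroid_classify(mean_impute(mcar_delete(train_x, ratio, rng)),
                               train_y, x) for _ in range(n_sets)]
    return Counter(votes).most_common(1)[0][0]
```

Each imputed copy perturbs the learning set slightly, so the vote plays the same variance-reducing role that resampling plays in Bagging, which is presumably why the abstract compares the two.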
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583031
Shih-Hung Wu, Yu-Te Li
Transliteration of new named entities is important for information retrieval that crosses two or more languages. Rule-based machine transliteration is not satisfactory, since different information sources follow different transliteration standards. To build a statistical machine transliteration module, researchers have to curate a transliteration corpus for any given pair of languages of interest. Since a large number of transliteration/translation pairs can be collected from the Web, a large transliteration-training corpus can be curated from these pairs. In this paper, we propose a bi-directional approach to classifying transliteration/translation pairs. Our approach combines forward transliteration and backward transliteration to distinguish transliterations from translations. An experiment on English and Chinese transliteration is conducted.
Title: Curate a transliteration corpus from transliteration/translation pairs
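The bi-directional idea can be illustrated with a toy phonetic-similarity stand-in: a transliteration pair should score high in both directions, while a translation pair (semantically, not phonetically, related) scores low. The tiny romanization table, the string-similarity scorer, and the threshold below are all demo assumptions; the paper trains real forward and backward transliteration models instead:

```python
from difflib import SequenceMatcher

# Toy romanization table for the demo only; a real system would use a
# trained transliteration model in each direction, not a lookup table.
PINYIN = {"奥": "ao", "巴": "ba", "马": "ma", "苹": "ping", "果": "guo"}

def romanize(zh):
    return "".join(PINYIN.get(ch, "?") for ch in zh)

def forward_score(en, zh):
    """English -> Chinese direction: compare `en` with romanized `zh`."""
    return SequenceMatcher(None, en.lower(), romanize(zh)).ratio()

def backward_score(en, zh):
    """Chinese -> English direction. Symmetric in this toy scorer, but a
    real system would use a separately trained back-transliteration model,
    which is what makes the bi-directional combination informative."""
    return SequenceMatcher(None, romanize(zh), en.lower()).ratio()

def is_transliteration(en, zh, threshold=0.6):
    """Combine both directions (geometric mean) and threshold."""
    score = (forward_score(en, zh) * backward_score(en, zh)) ** 0.5
    return score >= threshold
```

"Obama" / "奥巴马" (romanized "aobama") scores high and is kept as a transliteration; "apple" / "苹果" (romanized "pingguo") shares almost no phonetic material and is rejected as a translation.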
Pub Date: 2008-07-13 | DOI: 10.1109/IRI.2008.4583045
B. Far, Vani Mudigonda, A. Elamy
In today's market there are several alternatives available when a customer wants to purchase a product or adopt a software system that meets their requirements. The General Purpose Software Evaluation (GPSE) system adopts state-of-the-art statistical methods based on the Multidimensional Weighted Attribute Framework (MWAF) for evaluating the available alternatives. Using the GPSE system, the user can follow the MWAF process and design the architecture that best describes the given evaluation problem. The architectural elements of MWAF essentially focus on a survey questionnaire that gathers information from multiple domain experts. The GPSE system then applies Analysis of Variance (ANOVA) and Tukey's pairwise comparison tests to the collected data to select the best-suited alternative for the given problem. The GPSE system has been fully implemented and successfully tested on several projects, including evaluation of multi-agent development methodologies and selection of COTS products.
Title: A General Purpose Software Evaluation System
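The statistical core the abstract names is standard one-way ANOVA over expert ratings, one group per alternative; the follow-up Tukey test then locates which pairs differ. A minimal sketch of the F statistic (the framing of ratings-per-alternative is an assumption about how GPSE organizes its questionnaire data):

```python
def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square. `groups` is a list of rating lists,
    one list per alternative being evaluated."""
    k = len(groups)                       # number of alternatives
    n = sum(len(g) for g in groups)       # total number of ratings
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that at least one alternative's mean rating differs; identical group means give F = 0.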