Lightweight linked data
Erik Wilde, Yiming Liu
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583029
Much of the Web’s success rests with its role in enabling information reuse and integration across various boundaries. Hyperlinked Web resources represent a rich information tapestry of content and context, instrumental in effective knowledge sharing and further knowledge development. However, the Web’s simple linking model has become increasingly inadequate for effective content discovery and reuse. At the same time, rigorous but heavyweight solutions such as the Semantic Web have yet to garner critical mass in adoption. This paper analyzes the relative strengths and shortcomings of existing linked data approaches. It proposes a novel, lightweight architecture for the modeling, aggregation, retrieval, management, and sharing of contextual information for Web resources, based on established standards and designed to encourage more efficient and robust information reuse on the Web.
Context modeling for knowledge flow
Jianmei Guo, Yinglin Wang
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583052
Knowledge flow exists in almost every collaborative teamwork environment and now attracts much attention in the knowledge management field. Because knowledge flow often takes place in a specific context, context should be considered when modeling it. However, previous models of knowledge flow lack in-depth treatment of context modeling, which leaves them insufficient for real applications. This paper puts forward a conceptual framework for context-based knowledge flow, in which context is viewed as an indispensable element of knowledge flow, governing how knowledge items are created, transformed, propagated, and applied. A TPK context model is then proposed, based on three spaces: a task space, a process space, and a knowledge space. Multi-dimensional structures for the three spaces are suggested. The proposed model reflects the main features of the context of knowledge flow. Based on the model, context-aware knowledge reuse flow is detailed through a case study in the product design domain. A system implementing the model has also been developed and applied in enterprises.
Injecting formulized delegation constraints into secure workflow systems
Li Zhao, Qing Li, Guoqing Wu
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583059
In a secure workflow system, it is imperative to specify which users (or roles) can be authorized to execute which specific tasks. Users may also be able to delegate their rights to execute a task to others. A considerable body of research on delegation has been carried out. In this paper, we provide a new formal method for representing delegation constraints in secure workflow systems, adding a trust relationship and propagation parameters to make delegation propagation more effective. We also analyze consistency after the delegation constraints are injected into secure workflow systems.
Fuzzy querying of nested XML
Allan Chan, Nancy Situ, K. Wong, K. Kianmehr, R. Alhajj
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583036
This paper describes the design and implementation of a fuzzy nested querying system for XML databases. The research steps involved are outlined and examined. We integrated aspects of fuzziness, web, and database technology into a prototype that demonstrates fuzzy nested querying of databases. The prototype provides an easy-to-use graphical interface that allows the user to apply fuzziness to their XML searches. The goal is to provide insight into creating more intuitive ways of searching and using XML databases, especially for naive users.
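The abstract above does not detail the authors' implementation, but the general idea of fuzzy querying can be sketched: a linguistic term such as "cheap" becomes a membership function, and XML elements are ranked by their degree of membership rather than filtered by a crisp predicate. The XML data, the term "cheap", and the trapezoidal shape below are all illustrative assumptions, not the paper's system.

```python
# Illustrative sketch of fuzzy querying over XML (not the paper's system):
# rank elements by membership degree instead of crisp true/false filtering.
import xml.etree.ElementTree as ET

XML = """<catalog>
  <book><title>A</title><price>8</price></book>
  <book><title>B</title><price>25</price></book>
  <book><title>C</title><price>60</price></book>
</catalog>"""

def cheap(price, full=10.0, zero=50.0):
    """Membership for the fuzzy term 'cheap': 1.0 up to `full`,
    falling linearly to 0.0 at `zero`."""
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

def fuzzy_query(xml_text, threshold=0.2):
    """Return (title, membership) pairs above `threshold`, best first."""
    root = ET.fromstring(xml_text)
    hits = []
    for book in root.iter("book"):
        mu = cheap(float(book.findtext("price")))
        if mu >= threshold:
            hits.append((book.findtext("title"), mu))
    return sorted(hits, key=lambda t: -t[1])

print(fuzzy_query(XML))  # books ranked by degree of 'cheapness'
```

A graphical interface like the one described would let naive users adjust the membership shape and threshold instead of writing the query by hand.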
Biomedical data correlation and reuse in analyzing the efficiency of rehabilitation treatment
D. Carstoiu, A. Cernian, A. Olteanu
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583017
The aim of the project described in this paper was to set up a complex database (medical, imaging, biomechanical) and to develop conceptual models for interpreting the available data, with direct applicability in choosing and evaluating treatment. The database contains heterogeneous, multidisciplinary data provided by various investigations. Research based on corroborating clinical data, specific to each specialization involved in the functional rehabilitation of patients with orthopaedic or neuromuscular pathology, has been successfully combined with motion-analysis software to create a complex acquisition and data-processing system with direct applicability to human motility analysis.
Social network mashup: Ontology-based social network integration for statistic learning
Chunying Zhou, Huajun Chen, Tong Yu
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583020
The proliferation of online social websites has produced large volumes of real-world data capturing social networks in diverse application domains. However, these social networks are typically isolated from one another, creating "data islands" that impede complex analyses requiring comprehensive data spread across several networks. In this paper, we first present a social network mashup approach that uses Semantic Web technology to integrate heterogeneous social networks while preserving their rich semantics. Secondly, we propose a statistical learning approach that learns a Probabilistic Semantic Model (PSM) from the semantic structures of social networks. This framework can exploit the accumulated and integrated data without losing semantics. Lastly, our approach is evaluated in a real-life application that combines LinkedIn and DBLP to predict collaborative colleague relations.
Knowledge extraction from web-based application source code: An approach to database reverse engineering for ontology development
Shuxin Zhao, E. Chang, T. Dillon
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583022
This paper presents a novel approach for extracting knowledge from web-based application source code to supplement and assist ontology development from database schemas. The structure of web-based application source code is defined in order to distinguish the different kinds of knowledge within the source code that are relevant to ontology development. The connections between the relevant parts of web application source code and the backend database schema, in their various forms, are explicitly specified in detail. A knowledge processing and integration model for extracting and integrating the knowledge embedded in the source code for ontology development is then proposed.
VCI predictors: Voting on classifications from imputed learning sets
Xiaoyuan Su, T. Khoshgoftaar, Xingquan Zhu
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583046
We propose VCI (voting on classifications from imputed learning sets) predictors, which generate multiple incomplete learning sets from a complete dataset by randomly deleting values with a small MCAR (missing completely at random) missing ratio, and then apply an imputation technique to fill in the missing values before giving the imputed data to a machine learner. The final prediction of a class is the result of voting on the classifications from the imputed learning sets. Our empirical results show that VCI predictors significantly improve the classification performance on complete data, and perform better than Bagging predictors on binary class data.
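The VCI pipeline the abstract describes — MCAR deletion, imputation, one learner per imputed copy, majority vote — can be sketched as follows. The mean imputation, the nearest-centroid learner, and the toy data are our stand-in assumptions; the paper does not fix a particular imputer or base learner here.

```python
# Sketch of the VCI idea (details are assumptions, not the authors' setup):
# copy a complete training set several times, delete a small fraction of
# values completely at random (MCAR), mean-impute each copy, train one
# learner per copy, and vote on the final class.
import random
from collections import Counter

def mcar_delete(X, ratio, rng):
    """Delete each feature value independently with probability `ratio`."""
    return [[None if rng.random() < ratio else v for v in row] for row in X]

def mean_impute(X):
    """Replace each missing value with its column mean over observed values."""
    cols = list(zip(*X))
    means = [sum(v for v in c if v is not None) /
             max(1, sum(v is not None for v in c)) for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in X]

def centroid_fit(X, y):
    """Nearest-centroid learner: one mean vector per class label."""
    cent = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cent[label] = [sum(c) / len(rows) for c in zip(*rows)]
    return cent

def centroid_predict(cent, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cent, key=lambda lab: dist(cent[lab], x))

def vci_predict(X, y, x_new, n_sets=5, ratio=0.1, seed=0):
    """Majority vote over learners trained on imputed MCAR-deleted copies."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_sets):
        Xi = mean_impute(mcar_delete(X, ratio, rng))
        votes.append(centroid_predict(centroid_fit(Xi, y), x_new))
    return Counter(votes).most_common(1)[0][0]

X = [[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.9, 1.1]]
y = ["a", "a", "b", "b"]
print(vci_predict(X, y, [0.1, 0.1]))  # expected: a
```

The contrast with Bagging noted in the abstract: Bagging perturbs the learning sets by resampling rows, while VCI perturbs them by deleting and re-imputing individual values.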
An intelligent agent for the game of Age of Mythology: the Titans
Jin Park, Du Zhang, M. Lu
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583011
In this paper, we describe an intelligent agent for the game of Age of Mythology: the Titans. By implementing known economic theories developed by some of the world’s best players, the agent achieves superior rushing performance compared with the original game engine. Specifically, the agent looks to reduce unspent resources and tailor the gathering of resources for military unit production, in order to launch the first attack before the opponent does and to outnumber the opponent’s army when the attack arrives. The increased efficiency in military production has led to better protection of resource gatherers, buildings, and expanded territory, and has pinned the opponent to its own base. In our experiments, the overall win-loss record for the agent is 35 wins and 10 losses. Though our focus in this study is on the Rush tactic, the approach we adopt can be applied to other aspects of the game.
A General Purpose Software Evaluation System
B. Far, Vani Mudigonda, A. Elamy
Pub Date: 2008-07-13  DOI: 10.1109/IRI.2008.4583045
In today’s market, there are many alternatives available when a customer wants to purchase a product or adopt a software system that meets the customer’s requirements. The General Purpose Software Evaluation (GPSE) system applies state-of-the-art statistical methods, based on a Multidimensional Weighted Attribute Framework (MWAF), to the evaluation of the available alternatives. Using the GPSE system, the user can follow the MWAF process and design the architecture that best describes the given evaluation problem. The architectural elements of MWAF center on a survey questionnaire that gathers information from multiple domain experts. The GPSE system then applies Analysis of Variance (ANOVA) and Tukey’s pairwise comparison tests to the collected data to select the best-suited alternative for the given problem. The GPSE system has been fully implemented and successfully tested on several projects, including the evaluation of multi-agent development methodologies and the selection of COTS products.
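The statistical core the abstract names can be illustrated with a one-way ANOVA over expert ratings of competing alternatives; a large F statistic indicates the alternatives genuinely differ, after which Tukey's pairwise comparisons (omitted here) would identify which ones. The expert scores below are invented for illustration and are not GPSE data.

```python
# Minimal sketch of one-way ANOVA across alternatives, as mentioned in the
# abstract. Scores are hypothetical; the real GPSE system follows this with
# Tukey's pairwise comparison tests.

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over score groups."""
    k = len(groups)                              # number of alternatives
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    # between-group sum of squares (variation among alternative means)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (variation among experts per alternative)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ratings of three alternatives by five experts each
scores = {
    "alt_A": [7.0, 8.0, 7.5, 8.5, 7.0],
    "alt_B": [6.0, 5.5, 6.5, 6.0, 5.0],
    "alt_C": [7.5, 8.0, 8.5, 9.0, 8.0],
}
F = one_way_anova(list(scores.values()))
print(round(F, 2))  # → 21.77, well above typical critical values
```

With F this large, the evaluator would reject the hypothesis that all alternatives score equally and proceed to pairwise comparisons to rank them.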