A Gaussian Mixture Model for Mobile Location Prediction
Nguyen Thanh, Tu Minh Phuong
Pub Date: 2007-05-07 | DOI: 10.1109/ICACT.2007.358509
Location prediction is essential for efficient location management in mobile networks. In this paper, we propose a novel method for predicting the current location of a mobile user and describe how the method can be used to facilitate the paging process. Based on the observation that most mobile users follow regular mobility patterns, the proposed method discovers common mobility patterns from a collection of user movement logs. To do this, the method models cell-residence times as generated from a mixture of Gaussian distributions and uses the expectation-maximization (EM) algorithm to learn the model parameters. Mobility patterns, each characterized by a common trajectory and a cell-residence time model, are then used for making predictions. Simulation studies show that the proposed method outperforms two other prediction methods.
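The mixture-of-Gaussians residence-time model with EM fitting can be sketched in a few lines. The hand-rolled 1-D EM below and the synthetic residence times (clusters around 30 s and 300 s) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_gmm_em(x, k=2, iters=100):
    """Fit a 1-D Gaussian mixture to samples x with plain EM.

    Returns mixing weights, means, and variances. A minimal sketch of
    the kind of cell-residence-time model the paper describes.
    """
    n = len(x)
    w = np.full(k, 1.0 / k)                        # mixing weights
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out initial means
    var = np.full(k, np.var(x))                    # initial variances
    for _ in range(iters):
        # E-step: responsibility of component j for sample i
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibilities
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic residence times: short stays around 30 s, long stays around 300 s
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(30, 5, 200), rng.normal(300, 40, 100)])
w, mu, var = fit_gmm_em(x, k=2)
```

A prediction step would then score candidate cells by how well the elapsed residence time fits each pattern's learned mixture.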
Human Heuristics for a Team of Mobile Robots
C. Tijus, E. Zibetti, V. Besson, Nicolas Bredèche, Y. Kodratoff, Mary Felkin, Cédric Hartland
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369145
This paper is at the crossroads of cognitive psychology and AI robotics. It reports a cross-disciplinary project concerned with implementing human heuristics in autonomous mobile robots. We address the problem of relying on human-derived heuristics to endow a group of mobile robots with the ability to solve problems such as finding a target in a labyrinth. Such heuristics may provide an efficient way to explore the environment and to decompose a complex problem into subtasks for which specific heuristics are effective. We first present a set of experiments conducted with groups of humans looking for a target with limited sensing capabilities. We then describe the heuristics extracted from the observation and analysis of their behavior. Finally, we implemented these heuristics in Khepera-like autonomous mobile robots facing the same tasks. We show that the control architecture can be experimentally validated to some extent thanks to this approach.
A Survey and Classification of 3D Pointing Techniques
Nguyen-Thong Dang
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369138
This paper presents a survey and a classification of 3D pointing techniques. The survey gives a chronological view of the study of 3D pointing techniques. The classification is based on a proposed definition of the 3D cursor: existing 3D pointing techniques use either a 3D pointer-based cursor or a 3D line-based cursor. Based on recent results from studies of Fitts' law in 3D and the definition of the two cursor types, the paper discusses virtual enhancements for improving existing 3D pointing techniques and for creating and evaluating new ones that focus on decreasing the average target acquisition time.
A Proposal of Ontology-based Health Care Information Extraction System: VnHIES
T. Q. Dung, W. Kameyama
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369128
This paper presents an ontology-based health care information extraction system, VnHIES. The system relies on two algorithms, a "semantic element extraction algorithm" and a "new semantic element learning algorithm", for extracting health care semantic words and enhancing the ontology. The former extracts concepts (Cs), descriptions of concepts (Ds), concept-description pairs (C-D), and names of diseases (Ns) in the health care domain from Web pages. The extracted semantic elements are used by the latter algorithm, which renders suggestions that may contain new semantic elements for domain users to enrich the ontology with. After extraction, a "document weighting algorithm" computes summary information for each document with respect to the extracted semantic words; the result is stored in a knowledge base, containing the ontology and a database, for later use in other applications. Our experimental results are promising, showing high accuracy in semantic extraction and efficiency in ontology upgrading. VnHIES can be used in many health care information management systems, such as medical document classification and health care information retrieval. VnHIES is implemented for the Vietnamese language.
Improving Local Search for Satisfiability Problem by Integrating Structural Properties
Djamal Habet, Michel Vasquez
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369135
Our main purpose is to enhance the efficiency of local search algorithms for the satisfiability problem (SAT), in particular those of the Walksat family, by exploiting the structure of the treated instances during their resolution. The structure is described by dependencies between the variables of the problem, interpreted as additional constraints hidden in the original formulation of the SAT instance. Checking these dependencies may speed up the search and increase the robustness of incomplete methods. The extracted dependencies are implications and equivalences between variables. This is achieved in practice by a hybrid approach combining a local search algorithm with an efficient DPL procedure.
Towards Ontology-based Semantic File Systems
Ba-Hung Ngo, C. Bac, Frédérique Silber-Chaussumier, Quyet-Thang Le
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369129
Semantic file systems enhance standard file systems with the ability to search for files based on file semantics. Users interact with semantic file systems not only by browsing a hierarchy of directories but also by querying, as information retrieval systems usually do. In this paper, we argue for a new file system paradigm, the semantic file system. We identify the issues in designing a semantic file system and propose an ontology-based solution for these issues.
Forgetting data intelligently in data warehouses
Aliou Boly, G. Hébrail
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369160
The amount of data stored in data warehouses grows so quickly that warehouses can become saturated. To overcome this problem, we propose a language for specifying forgetting functions on stored data. In order to preserve the possibility of performing interesting analyses of historical data, the specifications include the definition of summaries of the deleted data. These summaries are aggregates and samples of the deleted data and are kept in the data warehouse. Once forgetting functions have been specified, the data warehouse is automatically updated to follow the specifications. This paper presents the specification language, the structure of the summaries, and the algorithms that update the data warehouse.
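The core idea, deleting old rows but retaining an aggregate plus a sample of them, can be sketched as below. The `forget` function, its `(day, value)` row layout, and the count/mean summary are hypothetical simplifications; the paper's specification language is richer:

```python
import random
from statistics import mean

def forget(rows, keep_after, sample_size=3, seed=0):
    """Apply a simple forgetting function to a list of (day, value) rows.

    Rows older than keep_after are deleted, but a summary survives in
    their place: an aggregate (count and mean) plus a small uniform
    sample of the deleted rows.
    """
    old = [r for r in rows if r[0] < keep_after]
    kept = [r for r in rows if r[0] >= keep_after]
    summary = {
        "count": len(old),
        "mean_value": mean(v for _, v in old) if old else None,
        "sample": random.Random(seed).sample(old, min(sample_size, len(old))),
    }
    return kept, summary

rows = [(d, d * 10) for d in range(1, 11)]   # ten days of data
kept, summary = forget(rows, keep_after=8)   # forget days 1-7, keep a summary
```

Analyses of the historical period then run against the summary instead of the deleted detail rows.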
Online Chasing Problems for Regular n-Gons
H. Fujiwara, K. Iwama, Kouki Yonezawa
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369133
We consider a server location problem with only one server to move. If each request must be served at an exact position, the online player has no choice and the problem is trivial. In this paper we assume that a request is given as a region and that the service can be done anywhere inside the region: for each request, an online algorithm chooses an arbitrary point in the region and moves the server there. Our main result shows that if the region is a regular n-gon, the competitive ratio of the greedy algorithm is 1/sin(π/2n) for odd n and 1/sin(π/n) for even n. In particular, for a square region the greedy algorithm turns out to be optimal.
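The quoted competitive ratios are easy to evaluate numerically; the helper below simply encodes the stated formulas (the function name is illustrative, not from the paper):

```python
import math

def greedy_ratio(n):
    """Competitive ratio of the greedy algorithm for a regular n-gon,
    per the stated result: 1/sin(pi/2n) for odd n, 1/sin(pi/n) for even n."""
    if n % 2:  # odd n
        return 1 / math.sin(math.pi / (2 * n))
    return 1 / math.sin(math.pi / n)

# Square (n=4): 1/sin(pi/4) = sqrt(2); triangle (n=3): 1/sin(pi/6) = 2
ratio_square = greedy_ratio(4)
```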
Automatic Construction of English-Vietnamese Parallel Corpus through Web Mining
V. B. Dang, Bao-Quoc Ho
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369166
Parallel corpora have become an essential resource for multilingual natural language processing, and large amounts of parallel text are available on the Internet today. In this paper, we propose a simple but reliable method for constructing an English-Vietnamese parallel corpus through Web mining. Our system automatically downloads and detects parallel Web pages in a given domain to construct a parallel corpus that is well aligned at the paragraph level, with completely clean text. The proposed technique can easily be applied to other language pairs. Experiments show promising results.
Applying Temporal Abstraction in Clinical Databases
Pham Van Chung, D. T. Anh
Pub Date: 2007-03-05 | DOI: 10.1109/RIVF.2007.369156
Temporal abstraction (TA) methods aim to extract more meaningful data from raw temporal data. Temporal abstraction is important for decision support applications in clinical domains, which consume abstract concepts, while clinical databases usually contain primitive concepts. In this paper we propose a new approach to TA over temporal clinical databases: using an inference graph, an extension of the transition graph, as the implementation technique of a knowledge-based temporal abstraction system. We also describe a system, TDM, that integrates temporal data maintenance and temporal abstraction in a single architecture. TDM allows clinicians to use SQL-like temporal queries to retrieve both raw, time-oriented data and generated summaries of those data. The TDM system has been implemented and applied to monitoring the treatment of patients with colorectal cancer.