
Latest publications from the 2007 IEEE International Conference on Research, Innovation and Vision for the Future

A Gaussian Mixture Model for Mobile Location Prediction
Nguyen Thanh, Tu Minh Phuong
Location prediction is essential for efficient location management in mobile networks. In this paper, we propose a novel method for predicting the current location of a mobile user and describe how the method can be used to facilitate the paging process. Based on the observation that most mobile users follow regular mobility patterns, the proposed method discovers common mobility patterns from a collection of user movement logs. To do this, the method models cell-residence times as generated from a mixture of Gaussian distributions and uses the expectation-maximization (EM) algorithm to learn the model parameters. Mobility patterns, each characterized by a common trajectory and a cell-residence time model, are then used for making predictions. Simulation studies show that the proposed method outperforms two other prediction methods.
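The cell-residence-time modeling step described above amounts to fitting a one-dimensional Gaussian mixture with EM. The following is a minimal sketch on synthetic data, not the authors' implementation; the component count, quantile initialization, and time units are assumptions.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """Fit a k-component 1-D Gaussian mixture to samples x with EM."""
    # Initialize means at evenly spaced quantiles, shared variance, uniform weights.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | sample i).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        n_j = r.sum(axis=0)
        w = n_j / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_j
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_j
    return w, mu, var

# Synthetic cell-residence times (minutes): brief transits vs. longer stays.
rng = np.random.default_rng(1)
times = np.concatenate([rng.normal(3.0, 0.5, 500), rng.normal(30.0, 4.0, 500)])
w, mu, var = fit_gmm_1d(times, k=2)
print(w.round(2), mu.round(1))
```

With well-separated components, EM recovers the two residence-time regimes; a real system would also learn the per-trajectory association the paper describes.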
DOI: 10.1109/ICACT.2007.358509 | Published: 2007-05-07
Cited by: 13
Human Heuristics for a Team of Mobile Robots
C. Tijus, E. Zibetti, V. Besson, Nicolas Bredèche, Y. Kodratoff, Mary Felkin, Cédric Hartland
This paper lies at the crossroads of cognitive psychology and AI robotics. It reports a cross-disciplinary project concerned with implementing human heuristics within autonomous mobile robots. We address the problem of relying on human-derived heuristics to endow a group of mobile robots with the ability to solve problems such as finding a target in a labyrinth. Such heuristics may provide an efficient way to explore the environment and to decompose a complex problem into subtasks for which specific heuristics are effective. We first present a set of experiments conducted with groups of humans searching for a target under limited sensing capabilities. We then describe the heuristics extracted from observation and analysis of their behavior. Finally, we implemented these heuristics within Khepera-like autonomous mobile robots facing the same tasks. We show that the control architecture can be experimentally validated to some extent thanks to this approach.
DOI: 10.1109/RIVF.2007.369145 | Published: 2007-03-05
Cited by: 4
A Survey and Classification of 3D Pointing Techniques
Nguyen-Thong Dang
This paper introduces a survey and a classification of 3D pointing techniques. The survey presents a chronological view of the study of 3D pointing techniques. The classification is based on a proposed definition of the 3D cursor. The paper shows that existing 3D pointing techniques use either a 3D pointer-based cursor or a 3D line-based cursor. Based on recent results from studies of Fitts' law in 3D and the definition of the two types of 3D cursor, the paper discusses virtual enhancements for improving existing 3D pointing techniques and for creating and evaluating new techniques that focus on decreasing the average target acquisition time.
DOI: 10.1109/RIVF.2007.369138 | Published: 2007-03-05
Cited by: 28
A Proposal of Ontology-based Health Care Information Extraction System: VnHIES
T. Q. Dung, W. Kameyama
This paper presents VnHIES, an ontology-based health care information extraction system. In the system, we develop and use two effective algorithms, a "semantic elements extracting algorithm" and a "new semantic elements learning algorithm", for extracting health care semantic words and enhancing the ontology. The former extracts concepts (Cs), descriptions of concepts (Ds), concept-description pairs (C-D), and names of diseases (Ns) in the health care domain from Web pages. The extracted semantic elements are then used by the latter algorithm, which renders suggestions that may contain new semantic elements for domain users to enrich the ontology. After extracting semantic elements, a "document weighting algorithm" is applied to obtain summary information about a document with respect to all extracted semantic words; the result is stored in a knowledge base, comprising the ontology and a database, for later use in other applications. Our experimental results are promising, showing high accuracy in semantic extraction and efficiency in ontology upgrading. VnHIES can be used in many health care information management systems, such as medical document classification and health care information retrieval. VnHIES is implemented for the Vietnamese language.
DOI: 10.1109/RIVF.2007.369128 | Published: 2007-03-05
Cited by: 31
Improving Local Search for Satisfiability Problem by Integrating Structural Properties
Djamal Habet, Michel Vasquez
Our main purpose is to enhance the efficiency of local search algorithms (from the Walksat family) for the satisfiability problem (SAT) by including the structure of the treated instances in their resolution. The structure is described by the dependencies between the variables of the problem, interpreted as additional constraints hidden in the original formulation of the SAT instance. Checking these dependencies may speed up the search and increase the robustness of incomplete methods. The extracted dependencies are implications and equivalences between variables. This purpose is effectively achieved by a hybrid approach combining a local search algorithm with an efficient DPL procedure.
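As an illustration of the kind of structural dependency involved: a variable equivalence x ↔ y is implied whenever a CNF formula contains both binary clauses (¬x ∨ y), i.e. x → y, and (x ∨ ¬y), i.e. y → x. The sketch below detects such pairs; it is a toy under the standard DIMACS encoding, not the paper's extraction procedure.

```python
def find_equivalences(clauses):
    """Detect variable equivalences x <-> y implied by pairs of binary clauses.

    Clauses are tuples of nonzero ints (DIMACS style: -3 means "not x3").
    (-x, y) encodes x -> y; together with (x, -y), i.e. y -> x, it gives x <-> y.
    """
    binaries = {frozenset(c) for c in clauses if len(c) == 2}
    equivs = set()
    for c in binaries:
        a, b = tuple(c)
        # The mirrored clause (-a, -b) closes the implication cycle.
        if frozenset((-a, -b)) in binaries:
            x, y = abs(a), abs(b)
            if x != y:
                equivs.add((min(x, y), max(x, y)))
    return equivs

# (-1, 2) and (1, -2) force x1 <-> x2; clause (3, 4) alone implies nothing.
cnf = [(-1, 2), (1, -2), (3, 4), (1, 2, 3)]
print(find_equivalences(cnf))
```

A solver exploiting such equivalences can substitute one variable for the other, shrinking the search space the local search explores.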
DOI: 10.1109/RIVF.2007.369135 | Published: 2007-03-05
Cited by: 2
Towards Ontology-based Semantic File Systems
Ba-Hung Ngo, C. Bac, Frédérique Silber-Chaussumier, Quyet-Thang Le
Semantic file systems enhance standard file systems with the ability to search for files based on file semantics. Users interact with a semantic file system not only by browsing a hierarchy of directories but also by querying, as information retrieval systems usually allow. In this paper, we argue for a new file system paradigm, the semantic file system. We identify the issues in designing a semantic file system and propose an ontology-based solution for these issues.
DOI: 10.1109/RIVF.2007.369129 | Published: 2007-03-05
Cited by: 8
Forgetting data intelligently in data warehouses
Aliou Boly, G. Hébrail
The amount of data stored in data warehouses grows very quickly, so warehouses can become saturated. To overcome this problem, we propose a language for specifying forgetting functions on stored data. In order to preserve the possibility of performing interesting analyses of historical data, the specifications include the definition of summaries of deleted data. These summaries are aggregates and samples of the deleted data and are kept in the data warehouse. Once forgetting functions have been specified, the data warehouse is updated automatically to follow the specifications. This paper presents the specification language, the structure of the summaries, and the algorithms used to update the data warehouse.
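A forgetting function of this kind might, for example, keep raw rows for a recent window and replace older rows with an aggregate summary plus a small sample. The policy below is a hypothetical sketch; the paper's actual specification language and summary structures are not reproduced here.

```python
import random
from statistics import mean

def apply_forgetting(rows, cutoff, sample_size=3, seed=7):
    """Keep raw rows at or after `cutoff`; summarize and sample older rows.

    Each row is a (timestamp, value) pair. Returns (raw, summary, sample).
    """
    recent = [r for r in rows if r[0] >= cutoff]
    old = [r for r in rows if r[0] < cutoff]
    # Aggregate summary of the forgotten rows, kept in place of the raw data.
    summary = {
        "count": len(old),
        "sum": sum(v for _, v in old),
        "mean": mean(v for _, v in old) if old else None,
    }
    # A small reproducible sample of forgotten rows for later analyses.
    sample = random.Random(seed).sample(old, min(sample_size, len(old)))
    return recent, summary, sample

rows = [(t, t * 10) for t in range(10)]  # timestamps 0..9
recent, summary, sample = apply_forgetting(rows, cutoff=7)
print(len(recent), summary["count"], summary["mean"])
```

The aggregate-plus-sample shape mirrors the abstract's point that summaries of deleted data stay queryable after the raw rows are gone.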
DOI: 10.1109/RIVF.2007.369160 | Published: 2007-03-05
Cited by: 10
Online Chasing Problems for Regular n-Gons
H. Fujiwara, K. Iwama, Kouki Yonezawa
We consider a server location problem with only one server to move. If each request must be served at its exact position, the online player has no choice and the problem is trivial. In this paper we assume that a request is given as a region and that the service can be done anywhere inside the region. Namely, for each request an online algorithm chooses an arbitrary point in the region and moves the server there. Our main result shows that if the region is a regular n-gon, the competitive ratio of the greedy algorithm is 1/sin(π/2n) for odd n and 1/sin(π/n) for even n. In particular, for a square region the greedy algorithm turns out to be optimal.
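Reading the ratios as 1/sin(π/(2n)) for odd n and 1/sin(π/n) for even n (the plain-text formula is ambiguous about parenthesization, so this reading is an assumption), the values can be computed directly; note the square gives 1/sin(π/4) = √2.

```python
import math

def greedy_competitive_ratio(n):
    """Competitive ratio of the greedy algorithm for a regular n-gon region."""
    angle = math.pi / (2 * n) if n % 2 == 1 else math.pi / n
    return 1.0 / math.sin(angle)

# For the square (n = 4) the ratio is 1/sin(pi/4) = sqrt(2).
for n in (3, 4, 5, 6):
    print(n, round(greedy_competitive_ratio(n), 4))
```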
DOI: 10.1109/RIVF.2007.369133 | Published: 2007-03-05
Cited by: 0
Automatic Construction of English-Vietnamese Parallel Corpus through Web Mining
V. B. Dang, Bao-Quoc Ho
Parallel corpora have become an essential resource for multilingual natural language processing, and large amounts of parallel text are available on the Internet these days. In this paper, we propose a simple but reliable method to construct an English-Vietnamese parallel corpus through Web mining. Our system can automatically download and detect parallel Web pages on a given domain to construct a parallel corpus that is well aligned at the paragraph level with completely clean text. The proposed technique can easily be applied to other language pairs. Experiments have been conducted and show promising results.
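Web-mining systems of this kind commonly pair candidate pages whose URLs differ only in a language tag and then filter aligned paragraphs with a crude length-ratio test. The abstract does not detail the paper's criteria, so the URL patterns and thresholds below are assumptions, not the authors' method.

```python
def pair_by_url(urls, src_tag="/en/", tgt_tag="/vi/"):
    """Pair pages whose URLs differ only in a language tag, e.g. /en/ vs /vi/."""
    sources = {u.replace(src_tag, "{LANG}"): u for u in urls if src_tag in u}
    targets = {u.replace(tgt_tag, "{LANG}"): u for u in urls if tgt_tag in u}
    return [(sources[k], targets[k]) for k in sources if k in targets]

def plausibly_parallel(pars_src, pars_tgt, low=0.5, high=2.0):
    """Keep 1-1 paragraph pairs whose character-length ratio looks sane."""
    return [(a, b) for a, b in zip(pars_src, pars_tgt)
            if low <= len(a) / max(len(b), 1) <= high]

urls = ["http://site/en/news/1.html", "http://site/vi/news/1.html",
        "http://site/en/about.html"]
print(pair_by_url(urls))
```

Real systems add content-based checks (HTML structure similarity, bilingual dictionaries) on top of such URL matching.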
DOI: 10.1109/RIVF.2007.369166 | Published: 2007-03-05
Cited by: 14
Applying Temporal Abstraction in Clinical Databases
Pham Van Chung, D. T. Anh
Temporal abstraction (TA) methods aim to extract more meaningful data from raw temporal data. Temporal abstraction is important for decision-support applications in clinical domains, which consume abstract concepts, while clinical databases usually contain primitive concepts. In this paper we propose a new approach to TA over temporal clinical databases: using the inference graph, an extension of the transition graph, as an implementation technique for a knowledge-based temporal abstraction system. We also describe a system, TDM, that integrates temporal data maintenance and temporal abstraction in a single architecture. TDM allows clinicians to use SQL-like temporal queries to retrieve both raw, time-oriented data and generated summaries of those data. The TDM system has been implemented and applied to monitoring the treatment of patients with colorectal cancer.
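A basic TA step turns raw timestamped measurements into qualitative state intervals. The sketch below illustrates generic state abstraction, not the paper's inference-graph technique; the thresholds and state labels are invented.

```python
def to_state(value, low=4.0, high=7.0):
    """Map a raw measurement to a qualitative state (hypothetical thresholds)."""
    if value < low:
        return "LOW"
    return "HIGH" if value > high else "NORMAL"

def abstract_states(readings):
    """Merge consecutive readings with the same state into (state, start, end)."""
    intervals = []
    for t, v in readings:
        s = to_state(v)
        if intervals and intervals[-1][0] == s:
            intervals[-1] = (s, intervals[-1][1], t)  # extend current interval
        else:
            intervals.append((s, t, t))
    return intervals

# Timestamped glucose-like readings: normal, then high, then back to normal.
readings = [(0, 5.0), (1, 5.5), (2, 8.1), (3, 9.0), (4, 6.0)]
print(abstract_states(readings))
```

The resulting intervals are the kind of abstract, time-oriented concepts a clinician could then query alongside the raw data.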
DOI: 10.1109/RIVF.2007.369156 | Published: 2007-03-05
Cited by: 3