
2013 IEEE 25th International Conference on Tools with Artificial Intelligence: Latest Publications

Goal-Driven Changes in Argumentation: A Theoretical Framework and a Tool
Pierre Bisquert, C. Cayrol, Florence Dupin de Saint-Cyr -- Bannay, M. Lagasquie-Schiex
This paper defines a new framework for dynamics in argumentation. In this framework, an agent can change an argumentation system (the target system) in order to achieve some desired goal. Changes consist in the addition/removal of arguments or attacks between arguments and are constrained by the agent's knowledge, encoded by another argumentation system. We present a software tool that computes the possible change operations for a given agent on a given target argumentation system in order to achieve some given goal.
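The framework itself is abstract, so a minimal sketch may help make the idea concrete. The snippet below is only an illustration, not the authors' tool: it represents the target system and the agent's knowledge as sets of attacks, computes grounded extensions by fixpoint iteration, and enumerates single attack additions/removals that make a goal argument accepted. The names (`grounded_extension`, `goal_achieving_changes`) and the restriction to attack changes under grounded semantics are assumptions of this sketch.

```python
def grounded_extension(args, attacks):
    """Grounded extension of a Dung framework: iterate the characteristic
    function F(S) = {a | every attacker of a is attacked by S} to its least fixpoint."""
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in ext)
                           for b in args if (b, a) in attacks)}
        if defended == ext:
            return ext
        ext = defended

def goal_achieving_changes(target_args, target_attacks, agent_attacks, goal):
    """Enumerate single attack additions/removals (restricted to what the agent
    knows) after which `goal` is accepted under grounded semantics."""
    changes = []
    for atk in agent_attacks - target_attacks:          # candidate additions
        if goal in grounded_extension(target_args | set(atk), target_attacks | {atk}):
            changes.append(("add", atk))
    for atk in target_attacks:                          # candidate removals
        if goal in grounded_extension(target_args, target_attacks - {atk}):
            changes.append(("remove", atk))
    return changes

# toy example: the goal argument "a" is attacked by "b"; the agent also knows an
# argument "c" attacking "b", so adding (c, b) defends "a", and removing (b, a) works too
args = {"a", "b", "c"}
print(goal_achieving_changes(args, {("b", "a")}, {("c", "b")}, "a"))
# -> [('add', ('c', 'b')), ('remove', ('b', 'a'))]
```

The paper's change operations also cover argument addition/removal and other semantics; those would slot into the same enumerate-and-test loop.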
Citations: 4
Enhancing Classification Accuracy with the Help of Feature Maximization Metric
Jean-Charles Lamirel
This paper deals with a new feature selection and feature contrasting approach for enhancing classification of both numerical and textual data. The method is evaluated on different types of reference datasets. The paper illustrates that the proposed approach provides a very significant performance increase in all the studied cases, clearly demonstrating its generic character.
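The abstract does not spell the metric out, so the sketch below encodes only one plausible reading of a feature-maximization-style selection: for each feature and class, a feature recall and a feature predominance are combined by a harmonic mean, and features whose best class score beats the global average are kept. The exact formula, the function name `feature_maximization_select`, and the toy data are assumptions, not the paper's definition.

```python
from collections import defaultdict

def feature_maximization_select(X, y):
    """Toy contrast-based feature selection.
    X: list of dicts {feature: weight}, y: list of class labels."""
    w_fc = defaultdict(float)   # weight of feature f inside class c
    w_f = defaultdict(float)    # total weight of feature f
    w_c = defaultdict(float)    # total weight inside class c
    for doc, label in zip(X, y):
        for f, w in doc.items():
            w_fc[(f, label)] += w
            w_f[f] += w
            w_c[label] += w

    score = {}
    for (f, c), w in w_fc.items():
        fr = w / w_f[f]          # how concentrated feature f is in class c
        fp = w / w_c[c]          # how much of class c's weight f carries
        ff = 2 * fr * fp / (fr + fp) if fr + fp else 0.0
        score[f] = max(score.get(f, 0.0), ff)

    threshold = sum(score.values()) / len(score)   # keep above-average features
    return {f for f, s in score.items() if s >= threshold}

docs = [{"goal": 2, "ball": 1}, {"goal": 3}, {"stock": 2, "price": 1}, {"price": 2}]
labels = ["sport", "sport", "finance", "finance"]
print(feature_maximization_select(docs, labels))   # e.g. {'goal', 'price'}
```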
Citations: 3
Symmetry-Based Pruning in Itemset Mining
Saïd Jabbour, Mehdi Khiari, L. Sais, Y. Salhi, Karim Tabia
In this paper, we show how symmetries, a fundamental structural property, can be used to prune the search space in itemset mining problems. Our approach is based on a dynamic integration of symmetries in APRIORI-like algorithms to prune the set of possible candidate patterns. More precisely, for a given itemset, symmetry can be applied to deduce other itemsets while preserving their properties. We also show that our symmetry-based pruning approach can be extended to the general Mannila and Toivonen pattern mining framework. Experimental results highlight the usefulness and the efficiency of our symmetry-based pruning approach.
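As a rough illustration of the pruning idea (not the paper's algorithm), the sketch below assumes a single item permutation `sigma` that is a symmetry of the transaction database; since supports are then invariant under it, only one representative per symmetry orbit is counted against the data and the support of its image is deduced for free. The function names and the restriction to one permutation are simplifications made here.

```python
from itertools import combinations

def support(itemset, transactions):
    return sum(1 for t in transactions if itemset <= t)

def frequent_itemsets_with_symmetry(transactions, sigma, minsup, max_size=2):
    """Level-wise counting where `sigma` (dict item -> item) is assumed to be a
    symmetry of the database, so support(sigma(X)) == support(X)."""
    def image(X):
        return frozenset(sigma.get(i, i) for i in X)

    items = sorted({i for t in transactions for i in t})
    freq = {}
    for size in range(1, max_size + 1):
        for X in map(frozenset, combinations(items, size)):
            if X in freq:
                continue                        # support already deduced by symmetry
            rep = min(X, image(X), key=sorted)  # canonical representative of the orbit
            s = support(rep, transactions)
            if s >= minsup:
                freq[rep] = s
                freq[image(rep)] = s            # deduced, no second database pass
    return freq

# toy database that is symmetric under swapping items 'a' and 'b'
T = [frozenset(t) for t in ({"a", "c"}, {"b", "c"}, {"a", "b"}, {"a", "b", "c"})]
print(frequent_itemsets_with_symmetry(T, sigma={"a": "b", "b": "a"}, minsup=2))
```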
Citations: 7
A Novel Combination of Reasoners for Ontology Classification
Changlong Wang, Zhiyong Feng
Large scale ontology applications require efficient reasoning services, of which ontology classification is the fundamental reasoning task. The special EL reasoners are efficient, but they can not classify ontologies with axioms outside the OWL 2 EL profile. The general-purpose OWL 2 reasoners for expressive Description Logics are less efficient when classifying the OWL 2 EL ontologies. In this work, we propose a novel technique that combines an OWL 2 reasoner with an EL reasoner for classification of ontologies expressed in DL SROIQ. We develop an efficient task decomposition algorithm for identifying the minimal non-EL module that is assigned to the OWL 2 reasoner, and the bulk of the workload is assigned to the EL reasoner. Furthermore, this paper reports on the implementation of our approach in the ComR system which integrates the two types of reasoners in a black-box manner. The experimental results show that our method leads to a reasonable task assignment and can offer a substantial speed up (over 50%) in ontology classification.
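The decomposition step can be pictured with a toy sketch: axioms that fail a syntactic EL test seed a module that is closed under shared signature, over-approximating the minimal non-EL module handed to the general-purpose OWL 2 reasoner, while everything else goes to the EL reasoner. The axiom encoding, the `is_el` test, and the closure rule are invented here for illustration; the paper's module extraction is considerably more precise.

```python
def split_for_hybrid_classification(axioms, is_el):
    """Crude task decomposition: an axiom is modelled as (signature_set, text).
    Axioms failing `is_el` seed a module closed under shared signature."""
    non_el = [ax for ax in axioms if not is_el(ax)]
    module = list(non_el)
    sig = set().union(*(ax[0] for ax in non_el))
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            if ax not in module and ax[0] & sig:
                module.append(ax)
                sig |= ax[0]
                changed = True
    el_part = [ax for ax in axioms if ax not in module]
    return module, el_part

axioms = [
    ({"A", "B"}, "A SubClassOf B"),
    ({"B", "C"}, "B SubClassOf not C"),      # negation: outside the OWL 2 EL profile
    ({"D", "E"}, "D SubClassOf E"),
]
module, el_part = split_for_hybrid_classification(axioms, is_el=lambda ax: "not" not in ax[1])
print([ax[1] for ax in module])   # handed to the general-purpose OWL 2 reasoner
print([ax[1] for ax in el_part])  # handed to the EL reasoner
```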
Citations: 5
Imbalanced Hypergraph Partitioning and Improvements for Consensus Clustering
John Robert Yaros, T. Imielinski
Hypergraph partitioning is typically defined as an optimization problem wherein vertices are placed in separate parts (of a partition) such that the fewest number of hyperedges will span multiple parts. To ensure that parts have sizes satisfying user requirements, constraints are typically imposed. Under such constraints, the problem is known to be NP-Hard, so heuristic methods are needed to find approximate solutions in reasonable time. Circuit layout has historically been one of the most prominent application areas and has seen a proliferation of tools designed to satisfy its needs. Constraints in these tools typically focus on equal size parts, allowing the user to specify a maximum tolerance for deviation from that equal size. A more generalized constraint allows the user to define fixed sizes and tolerances for each part. More recently, other domains have mapped problems to hypergraph partitioning and, perhaps due to their availability, have used existing tools to perform partitioning. In particular, consensus clustering easily fits a hypergraph representation where each cluster of each input clustering is represented by a hyperedge. Authors of such research have reported partitioning tends to only have good results when clusters can be expected to be roughly the same size, an unsurprising result given the tools' focus on equal sized parts. Thus, even though many datasets have "natural" part sizes that are mixed, the current toolset is ill-suited to find good solutions unless those part sizes are known a priori. We argue that the main issue rests in the current constraint definitions and their focus measuring imbalance on the basis of the largest/smallest part. We further argue that, due to its holistic nature, entropy best measures imbalance and can best guide the partition method to the natural part sizes with lowest cut for a given level of imbalance. We provide a method that finds good approximate solutions under an entropy constraint and further introduce the notion of a discount cut, which helps overcome local optima that frequently plague k-way partitioning algorithms. In comparison to today's popular tools, we show our method returns sizable improvements in cut size as the level of imbalance grows. In consensus clustering, we demonstrate that good solutions are more easily achieved even when part sizes are not roughly equal.
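Two quantities drive the discussion, the hyperedge cut and the entropy of the part-size distribution, and both are easy to make concrete. The sketch below is illustrative only (names such as `size_entropy` are not from the paper); it shows why an entropy constraint can admit a skewed "natural" partition that a strict balance constraint would reject.

```python
import math
from collections import Counter

def size_entropy(assignment):
    """Entropy (in bits) of the part-size distribution of a partition,
    given as a dict vertex -> part id.  Equal-sized parts maximise it."""
    sizes = Counter(assignment.values())
    n = sum(sizes.values())
    return -sum((s / n) * math.log2(s / n) for s in sizes.values())

def cut_size(hyperedges, assignment):
    """Number of hyperedges whose vertices do not all lie in one part."""
    return sum(1 for e in hyperedges if len({assignment[v] for v in e}) > 1)

hyperedges = [{"a", "b"}, {"a", "c"}, {"b", "c"}, {"c", "d"}, {"e", "f"}]
balanced = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}   # 3 + 3 vertices
natural  = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 1, "f": 1}   # 4 + 2 vertices

for name, part in [("balanced", balanced), ("natural", natural)]:
    print(name, "entropy =", round(size_entropy(part), 3), "cut =", cut_size(hyperedges, part))
# the skewed 'natural' partition is more imbalanced (lower entropy) but cuts no
# hyperedge, exactly the trade-off an entropy constraint lets the user control
```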
Citations: 11
ESmodels: An Inference Engine of Epistemic Specifications
Pub Date: 2013-11-04  DOI: 10.1109/ICTAI.2013.118
Zhizheng Zhang, Kaikai Zhao, Rongcun Cui
Epistemic specification (ES for short) is an extension of answer set programming (ASP for short). The extension is built around the introduction of the modalities K and M, and is thus capable of representing incomplete information in the presence of multiple belief sets. Although both the syntax and semantics of ES are still up in the air, the need for this extension has been illustrated with several examples in the literature. In this paper, we present a new ES version with only modality K and the design of its inference engine ESmodels, which aims to be efficient enough to promote both theoretical research and practical use of ES. We first introduce the syntax and semantics of the new version of ES and show that it is succinct but flexible by comparing it with existing ES versions. Then, we focus on the description of the algorithm and optimization approaches of the inference engine. Finally, we conclude with perspectives.
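For readers unfamiliar with the modality, a small illustration of how K is read against a collection of belief sets may help; the evaluator below is a toy, not the ESmodels engine, and the example literals are invented.

```python
def holds_K(literal, belief_sets):
    """K l is true in a world view iff l belongs to every belief set."""
    return all(literal in bs for bs in belief_sets)

# toy world view with two belief sets: the program is undecided about 'innocent'
world_view = [frozenset({"charged", "innocent"}), frozenset({"charged"})]

print(holds_K("charged", world_view))        # True: held in every belief set
print(holds_K("innocent", world_view))       # False: not known...
print(not holds_K("innocent", world_view))   # ...so a rule guarded by 'not K innocent' fires
```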
Citations: 7
Generation of Implied Constraints for Automaton-Induced Decompositions
Pub Date: 2013-11-04  DOI: 10.1109/ICTAI.2013.160
María Andreína Francisco Rodríguez, P. Flener, J. Pearson
Automata, possibly with counters, allow many constraints to be expressed in a simple and high-level way. An automaton induces a decomposition into a conjunction of already implemented constraints. Generalised arc consistency is not generally maintained on decompositions induced by counter automata with more than one state or counter. To improve propagation of automaton-induced constraint decompositions, we use automated tools to derive loop invariants from the constraint checker corresponding to the given automaton. These loop invariants correspond to implied constraints, which can be added to the decomposition. We consider two global constraints and derive implied constraints to improve propagation even to the point of maintaining generalised arc consistency.
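A toy example may clarify what a counter automaton and its implied constraints look like. The sketch below is an illustration, not the paper's derivation: it runs a one-state counter automaton for an AMONG-like constraint and checks the loop invariant 0 <= c_i <= i together with c_i - c_{i-1} in {0, 1}; posting such invariants on the decomposition's counter variables is the kind of implied constraint the paper derives automatically from the checker.

```python
def run_counter_automaton(xs, good_values):
    """One-state counter automaton for an AMONG-like constraint: the counter is
    incremented whenever x_i takes a value in `good_values`.  Returns the counter
    trace c_0..c_n; a decomposition would introduce one variable per c_i."""
    trace = [0]
    for x in xs:
        trace.append(trace[-1] + (1 if x in good_values else 0))
    return trace

def check_invariant(trace):
    """Loop invariant derivable from this checker: 0 <= c_i <= i and
    c_i - c_{i-1} in {0, 1}."""
    return all(0 <= c <= i and (i == 0 or trace[i] - trace[i - 1] in (0, 1))
               for i, c in enumerate(trace))

xs = [2, 5, 7, 5, 1]
trace = run_counter_automaton(xs, good_values={5, 7})
print(trace)                   # [0, 0, 1, 2, 3, 3]
print(check_invariant(trace))  # True
```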
Citations: 4
Ontology Learning from Incomplete Semantic Web Data by BelNet
Pub Date: 2013-11-04  DOI: 10.1109/ICTAI.2013.117
Man Zhu, Zhiqiang Gao, Jeff Z. Pan, Yuting Zhao, Ying Xu, Zhibin Quan
Recent years have seen a dramatic growth of the semantic web on the data level, but unfortunately not on the schema level, which contains mostly concept hierarchies. The shortage of schemas makes semantic web data difficult to use in many semantic web applications, so schema learning from semantic web data becomes an increasingly pressing issue. In this paper we propose a novel schema learning approach, BelNet, which combines description logics (DLs) with Bayesian networks. In this way BelNet is capable of understanding and capturing the semantics of the data on the one hand, and of handling incompleteness during the learning procedure on the other hand. The main contributions of this work are: (i) we introduce the architecture of BelNet and correspondingly propose the ontology learning techniques in it, (ii) we compare the experimental results of our approach with the state-of-the-art ontology learning approaches, and provide discussions from different aspects.
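BelNet itself couples DL semantics with Bayesian-network inference; as a much simpler point of reference, the sketch below learns candidate subsumption axioms directly from instance data by thresholding the conditional frequency P(D | C). The threshold, the function name, and the toy ABox are assumptions, and the probabilistic treatment of incompleteness that BelNet adds is deliberately omitted.

```python
from collections import defaultdict

def candidate_subsumptions(instances, threshold=0.9):
    """Naive schema learning from instance data: propose 'C SubClassOf D' when the
    conditional frequency P(D | C), estimated from the ABox, exceeds `threshold`.
    instances: dict individual -> set of asserted class names."""
    members = defaultdict(set)
    for ind, classes in instances.items():
        for c in classes:
            members[c].add(ind)
    axioms = []
    for c, c_members in members.items():
        for d, d_members in members.items():
            if c != d and c_members:
                p = len(c_members & d_members) / len(c_members)
                if p >= threshold:
                    axioms.append((c, d, round(p, 2)))
    return axioms

abox = {
    "tweety": {"Penguin", "Bird"},
    "pingu":  {"Penguin", "Bird"},
    "woody":  {"Bird"},
    "rex":    {"Dog"},
}
print(candidate_subsumptions(abox))   # [('Penguin', 'Bird', 1.0)]
```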
Citations: 14
A Probabilistic Query Suggestion Approach without Using Query Logs
M. T. Shaikh, M. S. Pera, Yiu-Kai Ng
Commercial web search engines include a query suggestion module so that, given a user's keyword query, alternative suggestions are offered and serve as a guide to assist the user in formulating queries which capture his/her intended information need in a quick and simple manner. The majority of these modules, however, perform an in-depth analysis of large query logs and thus (i) their suggestions are mostly based on queries frequently posted by users and (ii) their design methodologies cannot be applied to make suggestions on customized search applications for enterprises whose query logs are not large enough or are non-existent. To address these design issues, we have developed PQS, a probabilistic query suggestion module. Unlike its counterparts, PQS is not constrained by the existence of query logs, since it solely relies on the availability of user-generated content freely accessible online, such as the Wikipedia.org document collection, and applies simple, yet effective, probabilistic- and information retrieval-based models, i.e., the Multinomial, Bigram Language, and Vector Space Models, to provide useful and diverse query suggestions. Empirical studies conducted using a set of test queries and the feedback provided by Mechanical Turk appraisers have verified that PQS makes more useful suggestions than Yahoo! and is almost as good as Google and Bing, based on the relatively small difference in performance measures achieved by Google and Bing over PQS.
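Of the three models mentioned, the bigram language model is the easiest to sketch. The snippet below estimates add-one smoothed bigram probabilities from a tiny stand-in corpus and ranks candidate suggestions for a query; the corpus, candidate list, and smoothing choice are placeholders rather than PQS's actual configuration.

```python
from collections import Counter

corpus = [
    "machine learning algorithms for text classification",
    "machine learning for natural language processing",
    "deep learning for image classification",
]

unigrams, bigrams = Counter(), Counter()
for doc in corpus:
    tokens = doc.split()
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))
vocab = len(unigrams)

def bigram_prob(w1, w2):
    """Add-one smoothed bigram probability P(w2 | w1)."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)

def score(query, suggestion):
    """Score a candidate by the smoothed probability of the appended words,
    conditioned step by step on the previous word."""
    words = (query + " " + suggestion).split()
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

candidates = ["algorithms", "for text classification", "for image classification"]
query = "machine learning"
for s in sorted(candidates, key=lambda s: score(query, s), reverse=True):
    print(round(score(query, s), 6), query + " " + s)
```

Note that longer suggestions are naturally penalized by the product of probabilities; a real suggestion module would counter this with length normalization or diversity-aware re-ranking.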
Citations: 4
Using Evolution Strategies to Reduce Emergency Services Arrival Time in Case of Accident
Pub Date: 2013-11-04  DOI: 10.1109/ICTAI.2013.127
Javier Barrachina, Piedad Garrido, Manuel Fogué, F. Martinez, Juan-Carlos Cano, C. Calafate, P. Manzoni
A critical issue, especially in urban areas, is the occurrence of traffic accidents, since they can generate traffic jams. Additionally, these traffic jams negatively affect the rescue process, increasing the emergency services arrival time, which can make the difference between life and death for injured people involved in the accident. In this paper, we propose four different approaches addressing the traffic congestion problem, comparing them to obtain the best solution. Using V2I communications, we are able to accurately estimate the traffic density in a certain area, which represents a key parameter to perform efficient traffic redirection, thereby reducing the emergency services arrival time and avoiding traffic jams when an accident occurs. Specifically, we propose two approaches based on the Dijkstra algorithm and two approaches based on Evolution Strategies. Results indicate that the Density-Based Evolution Strategy system is the best among all the proposed solutions, since it offers the lowest emergency services travel times.
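The density-aware routing cost at the heart of all four approaches can be sketched with a Dijkstra search whose edge costs are inflated by the V2I-reported vehicle density. The linear congestion model, the `alpha` parameter, and the toy graph are assumptions of this sketch; the paper's best-performing variant additionally layers an Evolution Strategy on top of this kind of density information.

```python
import heapq

def dijkstra_travel_time(graph, density, source, target, alpha=0.05):
    """Shortest expected travel time where each road segment's free-flow time is
    inflated by the vehicle density reported through V2I (simple linear model:
    time * (1 + alpha * density)).
    graph: {node: [(neighbor, free_flow_time), ...]}, density: {(u, v): vehicles}."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, t in graph.get(u, []):
            cost = t * (1 + alpha * density.get((u, v), 0))
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return float("inf")

graph = {"hospital": [("A", 2), ("B", 3)], "A": [("crash", 4)], "B": [("crash", 2)]}
density = {("B", "crash"): 40}   # the segment B -> crash is congested
print(dijkstra_travel_time(graph, density, "hospital", "crash"))   # 6.0, via A
```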
Citations: 0