
Latest publications from the 2014 IEEE International Conference on Semantic Computing

Semantic Inference for Pharmacokinetic Drug-Drug Interactions
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.36
A. Moitra, R. Palla, L. Tari, M. Krishnamoorthy
Drug-drug interaction (DDI) studies are an important aspect of therapy management and drug efficacy. A DDI study investigates how drugs interact with each other and determines whether these interactions may lead to dire effects or nullify each other's therapeutic effects. In this paper we model the metabolic pathways of drugs, including the reaction effects between drugs and the related enzymes. By modeling the reaction effects, our model captures the degree of the effects of the interacting drugs. We introduce a novel methodology that combines an ontology, to model the concepts and interactions, with Answer Set Programming for temporal reasoning. We illustrate our method by inferring the effects of DDI among three drugs: clozapine, olanzapine and fluvoxamine.
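The kind of inference involved can be sketched in a few lines. The paper encodes it in Answer Set Programming; the plain-Python toy below, with simplified (illustrative, not clinical) drug/enzyme facts, only conveys the basic idea:

```python
# Toy sketch of pharmacokinetic DDI inference in plain Python (the paper
# uses an ontology plus Answer Set Programming). Drug/enzyme facts below
# are simplified illustrations, not clinical data.

# metabolized_by: drug -> enzymes that clear it
METABOLIZED_BY = {
    "clozapine": {"CYP1A2"},
    "olanzapine": {"CYP1A2"},
}
# inhibits: drug -> enzymes it inhibits
INHIBITS = {
    "fluvoxamine": {"CYP1A2"},
}

def interactions(drugs):
    """Infer (inhibitor, victim, enzyme) triples: co-administering the
    inhibitor reduces clearance of the victim drug, raising its level."""
    found = []
    for inhibitor in drugs:
        for enzyme in INHIBITS.get(inhibitor, set()):
            for victim in drugs:
                if victim != inhibitor and enzyme in METABOLIZED_BY.get(victim, set()):
                    found.append((inhibitor, victim, enzyme))
    return found

print(interactions(["clozapine", "olanzapine", "fluvoxamine"]))
```

The ASP encoding in the paper additionally captures degrees of effect and temporal ordering of reactions, which this sketch omits.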
Citations: 9
Harvesting Domain Specific Ontologies from Text
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.12
Hamid Mousavi, Deirdre Kerr, Markus R Iseli, C. Zaniolo
Ontologies are a vital component of most knowledge-based applications, including semantic web search, intelligent information integration, and natural language processing. In particular, we need effective tools for generating in-depth ontologies that achieve comprehensive coverage of the specific application domains of interest, while minimizing the time and cost of this process. Therefore, we cannot rely on the manual or highly supervised approaches often used in the past, since they do not scale well. We instead propose a new approach that automatically generates domain-specific ontologies from a small corpus of documents using deep NLP-based text mining. Starting from an initial small seed of domain concepts, our OntoHarvester system iteratively extracts ontological relations connecting existing concepts to other terms in the text, and adds strongly connected terms to the current ontology. As a result, OntoHarvester (i) remains focused on the application domain, (ii) is resistant to noise, and (iii) generates very comprehensive ontologies from modest-size document corpora. In fact, starting from a small seed, OntoHarvester produces ontologies that outperform both manually generated ontologies and ontologies generated by current techniques, even those that require very large, well-focused data sets.
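The iterative growth loop can be sketched as follows; the relation-extraction step is stubbed out with a toy co-occurrence table (invented data), whereas the actual system uses deep NLP-based text mining:

```python
# A minimal sketch of the iterative harvesting loop: starting from seed
# concepts, repeatedly add terms that are strongly connected to the
# current ontology. `related` stands in for NLP-extracted relations.

def harvest(seed, related, threshold=2):
    """Grow an ontology from seed concepts, adding any term related to
    at least `threshold` concepts already in the ontology."""
    ontology = set(seed)
    changed = True
    while changed:
        changed = False
        candidates = {t for c in ontology for t in related.get(c, set())} - ontology
        for term in candidates:
            links = sum(1 for c in ontology if term in related.get(c, set()))
            if links >= threshold:
                ontology.add(term)
                changed = True
    return ontology

# Toy relation table: each concept maps to terms it co-occurs with.
related = {
    "triangle": {"angle", "side"},
    "square": {"angle", "side"},
    "angle": {"degree"},
    "side": {"degree"},
}
print(harvest({"triangle", "square"}, related))
```

Because admission requires multiple links back into the current ontology, spurious terms connected to only one concept are rejected, which is the noise-resistance property claimed above.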
Citations: 10
Requirements Elicitation Towards a Search Engine for Semantic Multimedia Content
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.35
Lydia Weiland, Felix Hanser, A. Scherp
We investigate user requirements regarding the interface design for semantic multimedia search and retrieval, based on a prototypical implementation of a search engine for multimedia content on the web. Unlike existing image and video search engines, we are interested in true multimedia content that combines different media assets into multimedia documents such as PowerPoint presentations and Flash files. In a user study with 20 participants, we conducted a formative evaluation based on the think-aloud method and semi-structured interviews in order to obtain requirements for a future web search engine for multimedia content. The interviews are complemented by a paper-and-pencil questionnaire to obtain quantitative information, and we present mockups demonstrating the user interface of a future multimedia search and retrieval engine.
Citations: 0
Using Aligned Ontology Model to Convert Cultural Heritage Resources into Semantic Web
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.39
Li Bing, Keith C. C. Chan, L. Carr
Cultural heritage resources are huge and heterogeneous. They include highly structured, very unstructured, and semi-structured data or information obtained from both authorized and unauthorized sources, involving multimedia data including text, audio and video. With the rapid development of the web, more and more cultural heritage organizations use digital methods to record, store and represent their arts and events. However, searching this information once it is stored is still considered a challenging task. The use of semantic web techniques is proposed here to make the data more structured, so that items in the cultural heritage domain can be fully represented and made as easily accessible to the public as possible. This paper proposes a method to convert a traditional cultural heritage website into one that is well designed and content-rich. The method includes an ontology model that can automatically incorporate new classes and instances as input via asserted and inferred models. It can also align the local ontology with external online ontologies. Through the proposed method, this paper also discusses several urgent issues concerning automatic conversion of data, semantic search and user involvement.
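The conversion step can be pictured as mapping each record field of a heritage item to a subject-predicate-object triple. The namespace and property names below are illustrative placeholders, not the paper's actual ontology:

```python
# A minimal sketch of turning a flat cultural-heritage record into
# RDF-style triples. The example.org namespace and the record fields
# are invented for illustration.

BASE = "http://example.org/heritage/"

def to_triples(record):
    """Map every non-id field of a record to an (s, p, o) triple."""
    subject = BASE + record["id"]
    return [(subject, BASE + key, value)
            for key, value in record.items() if key != "id"]

record = {"id": "opera_042", "title": "A Moon Reflected", "medium": "audio"}
triples = to_triples(record)
print(triples)
```

An ontology-aware converter would additionally type the subject against asserted/inferred classes and align the predicates with external vocabularies, which this sketch leaves out.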
Citations: 7
Find-to-Forecast Process: An Automated Methodology for Situation Assessment
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.60
K. Bimson, Ahmad Slim, G. Heileman
The ability to identify, process, and comprehend the essential elements of information associated with a given operational environment can be used to reason about how the actors within the environment can best respond. This is often referred to as "situation assessment," the end state of which is "situation awareness," which can be simply defined as "knowing what is going on around you." Taken together, these are important fields of study concerned with perception of the environment critical to decision-makers in many complex, dynamic domains, including aviation, military command and control, and emergency management. The primary goal of our research is to identify some of the main technical challenges associated with automated situation assessment, in general, and to propose an information processing methodology that meets those challenges, which we call Find-to-Forecast (F2F). The F2F framework supports accessing heterogeneous information (structured and unstructured), which is normalized into a standard RDF representation. Next, the F2F framework identifies mission-relevant information elements, filtering out irrelevant (or low priority) information, fusing the remaining relevant information. The next steps in the F2F process involve focusing operator attention on essential elements of mission information, and reasoning over fused, relevant information to forecast potential courses of action based on the evolving situation, changing data, and uncertain knowledge. This paper provides an overview of the overall F2F methodology, to provide context, followed by a more detailed consideration of the "focus" algorithm, which uses contextual semantics to evaluate the value of new information relative to an operator's situational understanding during evolving events.
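The staged flow described above can be sketched schematically as a chain of small functions; the stage bodies below are placeholders standing in for the paper's actual components, and the forecasting stage is omitted:

```python
# A schematic sketch of the F2F stages (find -> filter -> fuse -> focus)
# as a toy pipeline. Stage logic and the sample data are invented
# placeholders, not the paper's implementation.

def find(sources):
    """Access heterogeneous sources and flatten into one item list."""
    return [item for src in sources for item in src]

def filter_relevant(items, mission_terms):
    """Keep only items mentioning at least one mission term."""
    return [i for i in items if mission_terms & set(i.split())]

def fuse(items):
    """Merge duplicate reports arriving from different sources."""
    return sorted(set(items))

def focus(items, k=2):
    """Keep only the k most essential elements (here: the first k)."""
    return items[:k]

sources = [["convoy spotted north", "weather clear"],
           ["convoy spotted north", "fuel low"]]
essential = focus(fuse(filter_relevant(find(sources), {"convoy", "fuel"})))
print(essential)
```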
Citations: 0
Enhancing Multimedia Semantic Concept Mining and Retrieval by Incorporating Negative Correlations
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.30
Tao Meng, Yang Liu, M. Shyu, Yilin Yan, C. Shu
In recent years, we have witnessed a deluge of multimedia data such as text, images, and videos. However, research on managing and retrieving these data efficiently is still at an early stage. Conventional tag-based searching approaches suffer from noisy or incomplete tags. As a result, content-based multimedia data management frameworks have become increasingly popular. In this research direction, multimedia high-level semantic concept mining and retrieval is one of the fastest-developing research topics, requiring joint efforts from researchers in both the data mining and multimedia domains. One great challenge here is to bridge the semantic gap, the gap between high-level concepts and low-level features. Recently, positive inter-concept correlations have been utilized to capture the context of a concept in order to bridge this gap. However, negative correlations have rarely been studied because of the difficulty of mining and utilizing them. In this paper, a concept mining and retrieval framework utilizing negative inter-concept correlations is proposed. Several research problems such as negative correlation selection, weight estimation, and score integration are addressed. Experimental results on the TRECVID 2010 benchmark data set demonstrate that the proposed framework gives promising performance.
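Score integration with signed correlations can be sketched as a weighted adjustment of a concept detector's raw score: positively correlated context concepts push the score up, negatively correlated ones push it down. The concepts, weights, and scores below are invented toy values:

```python
# A minimal sketch of score integration with negative inter-concept
# correlations. All concept names, weights, and detector scores are
# made-up illustrations, not learned values from the paper.

def integrate(raw_score, context_scores, weights):
    """Adjust a concept's raw detector score by context detector scores;
    `weights` maps concept -> signed weight (negative = negative correlation)."""
    adjustment = sum(weights[c] * s for c, s in context_scores.items())
    return raw_score + adjustment

# Target concept 'car': likely with 'road', unlikely in 'indoor' scenes.
context = {"road": 0.9, "indoor": 0.8}
weights = {"road": 0.3, "indoor": -0.4}
score = integrate(0.5, context, weights)
print(round(score, 2))
```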
Citations: 13
Computing On-the-Fly DBpedia Property Ranking
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.55
A. Dessì, M. Atzori
In many Semantic Web applications, having RDF predicates sorted by significance is of primary importance for improving usability and performance. In this paper we focus on predicates available on DBpedia, the most important Semantic Web data source, counting 470 million English triples. Although there is plenty of work in the literature dealing with ranking entities or RDF query results, none of it seems to specifically address the problem of computing predicate rank. We address the problem by associating with each DBpedia property (also known as a predicate or attribute of RDF triples) a number of original features specifically designed to provide sort-by-importance quantitative measures, automatically computable from an online SPARQL endpoint or an RDF dataset. By computing those features on a number of entity properties, we created a learning set and tested the performance of a number of well-known learning-to-rank algorithms. Our first experimental results show that the approach is effective and fast.
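One obvious sort-by-importance feature is predicate frequency over an entity's triples, computable offline or from a SPARQL endpoint. The toy in-memory data below is illustrative:

```python
# A toy frequency baseline for predicate ranking. The triples are a
# made-up miniature of DBpedia-style data; the paper combines several
# richer features with learning-to-rank models.

from collections import Counter

triples = [
    ("dbr:Rome", "dbo:country", "dbr:Italy"),
    ("dbr:Rome", "dbo:populationTotal", "2870500"),
    ("dbr:Rome", "dbo:wikiPageWikiLink", "dbr:Pizza"),
    ("dbr:Rome", "dbo:wikiPageWikiLink", "dbr:Tiber"),
    ("dbr:Rome", "dbo:wikiPageWikiLink", "dbr:Colosseum"),
]

def rank_predicates(triples):
    """Sort predicates by how many triples use them (most frequent first)."""
    counts = Counter(p for _, p, _ in triples)
    return [p for p, _ in counts.most_common()]

print(rank_predicates(triples))
```

Note that the frequency baseline ranks the boilerplate `dbo:wikiPageWikiLink` predicate first, ahead of genuinely informative properties, which is exactly why richer importance features and learned ranking models are needed.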
Citations: 1
Semantic Context-Dependent Weighting for Vector Space Model
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.49
T. Nakanishi
In this paper, we present a dynamic context-dependent weighting method for the vector space model. The meaning of a term is determined relative to a context, and dynamically. A vector space model, including latent semantic indexing (LSI) and related methods, measures the relative correlations of target items, each represented as a vector. However, in most vector space methods the vector of each target item is static. It is important to weight each element of each vector according to a context. Moreover, it is increasingly necessary to understand a topic not by reading a single document but by summarizing massive data; the vectors of the vector space model should therefore be created from the data set representing the topic. That is, vectors should be created dynamically, according to the context and the data distribution. The key feature of our method is the dynamic calculation of each vector element in a vector space model according to a context. Our method reduces the vector dimensionality through context-dependent weighting, and this dimension reduction allows correlations to be measured at low computational cost for a given context.
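The core idea can be sketched as re-weighting a term vector by a context vector and dropping near-zero dimensions, which yields the dimension reduction mentioned above. The vectors are toy values, not the paper's weighting scheme:

```python
# A minimal sketch of context-dependent weighting: each dimension of a
# sparse vector is scaled by the context's weight for that dimension,
# and dimensions that become (near) zero are dropped. Toy values only.

def contextualize(vec, context, eps=1e-3):
    """Re-weight `vec` by `context` and drop near-zero dimensions."""
    weighted = {d: v * context.get(d, 0.0) for d, v in vec.items()}
    return {d: v for d, v in weighted.items() if abs(v) > eps}

doc = {"bank": 0.7, "river": 0.2, "loan": 0.5}
finance_context = {"bank": 1.0, "loan": 1.0}  # 'river' irrelevant here
print(contextualize(doc, finance_context))
```

In a finance context the 'river' dimension vanishes, so subsequent similarity computations run over a smaller, context-relevant vector.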
Citations: 7
"Units of Meaning" in Medical Documents: Natural Language Processing Perspective
Pub Date : 2014-06-16 DOI: 10.1142/S1793351X14400078
D. Popolov, Joseph R. Barr
This paper discusses principles for the design of natural language processing (NLP) systems that automatically extract data from doctors' notes, laboratory results and other medical documents in free-form text. We argue that rather than searching for 'atom units of meaning' in the text and then trying to generalize them to a broader set of documents through an increasingly complicated system of rules, an NLP practitioner should take concepts as a whole as the meaningful unit of text. This simplifies the rules and makes an NLP system easier to maintain and adapt. The departure point is purely practical; however, a deeper investigation of typical problems in implementing such systems leads us to a discussion of broader theoretical principles underlying NLP practice.
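The contrast can be illustrated with simple dictionary matching of whole multi-word concepts rather than single tokens; the concept list and note text below are invented examples:

```python
# A tiny illustration of treating multi-word medical concepts as whole
# units: match against a concept dictionary instead of single tokens.
# The dictionary entries and the sample note are invented.

CONCEPTS = {"blood pressure", "heart rate", "white blood cell count"}

def find_concepts(text):
    """Return every dictionary concept appearing verbatim in the text."""
    text = text.lower()
    return sorted(c for c in CONCEPTS if c in text)

note = "Blood pressure stable; white blood cell count elevated."
print(find_concepts(note))
```

Matching "white blood cell count" as one unit avoids the rule explosion needed to reassemble it from the tokens "white", "blood", "cell", and "count".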
Citations: 1
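A minimal illustration of the concept-as-unit idea the abstract argues for: matching whole clinical concepts (with their surface variants) in free-form text, instead of assembling atomic tokens through layered rules. The lexicon, variant spellings, and function names below are hypothetical examples for illustration, not taken from the paper:

```python
import re

# Hypothetical concept lexicon: each whole clinical concept maps to its
# known surface variants. Matching happens at the concept level, as the
# paper advocates, rather than token by token.
CONCEPT_LEXICON = {
    "blood pressure": ["blood pressure", "bp", "b.p."],
    "type 2 diabetes": ["type 2 diabetes", "type ii diabetes", "t2dm"],
}

def extract_concepts(text):
    """Return canonical concepts whose variants occur in free-form text."""
    found = set()
    lowered = text.lower()
    for canonical, variants in CONCEPT_LEXICON.items():
        for variant in variants:
            # Word boundaries keep 'bp' from matching inside other tokens.
            if re.search(r"\b" + re.escape(variant) + r"\b", lowered):
                found.add(canonical)
                break
    return sorted(found)

print(extract_concepts("Pt with T2DM; BP 140/90 on follow-up."))
# → ['blood pressure', 'type 2 diabetes']
```

Because each concept's variants live in one lexicon entry, adapting the system to a new document set means extending data, not rewriting rules.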
Detecting Unexplained Human Behaviors in Social Networks 在社交网络中检测无法解释的人类行为
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.21
F. Amato, Aniello De Santo, V. Moscato, Fabio Persia, A. Picariello
Detection of human behavior in On-line Social Networks (OSNs) has become increasingly important for a wide range of applications, such as security, marketing, and parental controls, opening up novel research areas that have not yet been fully addressed. In this paper, we present a two-stage method for detecting anomalies in human behavior on a social network. In the first stage, we use Markov chains to automatically learn from the social network graph a number of models of human behavior (normal behaviors); in the second stage, we apply an activity-detection framework based on the concept of possible worlds to detect all activities that are unexplained with respect to the normal behaviors. Preliminary experiments on Facebook data show the efficiency and effectiveness of the approach.
{"title":"Detecting Unexplained Human Behaviors in Social Networks","authors":"F. Amato, Aniello De Santo, V. Moscato, Fabio Persia, A. Picariello","doi":"10.1109/ICSC.2014.21","DOIUrl":"https://doi.org/10.1109/ICSC.2014.21","url":null,"abstract":"Detection of human behavior in On-line Social Networks (OSNs) has become more and more important for a wide range of applications, such as security, marketing, parent controls and so on, opening a wide range of novel research areas, which have not been fully addressed yet. In this paper, we present a two-stage method for anomaly detection in humans' behavior while they are using a social network. First, we use Markov chains to automatically learn from the social network graph a number of models of human behaviors (normal behaviors), the second stage applies an activity detection framework based on the concept of possible words to detect all unexplained activities with respect to the normal behaviors. Some preliminary experiments using Facebook data show the approach efficiency and effectiveness.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132316845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
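A toy sketch of the two-stage idea in this abstract: learn a first-order Markov chain of user actions from "normal" sessions, then flag a session as unexplained when it contains a transition that is unseen or falls below a probability threshold. The action names and threshold are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def train_markov_chain(sessions):
    """Stage 1: estimate transition probabilities from normal sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    model = {}
    for a, successors in counts.items():
        total = sum(successors.values())
        model[a] = {b: c / total for b, c in successors.items()}
    return model

def is_unexplained(session, model, threshold=0.05):
    """Stage 2: a session is unexplained if any transition is too unlikely."""
    for a, b in zip(session, session[1:]):
        if model.get(a, {}).get(b, 0.0) < threshold:
            return True
    return False

normal = [["login", "post", "like", "logout"],
          ["login", "like", "post", "logout"]]
model = train_markov_chain(normal)

print(is_unexplained(["login", "delete_all", "logout"], model))  # → True
print(is_unexplained(["login", "post", "logout"], model))        # → False
```

The unseen transition `login → delete_all` has estimated probability 0, so the session is flagged; every transition in the second session was observed in training.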