2014 IEEE International Conference on Semantic Computing: Latest Publications

Biomedical Big Data for Clinical Research and Patient Care: Role of Semantic Computing
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.68
S. Sahoo
Healthcare datasets are increasingly characterized by large volume, high rate of generation and need for real time analysis (velocity), and variety. These datasets are often termed biomedical big data and include multi-modal electrophysiological signals and electronic health records. In this talk, we focus on the computational challenges associated with signal data management and the role of semantic computing in addressing these challenges. We describe a cloud computing platform called Cloud wave that has been developed to effectively manage electrophysiological big data for epilepsy clinical research and patient care.
Citations: 4
Mechanism for Linking and Discovering Structured Cybersecurity Information over Networks
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.66
Takeshi Takahashi, Y. Kadobayashi
To cope with the increasing number of cyber threats, cyber security information must be shared beyond organization borders. Assorted organizations have already started to provide publicly available repositories that store XML-based cyber security information on the Internet, but users are unaware of all of them. Cyber security information must be identified and located across such repositories by the parties who need it, and then transported to them to advance information sharing. This paper proposes a discovery mechanism, which identifies and locates various types of cyber security information and exchanges the information over networks. The mechanism generates RDF-based metadata to manage the list of cyber security information, and the metadata structure is based on an ontology of cyber security information, which absorbs the differences of the assorted schemata of the information and incorporates them. The mechanism is also capable of propagating any information updates such that entities with obsolete information do not suffer from emerging security threats. This paper also introduces a prototype of the mechanism to demonstrate its feasibility. It then analyzes the mechanism's extensibility, scalability, and information credibility. Through this work, we wish to expedite information sharing beyond organization borders and contribute to global cyber security.
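A minimal sketch of the kind of RDF-based repository metadata the abstract alludes to, written in Python with rdflib. The ex: vocabulary, the repository URL, and the IODEF schema label are illustrative assumptions, not terms from the paper's ontology:

```python
# Minimal sketch (assumed vocabulary): RDF metadata describing a repository
# of XML-based cybersecurity information, so that consumers can discover it.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/cybersec#")   # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

repo = URIRef("http://example.org/repos/advisories")   # hypothetical repository
g.add((repo, RDF.type, EX.Repository))
g.add((repo, EX.servesSchema, Literal("IODEF")))        # XML schema it exposes
g.add((repo, EX.endpoint, URIRef("https://repo.example.org/feed")))
g.add((repo, EX.lastUpdated, Literal("2014-06-16")))

# A consumer locates repositories serving a given schema via SPARQL.
query = """
SELECT ?endpoint WHERE {
  ?r a ex:Repository ;
     ex:servesSchema "IODEF" ;
     ex:endpoint ?endpoint .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.endpoint)
```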
Citations: 5
Ontology-Based Text Classification into Dynamically Defined Topics
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.51
M. Allahyari, K. Kochut, Maciej Janik
We present a method for the automatic classification of text documents into a dynamically defined set of topics of interest. The proposed approach requires only a domain ontology and a set of user-defined classification topics, specified as contexts in the ontology. Our method is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology sub-graphs resulting from the projection of the defined contexts. The domain ontology effectively becomes the classifier, where classification topics are expressed using the defined ontological contexts. In contrast to traditional supervised categorization methods, the proposed method does not require a training set of documents. More importantly, our approach allows dynamically changing the classification topics without retraining the classifier. In our experiments, we used the English-language Wikipedia, converted to an RDF ontology, to categorize a corpus of current Web news documents into a selection of topics of interest. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method, as well as the applicability of Wikipedia for semantic text categorization purposes.
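The core idea, scoring a document's thematic graph against the ontology sub-graph projected from each classification topic, can be illustrated with a deliberately simplified sketch. The topic contexts, the extracted entities, and the Jaccard-style overlap below are assumptions for illustration, not the paper's similarity measure:

```python
# Toy classifier: overlap the ontology entities found in a document (its
# "thematic graph") with the entity set projected from each topic context.
# Data and the Jaccard-style score are illustrative assumptions only.
def similarity(doc_entities, context_entities):
    if not doc_entities or not context_entities:
        return 0.0
    return len(doc_entities & context_entities) / len(doc_entities | context_entities)

# Hypothetical contexts: topic -> entities reachable in its ontology sub-graph
contexts = {
    "Politics": {"Election", "Parliament", "Minister", "Policy"},
    "Sports":   {"Tournament", "Goal", "Referee", "Stadium"},
}

# Entities spotted in an incoming news document
document_entities = {"Minister", "Policy", "Election", "Stadium"}

best_topic = max(contexts, key=lambda t: similarity(document_entities, contexts[t]))
print(best_topic)   # -> "Politics" for this toy input (0.6 vs. 0.14)
```

Because the topics live only in the contexts mapping, changing them requires no retraining, which mirrors the advantage the abstract emphasizes.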
Citations: 48
A Semantic Mapping Representation and Generation Tool Using UML for System Engineers
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.16
Seung-Hwa Chung, W. Tai, D. O’Sullivan, Aidan Boran
To address the problem of semantic heterogeneity, a large body of research has been directed to the study of semantic mapping technologies. Although various semantic mapping technologies have been investigated, facilitating the process for domain experts to perform a semantic data integration task is not easy. This is because one is required not only to possess domain expertise but also to have a good understanding of knowledge engineering. This work proposes an abstract semantic mapping representation using UML for undertaking ontology mapping. The aim is to enable domain experts (particularly system engineers) to undertake mappings using the proposed UML representation that they are familiar with, while ensuring accuracy and ease of use of the automatically generated mappings. The proposed UML representation is evaluated through usability experiments (undertaken by system engineers) using a tool developed to implement the proposed approach. The results show that the participants could correctly undertake the mapping task using the proposed UML representation and that the tool generated correct and executable mappings.
Citations: 1
Creating a Phrase Similarity Graph from Wikipedia
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.22
L. Stanchev
The paper addresses the problem of modeling the relationship between phrases in English using a similarity graph. The mathematical model stores data about the strength of the relationship between phrases, expressed as a decimal number. Both structured data from Wikipedia, such as the fact that the Wikipedia page with title "Dog" belongs to the Wikipedia category "Domesticated animals", and textual descriptions, such as the fact that the Wikipedia page with title "Dog" contains the word "wolf" thirty-one times, are used in creating the graph. The quality of the graph data is validated by comparing the phrase-pair similarities computed by our graph-based software with the results of studies performed with human subjects. To the best of our knowledge, our software produces better correlation with the results of both the Miller and Charles study and the WordSimilarity-353 study than any other published research.
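A hedged sketch of the kind of graph the abstract describes: phrases as nodes, edge weights as decimal strengths derived from Wikipedia structure and text. The weights and the best-path-product similarity below are illustrative assumptions rather than the paper's actual construction:

```python
# Toy phrase-similarity graph; edge weights and the best-path-product
# similarity are illustrative assumptions, not the paper's construction.
import networkx as nx

g = nx.Graph()
g.add_edge("dog", "domesticated animals", weight=0.8)   # category membership
g.add_edge("dog", "wolf", weight=0.6)                    # textual co-occurrence
g.add_edge("wolf", "wild animals", weight=0.7)

def phrase_similarity(graph, a, b):
    """Similarity taken as the largest product of edge weights over any simple path."""
    best = 0.0
    for path in nx.all_simple_paths(graph, a, b):
        strength = 1.0
        for u, v in zip(path, path[1:]):
            strength *= graph[u][v]["weight"]
        best = max(best, strength)
    return best

print(phrase_similarity(g, "dog", "wild animals"))   # 0.6 * 0.7 = 0.42
```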
Citations: 9
Ontology Based Improvement of Opening Hours in E-governments
Pub Date : 2014-06-16 DOI: 10.1109/ICSC.2014.37
Pieter Colpaert, Laurens De Vocht, S. Verstockt, Anastasia Dimou, Raf Buyle, E. Mannens, R. Walle
To inform citizens when they can use government services, governments publish the services' opening hours on their websites. If opening hours were published in a machine-interpretable manner, software agents would be able to answer queries about when it is possible to contact a certain service. We introduce an ontology for describing opening hours and use this ontology to create an input form. Furthermore, we explain the logic that answers queries about whether a government service is open or closed. The data is modeled according to this ontology. The principles discussed and applied in this paper are the first steps towards a design pattern for the governance of Open Government Data.
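To make the querying idea concrete, here is a minimal sketch that answers "is this service open now?" from structured opening-hours data; the dictionary layout is an assumed, simplified stand-in for data modeled with the paper's ontology:

```python
# Minimal sketch: decide whether a service is open at a given moment from
# structured opening-hours data. The dictionary is an assumed, simplified
# stand-in for data described with the paper's opening-hours ontology.
from datetime import datetime, time

opening_hours = {   # weekday index -> list of (opens, closes) intervals
    0: [(time(9, 0), time(12, 0)), (time(13, 0), time(17, 0))],   # Monday
    1: [(time(9, 0), time(17, 0))],                               # Tuesday
    # remaining weekdays omitted
}

def is_open(hours, moment):
    for opens, closes in hours.get(moment.weekday(), []):
        if opens <= moment.time() < closes:
            return True
    return False

print(is_open(opening_hours, datetime(2014, 6, 16, 10, 30)))   # Monday 10:30 -> True
```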
Citations: 2
Semantic Matchmaking for Kinect-Based Posture and Gesture Recognition
Pub Date : 2014-06-16 DOI: 10.1142/S1793351X14400169
M. Ruta, F. Scioscia, Maria di Summa, S. Ieva, E. Sciascio, M. Sacco
Innovative analysis methods applied to data extracted by off-the-shelf peripherals can provide useful results in activity recognition without requiring large computational resources. In this paper, a framework is proposed for automated posture and gesture recognition, exploiting depth data provided by a commercial tracking device. The detection problem is handled as a semantic-based resource discovery. A general data model and the corresponding ontology provide the formal underpinning for automatic posture and gesture annotation via standard Semantic Web languages. Hence, logic-based matchmaking, exploiting non-standard inference services, makes it possible to: (i) detect postures via on-the-fly comparison of the retrieved annotations with standard posture descriptions stored as instances of a proper Knowledge Base, and (ii) compare subsequent postures in order to recognize gestures. The framework has been implemented in a prototypical tool and experimental tests have been carried out on a reference dataset. Preliminary results indicate the feasibility of the proposed approach.
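A toy illustration of the two-step pipeline the abstract outlines: match a detected posture annotation against stored posture descriptions, then match the resulting posture sequence against gesture definitions. The feature names, the posture and gesture catalogues, and the subset test are assumptions; the paper's semantic matchmaking relies on ontology-based inference rather than this simple set comparison:

```python
# Toy two-step recognizer: (1) map a frame's observed features to a stored
# posture description, (2) match the posture sequence against a gesture
# pattern. Feature names, catalogues, and the subset test are assumptions.
POSTURES = {
    "arms_raised": {"left_hand_above_head", "right_hand_above_head"},
    "t_pose":      {"left_arm_horizontal", "right_arm_horizontal"},
}
GESTURES = {
    "wave_start": ["t_pose", "arms_raised"],   # ordered posture sequence
}

def recognize_posture(observed_features):
    # Return the first stored posture whose description is contained in the observation.
    for name, required in POSTURES.items():
        if required <= observed_features:
            return name
    return None

def recognize_gesture(posture_sequence):
    for name, pattern in GESTURES.items():
        if posture_sequence[-len(pattern):] == pattern:
            return name
    return None

frames = [
    {"left_arm_horizontal", "right_arm_horizontal"},
    {"left_hand_above_head", "right_hand_above_head"},
]
sequence = [recognize_posture(frame) for frame in frames]
print(sequence, recognize_gesture(sequence))   # ['t_pose', 'arms_raised'] wave_start
```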
Citations: 11
Representing Evidence from Biomedical Literature for Clinical Decision Support: Challenges on Semantic Computing and Biomedicine
Pub Date : 2014-06-01 DOI: 10.1109/ICSC.2014.67
William Hsu
The rate at which biomedical literature is being published is quickly outpacing our ability to effectively leverage this information for evidence-based medicine. While papers are readily searchable through databases such as Pub Med, clinicians are often left with the time-consuming task of finding, assessing, interpreting, and applying this information. Tools that structure evidence from published papers using a standardized data model and provide an intuitive query interface for exploring documented biomedical entities would be valuable in utilizing this information as part of the clinical decision making process. This talk presents efforts towards developing computational tools and a representation for modeling and relating evidence from multiple clinical trial reports for lung cancer. Challenges related to representing this information in a machine-interpretable manner, assessing study quality, and handling conflicting evidence are described. I discuss the development of two tools: 1) an annotator tool used to extract information from papers, mapping it to concepts in an ontology-based representation and 2) a visualization that summarizes information about a single paper based on information captured in the model. Using lung cancer as a driving example, I demonstrate how these tools help users apply information reported in literature towards individually tailored medicine.
Citations: 0
Mapping Hierarchical Sources into RDF Using the RML Mapping Language
Pub Date : 2014-06-01 DOI: 10.1109/ICSC.2014.25
Anastasia Dimou, M. V. Sande, Jason Slepicka, Pedro A. Szekely, E. Mannens, Craig A. Knoblock, R. Walle
Incorporating structured data into the Linked Data cloud is still complicated, despite the numerous existing tools. In particular, hierarchically structured data (e.g., JSON) are underrepresented, due to their processing complexity. A uniform mapping formalization for data in different formats, which would enable reuse and exchange between tools and applied data, is missing. This paper describes a novel approach for mapping heterogeneous and hierarchical data sources into RDF using the RML mapping language, an extension of R2RML (the W3C standard for mapping relational databases into RDF). To facilitate those mappings, we present a toolset for producing RML mapping files using the Karma data modelling tool, and for consuming them using a prototype RML processor. A use case shows how RML facilitates the definition and execution of mapping rules to map several heterogeneous sources.
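RML itself is a Turtle-based mapping vocabulary; to stay consistent with the other sketches here, the following Python fragment only illustrates the effect of such a mapping on a JSON source, i.e., what an RML processor would do when executing an iterator, a subject template, and a reference. The namespace, the sample document, and the property names are assumptions:

```python
# Sketch of the effect of an RML mapping over a JSON source: iterate over the
# records selected by an iterator (e.g. "$.people[*]"), build subject IRIs from
# a template, and emit one triple per mapped reference. Vocabulary and sample
# data are illustrative assumptions, not an actual RML mapping document.
import json
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")

source = json.loads("""
{"people": [
  {"id": "1", "name": "Alice"},
  {"id": "2", "name": "Bob"}
]}
""")

g = Graph()
g.bind("ex", EX)
for person in source["people"]:                                      # rml:iterator
    subject = URIRef("http://example.org/person/" + person["id"])    # rr:template
    g.add((subject, EX.name, Literal(person["name"])))               # rml:reference "name"

print(g.serialize(format="turtle"))
```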
Citations: 39