
Latest publications in CEUR workshop proceedings

Personalized Health Knowledge Graph.
Pub Date : 2018-10-01
Amelie Gyrard, Manas Gaur, Saeedeh Shekarpour, Krishnaprasad Thirunarayan, Amit Sheth

Our current health applications do not adequately take into account contextual and personalized knowledge about patients. In order to design "Personalized Coach for Healthcare" applications to manage chronic diseases, there is a need to create a Personalized Healthcare Knowledge Graph (PHKG) that takes into consideration a patient's health condition (personalized knowledge) and enriches that with contextualized knowledge from environmental sensors and Web of Data (e.g., symptoms and treatments for diseases). To develop PHKG, aggregating knowledge from various heterogeneous sources such as the Internet of Things (IoT) devices, clinical notes, and Electronic Medical Records (EMRs) is necessary. In this paper, we explain the challenges of collecting, managing, analyzing, and integrating patients' health data from various sources in order to synthesize and deduce meaningful information embodying the vision of the Data, Information, Knowledge, and Wisdom (DIKW) pyramid. Furthermore, we sketch a solution that combines: 1) IoT data analytics, and 2) explicit knowledge and illustrate it using three chronic disease use cases - asthma, obesity, and Parkinson's.
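To make the notion of a PHKG concrete, the sketch below builds a tiny graph that joins personalized facts (a patient's condition and treatment) with a contextual sensor observation, then queries across both. It only illustrates the idea using rdflib; the namespace, property names, and facts are invented for this example and are not from the paper.

```python
# Minimal illustrative sketch (not from the paper): a tiny personalized health
# knowledge graph linking patient-specific facts with contextual sensor data.
# The ex: namespace, properties, and facts are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/phkg/")

g = Graph()
g.bind("ex", EX)

# Personalized knowledge: the patient's condition and a treatment for it.
g.add((EX.patient42, RDF.type, EX.Patient))
g.add((EX.patient42, EX.hasCondition, EX.Asthma))
g.add((EX.Asthma, EX.hasTreatment, EX.InhaledCorticosteroid))

# Contextualized knowledge: an environmental sensor observation for the patient.
g.add((EX.obs1, RDF.type, EX.AirQualityObservation))
g.add((EX.obs1, EX.observedProperty, EX.PM25))
g.add((EX.obs1, EX.hasValue, Literal(61.0, datatype=XSD.double)))
g.add((EX.patient42, EX.hasObservation, EX.obs1))

# Join personalized and contextual facts, e.g. flag asthma patients exposed
# to elevated particulate matter.
q = """
SELECT ?patient ?value WHERE {
  ?patient ex:hasCondition ex:Asthma ;
           ex:hasObservation ?obs .
  ?obs ex:observedProperty ex:PM25 ;
       ex:hasValue ?value .
  FILTER(?value > 50)
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(f"Alert for {row.patient}: PM2.5 = {row.value}")
```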

{"title":"Personalized Health Knowledge Graph.","authors":"Amelie Gyrard,&nbsp;Manas Gaur,&nbsp;Saeedeh Shekarpour,&nbsp;Krishnaprasad Thirunarayan,&nbsp;Amit Sheth","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Our current health applications do not adequately take into account contextual and personalized knowledge about patients. In order to design \"Personalized Coach for Healthcare\" applications to manage chronic diseases, there is a need to create a Personalized Healthcare Knowledge Graph (PHKG) that takes into consideration a patient's health condition (personalized knowledge) and enriches that with contextualized knowledge from environmental sensors and Web of Data (e.g., symptoms and treatments for diseases). To develop PHKG, aggregating knowledge from various heterogeneous sources such as the Internet of Things (IoT) devices, clinical notes, and Electronic Medical Records (EMRs) is necessary. In this paper, we explain the challenges of collecting, managing, analyzing, and integrating patients' health data from various sources in order to synthesize and deduce meaningful information embodying the vision of the Data, Information, Knowledge, and Wisdom (DIKW) pyramid. Furthermore, we sketch a solution that combines: 1) IoT data analytics, and 2) explicit knowledge and illustrate it using three chronic disease use cases - asthma, obesity, and Parkinson's.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"2317 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8532078/pdf/nihms-1743812.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39551742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards automated pain detection in children using facial and electrodermal activity
Pub Date : 2018-07-13 DOI: 10.1007/978-3-030-12738-1_13
Xiaojing Xu, Busra T. Susam, H. Nezamfar, K. Craig, Damaris Diaz, Jeannie S. Huang, M. Goodwin, M. Akçakaya, V. D. Sa
{"title":"Towards automated pain detection in children using facial and electrodermal activity","authors":"Xiaojing Xu, Busra T. Susam, H. Nezamfar, K. Craig, Damaris Diaz, Jeannie S. Huang, M. Goodwin, M. Akçakaya, V. D. Sa","doi":"10.1007/978-3-030-12738-1_13","DOIUrl":"https://doi.org/10.1007/978-3-030-12738-1_13","url":null,"abstract":"","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"2142 1","pages":"208-211"},"PeriodicalIF":0.0,"publicationDate":"2018-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42071686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Automated Pain Detection in Facial Videos of Children using Human-Assisted Transfer Learning.
Pub Date : 2018-07-01
Xiaojing Xu, Kenneth D Craig, Damaris Diaz, Matthew S Goodwin, Murat Akcakaya, Büşra Tuğçe Susam, Jeannie S Huang, Virginia R de Sa

Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity provides sensitive and specific information about pain, and computer vision algorithms have been developed to automatically detect Facial Action Units (AUs) defined by the Facial Action Coding System (FACS). Our prior work utilized information from computer vision, i.e., automatically detected facial AUs, to develop classifiers to distinguish between pain and no-pain conditions. However, application of pain/no-pain classifiers based on automated AU codings across different environmental domains results in diminished performance. In contrast, classifiers based on manually coded AUs demonstrate reduced environmentally-based variability in performance. In this paper, we train a machine learning model to recognize pain using AUs coded by a computer vision system embedded in a software package called iMotions. We also study the relationship between iMotions (automatically) and human (manually) coded AUs. We find that AUs coded automatically are different from those coded by a human trained in the FACS system, and that the human coder is less sensitive to environmental changes. To improve classification performance in the current work, we applied transfer learning by training another machine learning model to map automated AU codings to a subspace of manual AU codings to enable more robust pain recognition performance when only automatically coded AUs are available for the test data. With this transfer learning method, we improved the Area Under the ROC Curve (AUC) on independent data from new participants in our target domain from 0.67 to 0.72.
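The transfer-learning step described above, mapping automatically coded AUs into the space of manually coded AUs before pain classification, can be pictured with a minimal scikit-learn sketch. This is not the authors' implementation; the regression and logistic-regression choices and the random placeholder data are assumptions for illustration only.

```python
# Illustrative sketch only (not the authors' code): learn a mapping from
# automated AU codings to the manual (FACS) AU space, train the pain
# classifier on manual codings, and at test time map automated codings into
# that space. All data below is random placeholder data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Training domain: frames with both automated and manual AU codings.
X_auto_train = rng.normal(size=(500, 20))    # automated AU intensities
X_manual_train = rng.normal(size=(500, 12))  # manual (FACS-coded) AU intensities
y_train = rng.integers(0, 2, size=500)       # pain / no-pain labels

# Step 1: transfer model maps automated codings into the manual AU subspace.
transfer = LinearRegression().fit(X_auto_train, X_manual_train)

# Step 2: pain classifier is trained on the manual-style features, which are
# assumed to be less sensitive to environmental variation.
clf = LogisticRegression(max_iter=1000).fit(X_manual_train, y_train)

# Test domain: only automated AU codings are available, so map them first.
X_auto_test = rng.normal(size=(200, 20))
y_test = rng.integers(0, 2, size=200)
scores = clf.predict_proba(transfer.predict(X_auto_test))[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```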

{"title":"Automated Pain Detection in Facial Videos of Children using Human-Assisted Transfer Learning.","authors":"Xiaojing Xu,&nbsp;Kenneth D Craig,&nbsp;Damaris Diaz,&nbsp;Matthew S Goodwin,&nbsp;Murat Akcakaya,&nbsp;Büşra Tuğçe Susam,&nbsp;Jeannie S Huang,&nbsp;Virginia R de Sa","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity provides sensitive and specific information about pain, and computer vision algorithms have been developed to automatically detect Facial Action Units (AUs) defined by the Facial Action Coding System (FACS). Our prior work utilized information from computer vision, i.e., automatically detected facial AUs, to develop classifiers to distinguish between pain and no-pain conditions. However, application of pain/no-pain classifiers based on automated AU codings across different environmental domains results in diminished performance. In contrast, classifiers based on manually coded AUs demonstrate reduced environmentally-based variability in performance. In this paper, we train a machine learning model to recognize pain using AUs coded by a computer vision system embedded in a software package called iMotions. We also study the relationship between iMotions (automatically) and human (manually) coded AUs. We find that AUs coded automatically are different from those coded by a human trained in the FACS system, and that the human coder is less sensitive to environmental changes. To improve classification performance in the current work, we applied transfer learning by training another machine learning model to map automated AU codings to a subspace of manual AU codings to enable more robust pain recognition performance when only automatically coded AUs are available for the test data. With this transfer learning method, we improved the Area Under the ROC Curve (AUC) on independent data from new participants in our target domain from 0.67 to 0.72.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"2142 ","pages":"10-21"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6352979/pdf/nihms-1001649.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41164655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated pain detection in facial videos of children using human-assisted transfer learning
Pub Date : 2018-07-01 DOI: 10.1007/978-3-030-12738-1_12
Xiaojing Xu, K. Craig, Damaris Diaz, M. Goodwin, M. Akçakaya, Busra T. Susam, Jeannie S. Huang, V. D. Sa
{"title":"Automated pain detection in facial videos of children using human-assisted transfer learning","authors":"Xiaojing Xu, K. Craig, Damaris Diaz, M. Goodwin, M. Akçakaya, Busra T. Susam, Jeannie S. Huang, V. D. Sa","doi":"10.1007/978-3-030-12738-1_12","DOIUrl":"https://doi.org/10.1007/978-3-030-12738-1_12","url":null,"abstract":"","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"2142 1","pages":"10-21"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-030-12738-1_12","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49169328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Towards Automated Pain Detection in Children using Facial and Electrodermal Activity.
Pub Date : 2018-07-01
Xiaojing Xu, Büşra Tuğçe Susam, Hooman Nezamfar, Damaris Diaz, Kenneth D Craig, Matthew S Goodwin, Murat Akcakaya, Jeannie S Huang, Virginia R de Sa

Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity and electrodermal activity (EDA) provide rich information about pain, and both have been used in automated pain detection. In this paper, we discuss preliminary steps towards fusing models trained on video and EDA features respectively. We compare fusion models using original video features and those using transferred video features which are less sensitive to environmental changes. We demonstrate the benefit of the fusion and the transferred video features with a special test case involving domain adaptation and improved performance relative to using EDA and video features alone.
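One simple way to picture the fusion of video and EDA models is score-level (late) fusion, averaging each model's predicted pain probability. The sketch below is illustrative only and is not the authors' fusion architecture; the models, equal weights, and random data are assumptions.

```python
# Illustrative sketch (not the authors' implementation): score-level fusion
# of a facial-video model and an EDA model by averaging predicted pain
# probabilities. Features and labels are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_video, X_eda = rng.normal(size=(300, 12)), rng.normal(size=(300, 4))
y = rng.integers(0, 2, size=300)

video_model = LogisticRegression(max_iter=1000).fit(X_video, y)
eda_model = LogisticRegression(max_iter=1000).fit(X_eda, y)

# Late fusion: average the two models' probability estimates.
fused = 0.5 * video_model.predict_proba(X_video)[:, 1] \
      + 0.5 * eda_model.predict_proba(X_eda)[:, 1]
print("Fused AUC (on training data, for illustration):", roc_auc_score(y, fused))
```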

{"title":"Towards Automated Pain Detection in Children using Facial and Electrodermal Activity.","authors":"Xiaojing Xu,&nbsp;Büsra Tuğce Susam,&nbsp;Hooman Nezamfar,&nbsp;Damaris Diaz,&nbsp;Kenneth D Craig,&nbsp;Matthew S Goodwin,&nbsp;Murat Akcakaya,&nbsp;Jeannie S Huang,&nbsp;R de Sa Virginia","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity and electro- dermal activity (EDA) provide rich information about pain, and both have been used in automated pain detection. In this paper, we discuss preliminary steps towards fusing models trained on video and EDA features respectively. We compare fusion models using original video features and those using transferred video features which are less sensitive to environmental changes. We demonstrate the benefit of the fusion and the transferred video features with a special test case involving domain adaptation and improved performance relative to using EDA and video features alone.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"2142 ","pages":"208-211"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6352962/pdf/nihms-1001656.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41175227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Personalizing Mobile Fitness Apps using Reinforcement Learning.
Pub Date : 2018-03-07
Mo Zhou, Yonatan Mintz, Yoshimi Fukuoka, Ken Goldberg, Elena Flowers, Philip Kaminsky, Alejandro Castillejo, Anil Aswani

Despite the vast number of mobile fitness applications (apps) and their potential advantages in promoting physical activity, many existing apps lack behavior-change features and are not able to maintain behavior change motivation. This paper describes a novel fitness app called CalFit, which implements important behavior-change features like dynamic goal setting and self-monitoring. CalFit uses a reinforcement learning algorithm to generate personalized daily step goals that are challenging but attainable. We conducted the Mobile Student Activity Reinforcement (mSTAR) study with 13 college students to evaluate the efficacy of the CalFit app. The control group (receiving goals of 10,000 steps/day) had a decrease in daily step count of 1,520 (SD ± 740) between baseline and 10 weeks, compared to an increase of 700 (SD ± 830) in the intervention group (receiving personalized step goals). The difference in daily steps between the two groups was 2,220, with a statistically significant p = 0.039.
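The abstract does not detail the reinforcement-learning algorithm, so the toy sketch below shows one generic way adaptive goal setting could work: an epsilon-greedy bandit over candidate goal multipliers, rewarded when a simulated user meets the day's goal. This is not CalFit's actual algorithm; the baseline, multipliers, and user model are invented.

```python
# Toy sketch (not CalFit's algorithm): an epsilon-greedy bandit that picks a
# daily goal multiplier and is rewarded when the simulated user meets the goal,
# illustrating how "challenging but attainable" goals could adapt over time.
import random

random.seed(0)
multipliers = [1.0, 1.1, 1.2, 1.3]        # candidate goal adjustments (assumed)
value = {m: 0.0 for m in multipliers}      # estimated reward per arm
counts = {m: 0 for m in multipliers}
epsilon = 0.2
baseline = 6000                            # user's recent average steps (assumed)

def simulated_steps(goal):
    # Hypothetical user: walks near baseline, tries a bit harder for modest goals.
    effort = min(goal, baseline * 1.15)
    return random.gauss(effort, 800)

for day in range(60):
    m = random.choice(multipliers) if random.random() < epsilon \
        else max(multipliers, key=lambda a: value[a])
    goal = baseline * m
    reward = 1.0 if simulated_steps(goal) >= goal else 0.0
    counts[m] += 1
    value[m] += (reward - value[m]) / counts[m]   # incremental mean update

print({round(m, 2): round(v, 2) for m, v in value.items()})
```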

{"title":"Personalizing Mobile Fitness Apps using Reinforcement Learning.","authors":"Mo Zhou, Yonatan Mintz, Yoshimi Fukuoka, Ken Goldberg, Elena Flowers, Philip Kaminsky, Alejandro Castillejo, Anil Aswani","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Despite the vast number of mobile fitness applications (apps) and their potential advantages in promoting physical activity, many existing apps lack behavior-change features and are not able to maintain behavior change motivation. This paper describes a novel fitness app called CalFit, which implements important behavior-change features like dynamic goal setting and self-monitoring. CalFit uses a reinforcement learning algorithm to generate personalized daily step goals that are challenging but attainable. We conducted the Mobile Student Activity Reinforcement (mSTAR) study with 13 college students to evaluate the efficacy of the CalFit app. The control group (receiving goals of 10,000 steps/day) had a decrease in daily step count of 1,520 (SD ± 740) between baseline and 10-weeks, compared to an increase of 700 (SD ± 830) in the intervention group (receiving personalized step goals). The difference in daily steps between the two groups was 2,220, with a statistically significant <i>p</i> = 0.039.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"2068 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7220419/pdf/nihms966774.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37932251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Automatic Generation of Portions of Scientific Papers for Large Multi-Institutional Collaborations Based on Semantic Metadata.
Pub Date : 2017-10-01
MiHyun Jang, Tejal Patted, Yolanda Gil, Daniel Garijo, Varun Ratnakar, Jie Ji, Prince Wang, Aggie McMahon, Paul M Thompson, Neda Jahanshad

Scientific collaborations involving multiple institutions are increasingly commonplace. It is not unusual for publications to have dozens or hundreds of authors, in some cases even a few thousand. Gathering the information for such papers may be very time consuming, since the author list must include authors who made different kinds of contributions and whose affiliations are hard to track. Similarly, when datasets are contributed by multiple institutions, the collection and processing details may also be hard to assemble due to the many individuals involved. We present our work to date on automatically generating author lists and other portions of scientific papers for multi-institutional collaborations based on the metadata created to represent the people, data, and activities involved. Our initial focus is ENIGMA, a large international collaboration for neuroimaging genetics.
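As a rough illustration of generating a paper fragment from contributor metadata, the sketch below assembles an author byline with affiliation footnotes from a hypothetical metadata structure. It is not the ENIGMA tooling; the data model and formatting conventions are assumptions.

```python
# Minimal sketch (hypothetical data model, not the paper's system): build an
# author byline with affiliation footnotes from structured contributor metadata.
contributors = [
    {"name": "A. Smith", "affiliation": "Univ. A", "roles": ["analysis"]},
    {"name": "B. Jones", "affiliation": "Univ. B", "roles": ["data collection"]},
    {"name": "C. Lee",   "affiliation": "Univ. A", "roles": ["writing"]},
]

# Assign one footnote number per distinct affiliation, in order of appearance.
affiliations = {}
for c in contributors:
    affiliations.setdefault(c["affiliation"], len(affiliations) + 1)

byline = ", ".join(f'{c["name"]}^{affiliations[c["affiliation"]]}' for c in contributors)
footnotes = "; ".join(f"{i}. {aff}" for aff, i in affiliations.items())
print(byline)
print(footnotes)
```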

{"title":"Towards Automatic Generation of Portions of Scientific Papers for Large Multi-Institutional Collaborations Based on Semantic Metadata.","authors":"MiHyun Jang,&nbsp;Tejal Patted,&nbsp;Yolanda Gil,&nbsp;Daniel Garijo,&nbsp;Varun Ratnakar,&nbsp;Jie Ji,&nbsp;Prince Wang,&nbsp;Aggie McMahon,&nbsp;Paul M Thompson,&nbsp;Neda Jahanshad","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Scientific collaborations involving multiple institutions are increasingly commonplace. It is not unusual for publications to have dozens or hundreds of authors, in some cases even a few thousands. Gathering the information for such papers may be very time consuming, since the author list must include authors who made different kinds of contributions and whose affiliations are hard to track. Similarly, when datasets are contributed by multiple institutions, the collection and processing details may also be hard to assemble due to the many individuals involved. We present our work to date on automatically generating author lists and other portions of scientific papers for multi-institutional collaborations based on the metadata created to represent the people, data, and activities involved. Our initial focus is ENIGMA, a large international collaboration for neuroimaging genetics.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"1931 ","pages":"63-70"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6053267/pdf/nihms980712.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36333360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UArizona at the CLEF eRisk 2017 Pilot Task: Linear and Recurrent Models for Early Depression Detection.
Pub Date : 2017-09-01 Epub Date: 2017-07-13
Farig Sadeque, Dongfang Xu, Steven Bethard

The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines while using the same feature sets.
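The non-sequential model described above can be pictured as an SVM over bag-of-words features augmented with a depression-lexicon count. The sketch below is only in that spirit and is not the team's system; the posts, labels, and lexicon are toy placeholders.

```python
# Illustrative sketch (not the team's system): a linear SVM over bag-of-words
# features plus a depression-lexicon count feature. Texts, labels, and the
# lexicon are toy examples.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

posts = ["I feel hopeless and tired all the time",
         "had a great day hiking with friends",
         "can't sleep, everything feels empty",
         "excited about the new game release"]
labels = [1, 0, 1, 0]                      # 1 = at-risk, 0 = control (toy labels)
lexicon = {"hopeless", "empty", "tired", "worthless"}

vec = CountVectorizer()
X_bow = vec.fit_transform(posts).toarray()
lex_counts = np.array([[sum(w in lexicon for w in p.lower().split())] for p in posts])
X = np.hstack([X_bow, lex_counts])         # append the lexicon-count feature

clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```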

{"title":"UArizona at the CLEF eRisk 2017 Pilot Task: Linear and Recurrent Models for Early Depression Detection.","authors":"Farig Sadeque,&nbsp;Dongfang Xu,&nbsp;Steven Bethard","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a users' posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines while using the same feature sets.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"1866 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5654552/pdf/nihms912392.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35552112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Clinical Information Extraction at the CLEF eHealth Evaluation lab 2016.
Pub Date : 2016-09-01
Aurélie Névéol, K Bretonnel Cohen, Cyril Grouin, Thierry Hamon, Thomas Lavergne, Liadh Kelly, Lorraine Goeuriot, Grégoire Rey, Aude Robert, Xavier Tannier, Pierre Zweigenbaum

This paper reports on Task 2 of the 2016 CLEF eHealth evaluation lab which extended the previous information extraction tasks of ShARe/CLEF eHealth evaluation labs. The task continued with named entity recognition and normalization in French narratives, as offered in CLEF eHealth 2015. Named entity recognition involved ten types of entities including disorders that were defined according to Semantic Groups in the Unified Medical Language System® (UMLS®), which was also used for normalizing the entities. In addition, we introduced a large-scale classification task in French death certificates, which consisted of extracting causes of death as coded in the International Classification of Diseases, tenth revision (ICD10). Participant systems were evaluated against a blind reference standard of 832 titles of scientific articles indexed in MEDLINE, 4 drug monographs published by the European Medicines Agency (EMEA) and 27,850 death certificates using Precision, Recall and F-measure. In total, seven teams participated, including five in the entity recognition and normalization task, and five in the death certificate coding task. Three teams submitted their systems to our newly offered reproducibility track. For entity recognition, the highest performance was achieved on the EMEA corpus, with an overall F-measure of 0.702 for plain entities recognition and 0.529 for normalized entity recognition. For entity normalization, the highest performance was achieved on the MEDLINE corpus, with an overall F-measure of 0.552. For death certificate coding, the highest performance was 0.848 F-measure.
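The Precision/Recall/F-measure evaluation against a blind reference standard can be illustrated by a micro-averaged scorer over sets of (document, code) predictions. This is a sketch of the evaluation style, not the official CLEF scorer; the documents and ICD-10 codes below are hypothetical.

```python
# Sketch of the evaluation style described above (not the official CLEF
# scorer): micro-averaged precision, recall and F-measure over sets of
# (document, code) predictions versus a reference standard.
def micro_prf(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

gold = {("doc1", "I10"), ("doc1", "J45"), ("doc2", "E66")}   # hypothetical ICD-10 codes
pred = {("doc1", "I10"), ("doc2", "E66"), ("doc2", "C34")}
print("P=%.3f R=%.3f F=%.3f" % micro_prf(gold, pred))
```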

{"title":"Clinical Information Extraction at the CLEF eHealth Evaluation lab 2016.","authors":"Aurélie Névéol,&nbsp;K Bretonnel Cohen,&nbsp;Cyril Grouin,&nbsp;Thierry Hamon,&nbsp;Thomas Lavergne,&nbsp;Liadh Kelly,&nbsp;Lorraine Goeuriot,&nbsp;Grégoire Rey,&nbsp;Aude Robert,&nbsp;Xavier Tannier,&nbsp;Pierre Zweigenbaum","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This paper reports on Task 2 of the 2016 CLEF eHealth evaluation lab which extended the previous information extraction tasks of ShARe/CLEF eHealth evaluation labs. The task continued with named entity recognition and normalization in French narratives, as offered in CLEF eHealth 2015. Named entity recognition involved ten types of entities including <i>disorders</i> that were defined according to Semantic Groups in the Unified Medical Language System<sup>®</sup> (UMLS<sup>®</sup>), which was also used for normalizing the entities. In addition, we introduced a large-scale classification task in French death certificates, which consisted of extracting causes of death as coded in the International Classification of Diseases, tenth revision (ICD10). Participant systems were evaluated against a blind reference standard of 832 titles of scientific articles indexed in MEDLINE, 4 drug monographs published by the European Medicines Agency (EMEA) and 27,850 death certificates using Precision, Recall and F-measure. In total, seven teams participated, including five in the entity recognition and normalization task, and five in the death certificate coding task. Three teams submitted their systems to our newly offered reproducibility track. For entity recognition, the highest performance was achieved on the EMEA corpus, with an overall F-measure of 0.702 for plain entities recognition and 0.529 for normalized entity recognition. For entity normalization, the highest performance was achieved on the MEDLINE corpus, with an overall F-measure of 0.552. For death certificate coding, the highest performance was 0.848 F-measure.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":"1609 ","pages":"28-42"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5756095/pdf/nihms921614.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35715159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Identifying Missing Hierarchical Relations in SNOMED CT from Logical Definitions Based on the Lexical Features of Concept Names.
Pub Date : 2016-08-01
Olivier Bodenreider

Objectives: To identify missing hierarchical relations in SNOMED CT from logical definitions based on the lexical features of concept names.

Methods: We first create logical definitions from the lexical features of concept names, which we represent in OWL EL. We infer hierarchical (subClassOf) relations among these concepts using the ELK reasoner. Finally, we compare the hierarchy obtained from lexical features to the original SNOMED CT hierarchy. We review the differences manually for evaluation purposes.

Results: Applied to 15,833 disorder and procedure concepts, our approach identified 559 potentially missing hierarchical relations, of which 78% were deemed valid.

Conclusions: This lexical approach to quality assurance is easy to implement, efficient and scalable.
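The core idea in the Methods above, treating a concept's definition as the conjunction of lexical features of its name and inferring subClassOf relations by subsumption, can be shown with a toy structural check: a concept is subsumed by another when the latter's feature set is a proper subset of the former's. This stands in for the OWL EL plus ELK pipeline and uses invented concepts; it is not the paper's implementation.

```python
# Toy sketch of the idea (not the actual OWL EL + ELK pipeline): if a concept's
# logical definition is the conjunction of its lexical features, then A is
# subsumed by B whenever B's features are a proper subset of A's. Concepts and
# features below are invented for illustration.
definitions = {
    "Fracture of femur": {"fracture", "femur"},
    "Open fracture of femur": {"fracture", "femur", "open"},
    "Fracture of bone": {"fracture"},
}

def inferred_subclass_pairs(defs):
    pairs = set()
    for a, fa in defs.items():
        for b, fb in defs.items():
            if a != b and fb < fa:          # proper subset of features => subclass
                pairs.add((a, b))
    return pairs

existing = {("Fracture of femur", "Fracture of bone")}   # stand-in for the released hierarchy
inferred = inferred_subclass_pairs(definitions)
print("Potentially missing relations:", inferred - existing)
```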

{"title":"Identifying Missing Hierarchical Relations in SNOMED CT from Logical Definitions Based on the Lexical Features of Concept Names.","authors":"Olivier Bodenreider","doi":"","DOIUrl":"","url":null,"abstract":"<p><strong>Objectives: </strong>To identify missing hierarchical relations in SNOMED CT from logical definitions based on the lexical features of concept names.</p><p><strong>Methods: </strong>We first create logical definitions from the lexical features of concept names, which we represent in OWL EL. We infer hierarchical (<i>subClassOf</i>) relations among these concepts using the ELK reasoner. Finally, we compare the hierarchy obtained from lexical features to the original SNOMED CT hierarchy. We review the differences manually for evaluation purposes.</p><p><strong>Results: </strong>Applied to 15,833 disorder and procedure concepts, our approach identified 559 potentially missing hierarchical relations, of which 78% were deemed valid.</p><p><strong>Conclusions: </strong>This lexical approach to quality assurance is easy to implement, efficient and scalable.</p>","PeriodicalId":72554,"journal":{"name":"CEUR workshop proceedings","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584353/pdf/nihms-1840462.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40568894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0