Latest publications in CEUR workshop proceedings

Comparing the representation of medicinal products in RxNorm and SNOMED CT - Consequences on interoperability.
Pub Date : 2019-08-01
Jean Noel Nikiema, Olivier Bodenreider

Objectives: To compare the representation of medicinal products in RxNorm and SNOMED CT and assess the consequences on interoperability.

Methods: To compare the two models, we manually establish equivalences between the types and definitional features of medicinal product entities in RxNorm and SNOMED CT. We highlight their similarities and differences.

Results: Both models share major definitional features including ingredient (or substance), strength and dose form. SNOMED CT is more rigorous and better aligned with international standards. In contrast, RxNorm contains implicit knowledge, simplifications and ambiguities, but its model is simpler.

Conclusions: Since their models are largely compatible, medicinal products from RxNorm and SNOMED CT are expected to be interoperable. However, specific aspects of the alignment between the two models require particular attention.

Citations: 0
Explaining Deep Classification of Time-Series Data with Learned Prototypes.
Pub Date : 2019-08-01
Alan H Gee, Diego Garcia-Olano, Joydeep Ghosh, David Paydarfar

The emergence of deep learning networks raises a need for explainable AI so that users and domain experts can be confident applying them to high-risk decisions. In this paper, we leverage data from the latent space induced by deep learning models to learn stereotypical representations, or "prototypes", during training to elucidate the algorithmic decision-making process. We study how leveraging prototypes affects classification decisions on two-dimensional time-series data in a few different settings: (1) electrocardiogram (ECG) waveforms to detect clinical bradycardia, a slowing of heart rate, in preterm infants; (2) respiration waveforms to detect apnea of prematurity; and (3) audio waveforms to classify spoken digits. We improve upon existing models by optimizing for increased prototype diversity and robustness, visualize how these prototypes in the latent space are used by the model to distinguish classes, and show that prototypes are capable of learning features of two-dimensional time-series data to produce explainable insights during classification tasks. We show that the prototypes are capable of learning real-world features - bradycardia in ECG, apnea in respiration, and articulation in speech - as well as features within sub-classes. Our novel work leverages a learned prototypical framework on two-dimensional time-series data to produce explainable insights during classification tasks.
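The nearest-prototype decision rule the abstract describes can be illustrated with a minimal numpy sketch (an editorial illustration, not the authors' implementation; the deep encoder and training loop are omitted, and all names are assumptions):

```python
import numpy as np

def prototype_predict(z, prototypes, proto_labels):
    """Assign each latent vector in z the label of its nearest prototype.

    z           : (n, d) latent encodings of time-series windows
    prototypes  : (k, d) learned prototype vectors in the same latent space
    proto_labels: (k,)   class label attached to each prototype
    """
    # Squared Euclidean distance from every encoding to every prototype.
    d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return proto_labels[d2.argmin(axis=1)]

def diversity_penalty(prototypes):
    """Penalty that is large when prototypes collapse onto each other,
    small when they are spread out -- a stand-in for the diversity term."""
    d2 = ((prototypes[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    k = len(prototypes)
    off_diagonal = d2[~np.eye(k, dtype=bool)]
    return np.exp(-off_diagonal).mean()
```

In the full model, prototypes are trained jointly with the encoder, and a diversity term like the one above keeps them from collapsing onto a single cluster.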

Citations: 0
The New SNOMED CT International Medicinal Product Model.
Pub Date : 2018-12-01
Olivier Bodenreider, Julie James

Objectives: To present the new SNOMED CT international medicinal product model.

Methods: We present the main elements of the model, with focus on types of entities and their interrelations, definitional attributes for clinical drugs, and categories of groupers.

Results: We present the status of implementation as of July 2018 and illustrate differences between the original and new models through an example.

Conclusions: Benefits of the new medicinal product model include comprehensive representation of clinical drugs, logical definitions with necessary and sufficient conditions for all medicinal product entities, better high-level organization through distinct categories of groupers, and compliance with international standards.

Citations: 0
Personalized Health Knowledge Graph.
Pub Date : 2018-10-01
Amelie Gyrard, Manas Gaur, Saeedeh Shekarpour, Krishnaprasad Thirunarayan, Amit Sheth

Our current health applications do not adequately take into account contextual and personalized knowledge about patients. In order to design "Personalized Coach for Healthcare" applications to manage chronic diseases, there is a need to create a Personalized Healthcare Knowledge Graph (PHKG) that takes into consideration a patient's health condition (personalized knowledge) and enriches that with contextualized knowledge from environmental sensors and Web of Data (e.g., symptoms and treatments for diseases). To develop PHKG, aggregating knowledge from various heterogeneous sources such as the Internet of Things (IoT) devices, clinical notes, and Electronic Medical Records (EMRs) is necessary. In this paper, we explain the challenges of collecting, managing, analyzing, and integrating patients' health data from various sources in order to synthesize and deduce meaningful information embodying the vision of the Data, Information, Knowledge, and Wisdom (DIKW) pyramid. Furthermore, we sketch a solution that combines: 1) IoT data analytics, and 2) explicit knowledge and illustrate it using three chronic disease use cases - asthma, obesity, and Parkinson's.

Citations: 0
Ontology-Enhanced Representations of Non-image Data in The Cancer Imaging Archive.
Pub Date : 2018-08-01
Jonathan P Bona, Tracy S Nolan, Mathias Brochhausen

The Cancer Imaging Archive (TCIA) hosts over 11 million de-identified medical images related to cancer for research reuse. These are organized around DICOM-format radiological collections that are grouped by disease type, modality, or research focus. Many collections also include diverse non-image datasets in a variety of formats without a common approach to representing the entities that the data are about. This paper describes work to make these diverse non-image data more accessible and usable by transforming them into integrated semantic representations using Open Biomedical Ontologies, highlights obstacles encountered in the data, and presents detailed representations of data found in select collections.

Citations: 0
Towards automated pain detection in children using facial and electrodermal activity
Pub Date : 2018-07-13 DOI: 10.1007/978-3-030-12738-1_13
Xiaojing Xu, Busra T. Susam, H. Nezamfar, K. Craig, Damaris Diaz, Jeannie S. Huang, M. Goodwin, M. Akçakaya, V. D. Sa
Citations: 12
Automated Pain Detection in Facial Videos of Children using Human-Assisted Transfer Learning.
Pub Date : 2018-07-01
Xiaojing Xu, Kenneth D Craig, Damaris Diaz, Matthew S Goodwin, Murat Akcakaya, Büşra Tuğçe Susam, Jeannie S Huang, Virginia R de Sa

Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity provides sensitive and specific information about pain, and computer vision algorithms have been developed to automatically detect Facial Action Units (AUs) defined by the Facial Action Coding System (FACS). Our prior work utilized information from computer vision, i.e., automatically detected facial AUs, to develop classifiers to distinguish between pain and no-pain conditions. However, application of pain/no-pain classifiers based on automated AU codings across different environmental domains results in diminished performance. In contrast, classifiers based on manually coded AUs demonstrate reduced environmentally-based variability in performance. In this paper, we train a machine learning model to recognize pain using AUs coded by a computer vision system embedded in a software package called iMotions. We also study the relationship between iMotions (automatically) and human (manually) coded AUs. We find that AUs coded automatically are different from those coded by a human trained in the FACS system, and that the human coder is less sensitive to environmental changes. To improve classification performance in the current work, we applied transfer learning by training another machine learning model to map automated AU codings to a subspace of manual AU codings to enable more robust pain recognition performance when only automatically coded AUs are available for the test data. With this transfer learning method, we improved the Area Under the ROC Curve (AUC) on independent data from new participants in our target domain from 0.67 to 0.72.
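The transfer-learning step in the abstract maps automated AU codings toward the manual-AU subspace. A linear least-squares fit gives a minimal sketch of such a mapping (an editorial illustration under assumptions; the paper's mapping model may differ, and the function names are hypothetical):

```python
import numpy as np

def fit_au_mapping(auto_aus, manual_aus):
    """Least-squares map from automated AU codings to manually coded AUs.

    auto_aus  : (n, a) automatically coded AU intensities for n frames
    manual_aus: (n, m) human-coded AU intensities for the same frames
    Returns (W, b) such that manual_aus is approximated by auto_aus @ W + b.
    """
    X = np.hstack([auto_aus, np.ones((len(auto_aus), 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, manual_aus, rcond=None)
    return coef[:-1], coef[-1]

def map_aus(auto_aus, W, b):
    """Project automated codings into the manual-AU subspace."""
    return auto_aus @ W + b
```

A pain classifier trained on manual AUs can then be applied to `map_aus(...)` output when only automatically coded AUs are available at test time, which is the scenario the abstract describes.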

Citations: 0
Automated pain detection in facial videos of children using human-assisted transfer learning
Pub Date : 2018-07-01 DOI: 10.1007/978-3-030-12738-1_12
Xiaojing Xu, K. Craig, Damaris Diaz, M. Goodwin, M. Akçakaya, Busra T. Susam, Jeannie S. Huang, V. D. Sa
Citations: 24
Towards Automated Pain Detection in Children using Facial and Electrodermal Activity.
Pub Date : 2018-07-01
Xiaojing Xu, Büsra Tuğce Susam, Hooman Nezamfar, Damaris Diaz, Kenneth D Craig, Matthew S Goodwin, Murat Akcakaya, Jeannie S Huang, Virginia R de Sa

Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity and electrodermal activity (EDA) provide rich information about pain, and both have been used in automated pain detection. In this paper, we discuss preliminary steps towards fusing models trained on video and EDA features, respectively. We compare fusion models using original video features and those using transferred video features, which are less sensitive to environmental changes. We demonstrate the benefit of the fusion and the transferred video features with a special test case involving domain adaptation and improved performance relative to using EDA and video features alone.

Citations: 0
Personalizing Mobile Fitness Apps using Reinforcement Learning.
Pub Date : 2018-03-07
Mo Zhou, Yonatan Mintz, Yoshimi Fukuoka, Ken Goldberg, Elena Flowers, Philip Kaminsky, Alejandro Castillejo, Anil Aswani

Despite the vast number of mobile fitness applications (apps) and their potential advantages in promoting physical activity, many existing apps lack behavior-change features and are not able to maintain behavior-change motivation. This paper describes a novel fitness app called CalFit, which implements important behavior-change features like dynamic goal setting and self-monitoring. CalFit uses a reinforcement learning algorithm to generate personalized daily step goals that are challenging but attainable. We conducted the Mobile Student Activity Reinforcement (mSTAR) study with 13 college students to evaluate the efficacy of the CalFit app. The control group (receiving goals of 10,000 steps/day) had a decrease in daily step count of 1,520 (SD ± 740) between baseline and 10 weeks, compared to an increase of 700 (SD ± 830) in the intervention group (receiving personalized step goals). The difference in daily steps between the two groups was 2,220, with a statistically significant p = 0.039.
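The idea of goals that are "challenging but attainable" can be sketched as a simple multiplicative goal update (an editorial illustration only; CalFit's actual algorithm is a reinforcement-learning policy fit to a behavioral model, and the function name, factors, and bounds below are assumptions):

```python
def update_step_goal(goal, steps_achieved, up=1.1, down=0.9,
                     floor=2000, ceiling=15000):
    """Adapt tomorrow's step goal to the user's recent performance.

    Raise the goal after a success, lower it after a miss, and clip the
    result to sane bounds so the goal stays attainable but non-trivial.
    """
    new_goal = goal * (up if steps_achieved >= goal else down)
    return int(min(max(new_goal, floor), ceiling))
```

A reinforcement-learning approach replaces the fixed `up`/`down` factors with a policy that weighs how likely the user is to meet each candidate goal, but the feedback loop — observe steps, adjust goal — is the same.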

Citations: 0