Fine-grained Fidgety Movement Classification using Active Learning

Romero Morais, Truyen Tran, Caroline Alexander, Natasha Amery, Catherine Morgan, Alicia Spittle, Vuong Le, Nadia Badawi, Alison Salt, Jane Valentine, Catherine Elliott, Elizabeth M Hurrion, Paul A Dawson, Svetha Venkatesh
{"title":"利用主动学习进行细粒度浮躁动作分类","authors":"Romero Morais, Truyen Tran, Caroline Alexander, Natasha Amery, Catherine Morgan, Alicia Spittle, Vuong Le, Nadia Badawi, Alison Salt, Jane Valentine, Catherine Elliott, Elizabeth M Hurrion, Paul A Dawson, Svetha Venkatesh","doi":"10.1109/JBHI.2024.3473947","DOIUrl":null,"url":null,"abstract":"<p><p>Typically developing infants, between the corrected age of 9-20 weeks, produce fidgety movements. These movements can be identified with the General Movement Assessment, but their identification requires trained professionals to conduct the assessment from video recordings. Since trained professionals are expensive and their demand may be higher than their availability, computer vision-based solutions have been developed to assist practitioners. However, most solutions to date treat the problem as a direct mapping from video to infant status, without modeling fidgety movements throughout the video. To address that, we propose to directly model infants' short movements and classify them as fidgety or non-fidgety. In this way, we model the explanatory factor behind the infant's status and improve model interpretability. The issue with our proposal is that labels for an infant's short movements are not available, which precludes us to train such a model. We overcome this issue with active learning. Active learning is a framework that minimizes the amount of labeled data required to train a model, by only labeling examples that are considered \"informative\" to the model. The assumption is that a model trained on informative examples reaches a higher performance level than a model trained with randomly selected examples. We validate our framework by modeling the movements of infants' hips on two representative cohorts: typically developing and at-risk infants. Our results show that active learning is suitable to our problem and that it works adequately even when the models are trained with labels provided by a novice annotator.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7000,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fine-grained Fidgety Movement Classification using Active Learning.\",\"authors\":\"Romero Morais, Truyen Tran, Caroline Alexander, Natasha Amery, Catherine Morgan, Alicia Spittle, Vuong Le, Nadia Badawi, Alison Salt, Jane Valentine, Catherine Elliott, Elizabeth M Hurrion, Paul A Dawson, Svetha Venkatesh\",\"doi\":\"10.1109/JBHI.2024.3473947\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Typically developing infants, between the corrected age of 9-20 weeks, produce fidgety movements. These movements can be identified with the General Movement Assessment, but their identification requires trained professionals to conduct the assessment from video recordings. Since trained professionals are expensive and their demand may be higher than their availability, computer vision-based solutions have been developed to assist practitioners. However, most solutions to date treat the problem as a direct mapping from video to infant status, without modeling fidgety movements throughout the video. To address that, we propose to directly model infants' short movements and classify them as fidgety or non-fidgety. In this way, we model the explanatory factor behind the infant's status and improve model interpretability. 
The issue with our proposal is that labels for an infant's short movements are not available, which precludes us to train such a model. We overcome this issue with active learning. Active learning is a framework that minimizes the amount of labeled data required to train a model, by only labeling examples that are considered \\\"informative\\\" to the model. The assumption is that a model trained on informative examples reaches a higher performance level than a model trained with randomly selected examples. We validate our framework by modeling the movements of infants' hips on two representative cohorts: typically developing and at-risk infants. Our results show that active learning is suitable to our problem and that it works adequately even when the models are trained with labels provided by a novice annotator.</p>\",\"PeriodicalId\":13073,\"journal\":{\"name\":\"IEEE Journal of Biomedical and Health Informatics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Biomedical and Health Informatics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1109/JBHI.2024.3473947\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Biomedical and Health Informatics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/JBHI.2024.3473947","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Typically developing infants, between the corrected ages of 9 and 20 weeks, produce fidgety movements. These movements can be identified with the General Movement Assessment, but their identification requires trained professionals to conduct the assessment from video recordings. Since trained professionals are expensive and demand for them may exceed their availability, computer vision-based solutions have been developed to assist practitioners. However, most solutions to date treat the problem as a direct mapping from video to infant status, without modeling fidgety movements throughout the video. To address this, we propose to directly model infants' short movements and classify them as fidgety or non-fidgety. In this way, we model the explanatory factor behind the infant's status and improve model interpretability. The issue with our proposal is that labels for an infant's short movements are not available, which prevents us from training such a model. We overcome this issue with active learning. Active learning is a framework that minimizes the amount of labeled data required to train a model, by only labeling examples that are considered "informative" to the model. The assumption is that a model trained on informative examples reaches a higher performance level than a model trained with randomly selected examples. We validate our framework by modeling the movements of infants' hips on two representative cohorts: typically developing and at-risk infants. Our results show that active learning is well suited to our problem and that it works adequately even when the models are trained with labels provided by a novice annotator.
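The abstract describes pool-based active learning in general terms: train on a small labeled set, let the model flag the unlabeled movement clips it is least certain about, and send only those to an annotator. The sketch below illustrates that loop with uncertainty sampling. It is not the authors' model: the logistic-regression classifier, the synthetic pose-style features, and the query budget are all placeholder assumptions for illustration only.

```python
# Minimal sketch of pool-based active learning with uncertainty sampling.
# NOT the authors' method: classifier, features, and budget are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pool of short movement clips, represented by pose-style feature
# vectors, with hidden binary labels (1 = fidgety, 0 = non-fidgety).
X_pool = rng.normal(size=(1000, 16))
y_pool = (X_pool[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

# Seed the labeled set with a few clips from each class.
labeled = list(np.flatnonzero(y_pool == 1)[:5]) + list(np.flatnonzero(y_pool == 0)[:5])
unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # query budget: 20 annotation requests
    model.fit(X_pool[labeled], y_pool[labeled])
    # Uncertainty sampling: request a label for the clip whose predicted
    # probability of being fidgety is closest to 0.5.
    probs = model.predict_proba(X_pool[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)  # a human annotator would supply y_pool[query] here
    unlabeled.remove(query)

print(f"clips labeled: {len(labeled)}")
```

In the paper's setting, the synthetic pool would correspond to features extracted from short hip-movement clips, and the hidden labels to a human annotator's fidgety/non-fidgety judgments.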
Journal description:
IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.