
26th International Conference on Intelligent User Interfaces - Companion: Latest Publications

Stress Detection by Machine Learning and Wearable Sensors
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450732
Prerna Garg, Jayasankar Santhosh, A. Dengel, Shoya Ishimaru
Mental states like stress, depression, and anxiety have become a huge problem in our modern society. The main objective of this work is to detect stress among people, using Machine Learning approaches with the final aim of improving their quality of life. We propose various Machine Learning models for the detection of stress on individuals using a publicly available multimodal dataset, WESAD. Sensor data including electrocardiogram (ECG), body temperature (TEMP), respiration (RESP), electromyogram (EMG), and electrodermal activity (EDA) are taken for three physiological conditions - neutral (baseline), stress and amusement. The F1-score and accuracy for three-class (amusement vs. baseline vs. stress) and binary (stress vs. non-stress) classifications were computed and compared using machine learning techniques like k-NN, Linear Discriminant Analysis, Random Forest, AdaBoost, and Support Vector Machine. For both binary classification and three-class classification, the Random Forest model outperformed other models with F1-scores of 83.34 and 65.73 respectively.
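As a rough illustration of this kind of pipeline (not the authors' code), the sketch below trains a binary stress vs. non-stress Random Forest on windowed statistics of physiological signals; the window size, feature choice, and placeholder data are assumptions for illustration only.

```python
# Hedged sketch of a WESAD-style stress classifier; the feature extraction,
# window size (assuming roughly 700 Hz chest signals) and placeholder data
# are illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, accuracy_score

def window_features(signal, window=700, step=350):
    """Simple per-window statistics (mean/std/min/max) of a 1-D sensor signal."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)

# X would normally stack window_features() of ECG, TEMP, RESP, EMG and EDA;
# y marks each window as stress (1) or non-stress (0). Placeholder data here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"F1: {f1_score(y_te, pred):.3f}  accuracy: {accuracy_score(y_te, pred):.3f}")
```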
Citations: 34
ARCoD: An Augmented Reality Serious Game to Identify Cognitive Distortion
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450723
Rifat Ara Tasnim, Farjana Z. Eishita
The widespread presence of mental disorders is increasing at an alarming rate around the globe. According to the World Health Organization (WHO), mental health circumstances have worsened all over the world due to the COVID-19 pandemic. In spite of the existence of effective psychotherapy strategies, a significant percentage of individuals do not get access to mental healthcare facilities. Under these circumstances, technologies such as Augmented Reality (AR), together with their availability in handheld devices, open an expansive opportunity to apply these capabilities to mental health treatment via digital gaming. In this paper, we propose a serious game embedding smart Augmented Reality (AR) technology to identify the Cognitive Distortions of the individual playing the game. A comprehensive analysis of the clinical impact of AR gaming on mental health treatment will then be conducted, followed by an evaluation of Player Experience (PX).
Citations: 1
User-Controlled Content Translation in Social Media
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450714
A. Gupta
As it has become increasingly common for social network users to write and view posts in languages other than English, most social networks now provide machine translations to allow posts to be read by an audience beyond native speakers. However, authors typically cannot view the translations of their posts and have little control over these translations. To address this issue, I am developing a prototype that will provide authors with transparency of, and more personalized control over, the translation of their posts.
Citations: 2
TExSS: Transparency and Explanations in Smart Systems
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450705
Alison Smith-Renner, Styliani Kleanthous Loizou, Jonathan Dodge, Casey Dugan, Min Kyung Lee, Brian Y. Lim, T. Kuflik, Advait Sarkar, Avital Shulner-Tal, S. Stumpf
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources in order to support human decision-making and/or take direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest as means to provide more effective system training, better reliability, and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system's inner workings, such as awareness, data provenance, and validation.
Citations: 1
Over-sketching Operation to Realize Geometrical and Topological Editing across Multiple Objects in Sketch-based CAD Interface
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450735
Tomohiko Ito, Teruyoshi Kaneko, Yoshiki Tanaka, S. Saga
We developed a new general-purpose sketch-based interface for use in two-dimensional computer-aided design (CAD) systems. In this interface, a sketch-based editing operation is used to modify the geometry and topology of multiple geometric objects via over-sketching. The interface was developed by inheriting a fuzzy logic-based strategy of the existing sketch-based interface SKIT (SKetch Input Tracer). Using this interface, a user can make drawings in a creative manner; e.g., they can start with a rough sketch and progressively achieve a detailed design while repeating the over-sketches.
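To make the fuzzy-logic idea concrete (an illustrative sketch under assumed definitions, not the SKIT strategy itself), the snippet below scores how strongly a new stroke should be interpreted as an over-sketch of an existing curve, using a smooth membership function of the average stroke-to-curve distance; the distance scale is a hypothetical tuning constant.

```python
# Hedged sketch: a fuzzy membership score for "this stroke over-sketches that
# curve", based on average nearest-point distance. The 'near' scale (pixels)
# is a hypothetical tuning constant, not a value from the paper.
import numpy as np

def oversketch_membership(stroke, curve, near=10.0):
    """Return a value in [0, 1]; values near 1 suggest the stroke edits 'curve'."""
    # Distance from each stroke point to its closest curve point.
    dists = np.linalg.norm(stroke[:, None, :] - curve[None, :, :], axis=2).min(axis=1)
    avg = dists.mean()
    return float(1.0 / (1.0 + (avg / near) ** 2))  # smooth fuzzy falloff

# Example: a jittered copy of the curve scores close to 1 (treated as an edit),
# while a distant stroke scores close to 0 (treated as a new object).
curve = np.column_stack([np.linspace(0, 100, 50), np.zeros(50)])
near_stroke = curve + np.random.normal(0, 2.0, curve.shape)
far_stroke = curve + np.array([0.0, 80.0])
print(oversketch_membership(near_stroke, curve), oversketch_membership(far_stroke, curve))
```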
Citations: 1
KUMITRON: Artificial Intelligence System to Monitor Karate Fights that Synchronize Aerial Images with Physiological and Inertial Signals
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450730
J. Echeverria, O. Santos
New technologies make it possible to develop tools that allow more efficient and personalized interaction in unsuspected areas such as martial arts. From the point of view of modelling human movement in relation to the learning of complex motor skills, martial arts are of interest because they are articulated around a system of movements that are predefined (or at least bounded) and governed by the laws of physics. Their execution must be learned through continuous practice over time. Artificial Intelligence algorithms can be used to obtain motion patterns, which can then be used to compare a learner's practice against the execution of an expert, as well as to analyse its temporal evolution during learning. In this paper we introduce KUMITRON, which collects motion data from wearable sensors and integrates computer vision and machine learning algorithms to help karate practitioners improve their skills in combat. The current version focuses on using computer vision algorithms to identify the anticipation of the opponent's movements. This information is computed in real time and can be communicated to the learner together with a recommendation of the type of strategy to use in the combat.
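As a hedged illustration only (the abstract does not disclose the exact algorithms), the sketch below uses dense optical flow over a video feed as a crude cue for detecting the onset of an opponent's movement; the video source and the motion threshold are hypothetical.

```python
# Hedged sketch: dense optical flow as a crude movement-anticipation cue.
# The video source, threshold and frame handling are illustrative assumptions,
# not the KUMITRON implementation.
import cv2
import numpy as np

cap = cv2.VideoCapture("kumite.mp4")   # hypothetical aerial video of a bout
MOTION_THRESHOLD = 2.0                 # hypothetical mean-flow threshold (px/frame)

ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_motion = float(np.linalg.norm(flow, axis=2).mean())
    if mean_motion > MOTION_THRESHOLD:
        print(f"possible movement onset, mean flow = {mean_motion:.2f}")
    prev_gray = gray

cap.release()
```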
Citations: 7
XNLP: A Living Survey for XAI Research in Natural Language Processing
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450728
Kun Qian, Marina Danilevsky, Yannis Katsis, B. Kawas, Erick Oduor, Lucian Popa, Yunyao Li
We present XNLP: an interactive browser-based system embodying a living survey of recent state-of-the-art research in the field of Explainable AI (XAI) within the domain of Natural Language Processing (NLP). The system visually organizes and illustrates XAI-NLP publications and distills their content to allow users to gain insights, generate ideas, and explore the field. We hope that XNLP can become a leading demonstrative example of a living survey, balancing the depth and quality of a traditional well-constructed survey paper with the collaborative dynamism of a widely available interactive tool. XNLP can be accessed at: https://xainlp2020.github.io/xainlp.
Citations: 15
Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3453994
B. Shneiderman
This 3-hour tutorial proposes a new synthesis, in which Artificial Intelligence (AI) algorithms are combined with human-centered thinking to make Human-Centered AI (HCAI). This approach combines research on AI algorithms with user experience design methods to shape technologies that amplify, augment, empower, and enhance human performance. Researchers and developers for HCAI systems value meaningful human control, putting people first by serving human needs, values, and goals.
Citations: 4
TIEVis: a Visual Analytics Dashboard for Temporal Information Extracted from Clinical Reports
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450731
Robin De Croon, A. Leeuwenberg, J. Aerts, Marie-Francine Moens, Vero Vanden Abeele, K. Verbert
Clinical reports, as unstructured texts, contain important temporal information. However, it remains a challenge for natural language processing (NLP) models to accurately combine temporal cues into a single coherent temporal ordering of described events. In this paper, we present TIEVis, a visual analytics dashboard that visualizes event-timelines extracted from clinical reports. We present the findings of a pilot study in which healthcare professionals explored and used the dashboard to complete a set of tasks. Results highlight the importance of seeing events in their context, and the ability to manually verify and update critical events in a patient history, as a basis to increase user trust.
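As a minimal, hedged illustration of the ordering step only (not the TIEVis pipeline or its NLP models), the sketch below turns hypothetical pairwise BEFORE relations extracted from a report into a single timeline by topological sorting.

```python
# Hedged sketch: ordering extracted clinical events from pairwise BEFORE
# relations with a topological sort. Events and relations are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

# (earlier_event, later_event) pairs, as a temporal relation extractor might emit.
before = [
    ("admission", "chest x-ray"),
    ("chest x-ray", "antibiotics started"),
    ("admission", "antibiotics started"),
    ("antibiotics started", "discharge"),
]

ts = TopologicalSorter()
for earlier, later in before:
    ts.add(later, earlier)  # 'later' can only appear once 'earlier' is placed

timeline = list(ts.static_order())
print(" -> ".join(timeline))
# admission -> chest x-ray -> antibiotics started -> discharge
```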
Citations: 1
OYaYa: A Desktop Robot Enabling Multimodal Interaction with Emotions
Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450729
Yucheng Jin, Yu Deng, Jiangtao Gong, Xi Wan, Ge Gao, Qianying Wang
We demonstrate a desktop robot, OYaYa, that imitates users' emotional facial expressions and helps users manage emotions. Multiple on-board sensors enable multimodal interaction; for example, OYaYa recognizes users' emotions from facial expressions and speech. In addition, a dashboard illustrates how users interact with OYaYa and how their emotions change. We expect that OYaYa allows users to manage their emotions in a fun way.
Citations: 1