Stress Detection by Machine Learning and Wearable Sensors
Prerna Garg, Jayasankar Santhosh, A. Dengel, Shoya Ishimaru
DOI: 10.1145/3397482.3450732
Mental states like stress, depression, and anxiety have become a serious problem in modern society. The main objective of this work is to detect stress in individuals using machine learning approaches, with the ultimate aim of improving their quality of life. We propose various machine learning models for the detection of stress in individuals using a publicly available multimodal dataset, WESAD. Sensor data including electrocardiogram (ECG), body temperature (TEMP), respiration (RESP), electromyogram (EMG), and electrodermal activity (EDA) are taken for three physiological conditions: neutral (baseline), stress, and amusement. The F1-score and accuracy for three-class (amusement vs. baseline vs. stress) and binary (stress vs. non-stress) classifications were computed and compared using machine learning techniques such as k-NN, Linear Discriminant Analysis, Random Forest, AdaBoost, and Support Vector Machine. For both binary and three-class classification, the Random Forest model outperformed the other models, with F1-scores of 83.34 and 65.73, respectively.
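Below is a minimal sketch, in scikit-learn, of the kind of model comparison the abstract describes. The feature matrix and labels are random placeholders; the paper's actual WESAD preprocessing and feature extraction are not reproduced, so only the comparison scaffolding is meaningful.

```python
# Compare the five classifiers named in the abstract under 5-fold
# cross-validation, reporting accuracy and macro F1. X and y are
# placeholders; real use would window the WESAD sensor channels
# (ECG, TEMP, RESP, EMG, EDA) and compute per-window features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))    # placeholder per-window features
y = rng.integers(0, 3, size=600)  # 0 = baseline, 1 = stress, 2 = amusement
# For the binary task (stress vs. non-stress): y = (y == 1).astype(int)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale, then classify
    scores = cross_validate(pipe, X, y, cv=5, scoring=("accuracy", "f1_macro"))
    print(f"{name:13s} acc={scores['test_accuracy'].mean():.3f} "
          f"f1={scores['test_f1_macro'].mean():.3f}")
```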
{"title":"Stress Detection by Machine Learning and Wearable Sensors","authors":"Prerna Garg, Jayasankar Santhosh, A. Dengel, Shoya Ishimaru","doi":"10.1145/3397482.3450732","DOIUrl":"https://doi.org/10.1145/3397482.3450732","url":null,"abstract":"Mental states like stress, depression, and anxiety have become a huge problem in our modern society. The main objective of this work is to detect stress among people, using Machine Learning approaches with the final aim of improving their quality of life. We propose various Machine Learning models for the detection of stress on individuals using a publicly available multimodal dataset, WESAD. Sensor data including electrocardiogram (ECG), body temperature (TEMP), respiration (RESP), electromyogram (EMG), and electrodermal activity (EDA) are taken for three physiological conditions - neutral (baseline), stress and amusement. The F1-score and accuracy for three-class (amusement vs. baseline vs. stress) and binary (stress vs. non-stress) classifications were computed and compared using machine learning techniques like k-NN, Linear Discriminant Analysis, Random Forest, AdaBoost, and Support Vector Machine. For both binary classification and three-class classification, the Random Forest model outperformed other models with F1-scores of 83.34 and 65.73 respectively.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130323623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ARCoD: An Augmented Reality Serious Game to Identify Cognitive Distortion
Rifat Ara Tasnim, Farjana Z. Eishita
DOI: 10.1145/3397482.3450723
The prevalence of mental disorders is increasing at an alarming rate around the globe. According to the World Health Organization (WHO), mental health conditions have worsened worldwide due to the COVID-19 pandemic. Despite the existence of effective psychotherapy strategies, a significant percentage of individuals lack access to mental healthcare facilities. Under these circumstances, Augmented Reality (AR) technology, now widely available on handheld devices, opens an expansive opportunity for mental health treatment via digital gaming. In this paper, we propose a serious game that embeds AR technology to identify the cognitive distortions of the individual playing it. In future work, a comprehensive analysis of the clinical impact of AR gaming on mental health treatment will be conducted, followed by an evaluation of Player Experience (PX).
{"title":"ARCoD: An Augmented Reality Serious Game to Identify Cognitive Distortion","authors":"Rifat Ara Tasnim, Farjana Z. Eishita","doi":"10.1145/3397482.3450723","DOIUrl":"https://doi.org/10.1145/3397482.3450723","url":null,"abstract":"The widespread presence of mental disorders is increasing at an alarming rate around the globe. According to World Health Organization (WHO), mental health circumstances have worsened all over the world due to the COVID-19 pandemic. In spite of the existence of effective psychotherapy strategies, a significant percentage of individuals do not get access to mental healthcare facilities. Under these circumstances, technologies such as Augmented Reality (AR) and its availability in handheld devices can unveil an expansive opportunity to utilize these features in fields of mental health treatment via digital gaming. In this paper, we have proposed a serious game embedding smart Augmented Reality (AR) technology to identify the Cognitive Distortions of the individual playing the game. Later, a comprehensive analysis of clinical impact of the AR gaming on mental health treatment will be conducted followed by evaluation of Player Experience (PX).","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"281 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131426192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User-Controlled Content Translation in Social Media
A. Gupta
DOI: 10.1145/3397482.3450714
As it has become increasingly common for social network users to write and view posts in languages other than English, most social networks now provide machine translations to allow posts to be read by an audience beyond native speakers of the original language. However, authors typically cannot view the translations of their posts and have little control over these translations. To address this issue, I am developing a prototype that will provide authors with transparency into, and more personalized control over, the translation of their posts.
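The abstract does not describe the prototype's design, but the core idea can be illustrated with a small, hypothetical sketch: the author previews what readers would see and can pin a corrected translation, which is then served instead of the raw machine-translation output. All names below (machine_translate, Post, the override store) are invented for illustration.

```python
# Hypothetical sketch of author-controlled translation: a per-post store of
# author-approved translations that takes precedence over raw MT output.
from dataclasses import dataclass, field

def machine_translate(text: str, target_lang: str) -> str:
    """Stand-in for any real machine-translation backend."""
    return f"[{target_lang} machine translation of: {text}]"

@dataclass
class Post:
    text: str
    author_overrides: dict = field(default_factory=dict)

    def preview_translation(self, lang: str) -> str:
        """Transparency: show the author what readers in `lang` would see."""
        return self.author_overrides.get(lang) or machine_translate(self.text, lang)

    def set_translation(self, lang: str, corrected: str) -> None:
        """Control: the author pins a corrected translation for `lang`."""
        self.author_overrides[lang] = corrected

post = Post("Guten Morgen, Freunde!")
print(post.preview_translation("en"))  # raw MT, visible to the author
post.set_translation("en", "Good morning, friends!")
print(post.preview_translation("en"))  # author-approved version served instead
```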
{"title":"User-Controlled Content Translation in Social Media","authors":"A. Gupta","doi":"10.1145/3397482.3450714","DOIUrl":"https://doi.org/10.1145/3397482.3450714","url":null,"abstract":"As it has become increasingly common for social network users to write and view post in languages other than English, most social networks now provide machine translations to allow posts to be read by an audience beyond native speakers. However, authors typically cannot view the translations of their posts and have little control over these translations. To address this issue, I am developing a prototype that will provide authors with transparency of and more personalized control over the translation of their posts.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115531730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TExSS: Transparency and Explanations in Smart Systems
Alison Smith-Renner, Styliani Kleanthous Loizou, Jonathan Dodge, Casey Dugan, Min Kyung Lee, Brian Y. Lim, T. Kuflik, Advait Sarkar, Avital Shulner-Tal, S. Stumpf
DOI: 10.1145/3397482.3450705
Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms exploit rich and varied data sources to support human decision-making and/or take direct action; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest as means to more effective system training, better reliability, and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigating algorithmic biases that can be applied by researchers even without access to a given system's inner workings, such as awareness, data provenance, and validation.
{"title":"TExSS: Transparency and Explanations in Smart Systems","authors":"Alison Smith-Renner, Styliani Kleanthous Loizou, Jonathan Dodge, Casey Dugan, Min Kyung Lee, Brian Y. Lim, T. Kuflik, Advait Sarkar, Avital Shulner-Tal, S. Stumpf","doi":"10.1145/3397482.3450705","DOIUrl":"https://doi.org/10.1145/3397482.3450705","url":null,"abstract":"Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources, in order to support human decision-making and/or taking direct actions; however, there are increasing concerns surrounding their transparency and accountability, as these processes are typically opaque to the user. Transparency and accountability have attracted increasing interest to provide more effective system training, better reliability and improved usability. This workshop provides a venue for exploring issues that arise in designing, developing and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, we focus on approaches to mitigate algorithmic biases that can be applied by researchers, even without access to a given system’s inter-workings, such as awareness, data provenance, and validation.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131966629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over-sketching Operation to Realize Geometrical and Topological Editing across Multiple Objects in Sketch-based CAD Interface
Tomohiko Ito, Teruyoshi Kaneko, Yoshiki Tanaka, S. Saga
DOI: 10.1145/3397482.3450735
We developed a new general-purpose sketch-based interface for two-dimensional computer-aided design (CAD) systems. In this interface, a sketch-based editing operation modifies the geometry and topology of multiple geometric objects via over-sketching. The interface inherits the fuzzy logic-based strategy of the existing sketch-based interface SKIT (SKetch Input Tracer). Using this interface, a user can draw in a creative manner, e.g., starting with a rough sketch and progressively refining it into a detailed design through repeated over-sketching.
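The abstract does not spell out SKIT's fuzzy rules, but the flavor of a fuzzy-logic matching step can be shown with a toy sketch: each existing object receives a fuzzy "nearness" membership for an over-sketched stroke, and the stroke is interpreted as an edit of the best-scoring object only if its membership clears a threshold. The membership function, scale, and threshold below are invented for illustration.

```python
# Toy fuzzy matching of an over-sketched stroke to existing line segments.
import math

def fuzzy_near(distance: float, scale: float = 20.0) -> float:
    """Membership in the fuzzy set 'near': 1 at distance 0, decaying smoothly."""
    return math.exp(-(distance / scale) ** 2)

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab (all 2-tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def match_oversketch(stroke, segments, threshold=0.5):
    """Return the segment the stroke most plausibly edits, or None for a new object."""
    def score(seg):
        return sum(fuzzy_near(point_segment_distance(p, *seg)) for p in stroke) / len(stroke)
    best = max(segments, key=score)
    return best if score(best) >= threshold else None

stroke = [(10, 1), (30, -2), (50, 2)]                # hugs the first segment
segments = [((0, 0), (60, 0)), ((0, 40), (60, 40))]
print(match_oversketch(stroke, segments))            # ((0, 0), (60, 0))
```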
{"title":"Over-sketching Operation to Realize Geometrical and Topological Editing across Multiple Objects in Sketch-based CAD Interface","authors":"Tomohiko Ito, Teruyoshi Kaneko, Yoshiki Tanaka, S. Saga","doi":"10.1145/3397482.3450735","DOIUrl":"https://doi.org/10.1145/3397482.3450735","url":null,"abstract":"We developed a new general-purpose sketch-based interface for use in two-dimensional computer-aided design (CAD) systems. In this interface, a sketch-based editing operation is used to modify the geometry and topology of multiple geometric objects via over-sketching. The interface was developed by inheriting a fuzzy logic-based strategy of the existing sketch-based interface SKIT (SKetch Input Tracer). Using this interface, a user can make drawings in a creative manner; e.g., they can start with a rough sketch and progressively achieve a detailed design while repeating the over-sketches.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"71 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114034608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
KUMITRON: Artificial Intelligence System to Monitor Karate Fights that Synchronize Aerial Images with Physiological and Inertial Signals
J. Echeverria, O. Santos
DOI: 10.1145/3397482.3450730
New technologies make it possible to develop tools that allow more efficient and personalized interaction in unexpected areas such as martial arts. From the perspective of modelling human movement in the learning of complex motor skills, martial arts are of interest because they are articulated around a system of movements that are predefined (or at least bounded) and governed by the laws of physics. Their execution must be learned through continuous practice over time. Artificial Intelligence algorithms can extract motion patterns that can be used to compare a learner's practice against the execution of an expert, as well as to analyse its temporal evolution during learning. In this paper we introduce KUMITRON, which collects motion data from wearable sensors and integrates computer vision and machine learning algorithms to help karate practitioners improve their combat skills. The current version focuses on using the computer vision algorithms to anticipate the opponent's movements. This information is computed in real time and can be communicated to the learner together with a recommendation of the strategy to use in the combat.
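The abstract does not name the pattern-comparison algorithm, so the sketch below uses dynamic time warping (DTW), a common way to score how closely one motion time series follows another despite timing differences, purely as an illustrative stand-in on synthetic 1-D inertial traces.

```python
# Score a learner's motion trace against an expert's with classic DTW.
# The sine traces are synthetic placeholders for real inertial-sensor data.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """O(len(a) * len(b)) dynamic time warping with |x - y| local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

t = np.linspace(0.0, 1.0, 100)
expert = np.sin(2 * np.pi * t)                  # expert's trace
learner = 0.9 * np.sin(2 * np.pi * t ** 1.3)    # same move, mistimed and weaker
print(f"DTW distance to expert: {dtw_distance(expert, learner):.2f}")
```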
{"title":"KUMITRON: Artificial Intelligence System to Monitor Karate Fights that Synchronize Aerial Images with Physiological and Inertial Signals","authors":"J. Echeverria, O. Santos","doi":"10.1145/3397482.3450730","DOIUrl":"https://doi.org/10.1145/3397482.3450730","url":null,"abstract":"New technologies make it possible to develop tools that allow more efficient and personalized interaction in unsuspected areas such as martial arts. From the point of view of the modelling of human movement in relation to the learning of complex motor skills, martial arts are of interest because they are articulated around a system of movements that are predefined -or at least, bounded- and governed by the Laws of Physics. Their execution must be learned after continuous practice over time. Artificial Intelligence algorithms can be used to obtain motion patterns that can be used to compare a learners’ practice against the execution of an expert, as well as to analyse its temporal evolution during learning. In this paper we introduce KUMITRON, which collects motion data from wearable sensors and integrates computer vision and machine learning algorithms to help karate practitioners improve their skills in combat. The current version focuses on using the computer vision algorithms to identify the anticipation of the opponent's movements. This information is computed in real time and can be communicated to the learner together with a recommendation of the type of strategy to use in the combat.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"433 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116997457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
XNLP: A Living Survey for XAI Research in Natural Language Processing
Kun Qian, Marina Danilevsky, Yannis Katsis, B. Kawas, Erick Oduor, Lucian Popa, Yunyao Li
DOI: 10.1145/3397482.3450728
We present XNLP: an interactive browser-based system embodying a living survey of recent state-of-the-art research in Explainable AI (XAI) within the domain of Natural Language Processing (NLP). The system visually organizes and illustrates XAI-NLP publications and distills their content to allow users to gain insights, generate ideas, and explore the field. We hope that XNLP can become a leading example of a living survey, balancing the depth and quality of a traditional well-constructed survey paper with the collaborative dynamism of a widely available interactive tool. XNLP can be accessed at: https://xainlp2020.github.io/xainlp.
{"title":"XNLP: A Living Survey for XAI Research in Natural Language Processing","authors":"Kun Qian, Marina Danilevsky, Yannis Katsis, B. Kawas, Erick Oduor, Lucian Popa, Yunyao Li","doi":"10.1145/3397482.3450728","DOIUrl":"https://doi.org/10.1145/3397482.3450728","url":null,"abstract":"We present XNLP: an interactive browser-based system embodying a living survey of recent state-of-the-art research in the field of Explainable AI (XAI) within the domain of Natural Language Processing (NLP). The system visually organizes and illustrates XAI-NLP publications and distills their content to allow users to gain insights, generate ideas, and explore the field. We hope that XNLP can become a leading demonstrative example of a living survey, balancing the depth and quality of a traditional well-constructed survey paper with the collaborative dynamism of a widely available interactive tool. XNLP can be accessed at: https://xainlp2020.github.io/xainlp.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"6 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123455824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy
B. Shneiderman
DOI: 10.1145/3397482.3453994
This 3-hour tutorial proposes a new synthesis in which Artificial Intelligence (AI) algorithms are combined with human-centered thinking to make Human-Centered AI (HCAI). This approach combines research on AI algorithms with user experience design methods to shape technologies that amplify, augment, empower, and enhance human performance. Researchers and developers of HCAI systems value meaningful human control, putting people first by serving human needs, values, and goals.
{"title":"Tutorial: Human-Centered AI: Reliable, Safe and Trustworthy","authors":"B. Shneiderman","doi":"10.1145/3397482.3453994","DOIUrl":"https://doi.org/10.1145/3397482.3453994","url":null,"abstract":"This 3-hour tutorial proposes a new synthesis, in which Artificial Intelligence (AI) algorithms are combined with human-centered thinking to make Human-Centered AI (HCAI). This approach combines research on AI algorithms with user experience design methods to shape technologies that amplify, augment, empower, and enhance human performance. Researchers and developers for HCAI systems value meaningful human control, putting people first by serving human needs, values, and goals.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133490226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TIEVis: a Visual Analytics Dashboard for Temporal Information Extracted from Clinical Reports
Robin De Croon, A. Leeuwenberg, J. Aerts, Marie-Francine Moens, Vero Vanden Abeele, K. Verbert
DOI: 10.1145/3397482.3450731
Clinical reports, as unstructured texts, contain important temporal information. However, it remains a challenge for natural language processing (NLP) models to accurately combine temporal cues into a single coherent temporal ordering of the described events. In this paper, we present TIEVis, a visual analytics dashboard that visualizes event timelines extracted from clinical reports. We present the findings of a pilot study in which healthcare professionals explored and used the dashboard to complete a set of tasks. Results highlight the importance of seeing events in their context, and of being able to manually verify and update critical events in a patient history, as a basis for increasing user trust.
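The extraction model itself is not described in the abstract, but one standard way to turn pairwise temporal relations into the single coherent ordering a timeline needs is a topological sort over BEFORE edges, sketched below on invented clinical events.

```python
# Order extracted events by their pairwise BEFORE relations. A CycleError
# would signal mutually inconsistent cues, exactly the kind of conflict a
# clinician would need to verify and fix manually in the dashboard.
from graphlib import TopologicalSorter

# Each event maps to the set of events that must precede it.
before = {
    "admission": set(),
    "CT scan": {"admission"},
    "antibiotics started": {"admission"},
    "surgery": {"CT scan"},
    "discharge": {"surgery", "antibiotics started"},
}

timeline = list(TopologicalSorter(before).static_order())
print(" -> ".join(timeline))
# e.g. admission -> CT scan -> antibiotics started -> surgery -> discharge
```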
{"title":"TIEVis: a Visual Analytics Dashboard for Temporal Information Extracted from Clinical Reports","authors":"Robin De Croon, A. Leeuwenberg, J. Aerts, Marie-Francine Moens, Vero Vanden Abeele, K. Verbert","doi":"10.1145/3397482.3450731","DOIUrl":"https://doi.org/10.1145/3397482.3450731","url":null,"abstract":"Clinical reports, as unstructured texts, contain important temporal information. However, it remains a challenge for natural language processing (NLP) models to accurately combine temporal cues into a single coherent temporal ordering of described events. In this paper, we present TIEVis, a visual analytics dashboard that visualizes event-timelines extracted from clinical reports. We present the findings of a pilot study in which healthcare professionals explored and used the dashboard to complete a set of tasks. Results highlight the importance of seeing events in their context, and the ability to manually verify and update critical events in a patient history, as a basis to increase user trust.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114338278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OYaYa: A Desktop Robot Enabling Multimodal Interaction with Emotions
Yucheng Jin, Yu Deng, Jiangtao Gong, Xi Wan, Ge Gao, Qianying Wang
DOI: 10.1145/3397482.3450729
We demonstrate OYaYa, a desktop robot that imitates users' emotional facial expressions and helps users manage their emotions. OYaYa's multiple onboard sensors enable multimodal interaction; for example, it recognizes users' emotions from facial expressions and speech. In addition, a dashboard illustrates how users interact with OYaYa and how their emotions change. We expect that OYaYa allows users to manage their emotions in a fun way.
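The abstract leaves the recognition models and actuation unspecified; the sketch below only illustrates the overall recognize-log-imitate loop, with a stubbed recognizer and an invented face mapping standing in for the robot's real components.

```python
# Hypothetical recognize-log-imitate loop for an emotion-imitating robot.
import random
import time

EMOTIONS = ("happy", "sad", "angry", "surprised", "neutral")

def recognize_emotion(camera_frame, audio_chunk) -> str:
    """Stub for multimodal (face + speech) emotion recognition."""
    return random.choice(EMOTIONS)

FACE_FOR = {"happy": ":-)", "sad": ":-(", "angry": ">:-(",
            "surprised": ":-O", "neutral": ":-|"}

interaction_log = []  # would feed the dashboard of interactions and emotion changes

for _ in range(5):
    emotion = recognize_emotion(camera_frame=None, audio_chunk=None)
    interaction_log.append((time.time(), emotion))
    print(f"user looks {emotion}; robot imitates with {FACE_FOR[emotion]}")
```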
{"title":"OYaYa: A Desktop Robot Enabling Multimodal Interaction with Emotions","authors":"Yucheng Jin, Yu Deng, Jiangtao Gong, Xi Wan, Ge Gao, Qianying Wang","doi":"10.1145/3397482.3450729","DOIUrl":"https://doi.org/10.1145/3397482.3450729","url":null,"abstract":"We demonstrate a desktop robot OYaYa that imitates users’ emotional facial expressions and helps users manage emotions. Multiple equipped sensors in OYaYa enable multimodal interaction; for example, it recognizes users’ emotions from facial expressions and speeches. Besides, a dashboard illustrates how users interact with OYaYa and how their emotions change. We expect that OYaYa allows users to manage their emotions in a fun way.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134313536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}