As the application of design and technology has become more interdisciplinary and integrated, the development of interactive service robots (ISRs), which are designed according to unique situational requirements, has emerged as a popular trend. Research has shown that if affective computing technologies and machine learning mechanisms are introduced to enhance interaction and feedback between ISRs and users, ISRs may be better aligned with both their service scenarios and the future development of innovative services. Within an interdisciplinary integration framework, this study combined the concepts and methodologies of design thinking, emotion detection technologies, and case-based reasoning (CBR), used a simulated interview as the empirical use case, and developed a prototype emotion-sensing robot (ESR) system for planning and testing emotion sensing. Three emotion detection and analysis indicators, namely the happiness index of facial expressions, the blink rate, and the semantic emotions conveyed by the text of the interviewee's resume, were proposed as the basis for analyzing emotional perception in this study. The experimental results were then used to analyze the effectiveness of the technologies as well as the value, utility, and affordance of the interactive interview bot system.
{"title":"Design Thinking for Developing a Case-based Reasoning Emotion-Sensing Robot for Interactive Interview","authors":"Sheng-Ming Wang, Wei-Min Cheng","doi":"10.1145/3391203.3391205","DOIUrl":"https://doi.org/10.1145/3391203.3391205","url":null,"abstract":"As the application of design and technology has become more interdisciplinary and integrated, the development of interactive service robots (ISRs), which are designed according to unique situational requirements, has emerged as a popular trend. Research has shown that if affective computing technologies and machine learning mechanisms can be introduced to enhance interaction and feedback between ISRs and users, ISRs may be better aligned with both the service scenarios and the future development of innovative services. Based on an interdisciplinary integration framework, this study combined the concept and methodologies of design thinking, emotion detection technologies, and case-based reasoning (CBR), based on the use case of a simulated interview for empirical research, and developed a prototype emotion-sensing robot (ESR) system for the planning and testing of emotion sensing. Three emotion detection and analysis indicators, namely, happiness index of facial expressions, blink rate, and semantic emotions conveyed by the text on their resume, were proposed as the basis for analyzing emotional perception in this study. The experimental results were then used to analyze the effectiveness of the technologies as well as the value, utility, and affordance of the interactive interview bot system.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"649 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116181926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, artificial intelligence (AI) finds applications in many domains, including artistic creation such as painting and music. However, there is a lack of know-how on involving AI in the development of food products. In this study, we analyzed a large number of Japanese newspaper articles and developed a chocolate that represents the mood of each year. Specifically, the mood was expressed by using machine learning to predict the taste associated with words. This paper describes how to create software that converts human sensations into tastes, how developers create products based on it, and how to evaluate the resulting products. As a result, we introduce a product developed with the support of artificial intelligence that stimulates consumers' curiosity and can lead developers to new discoveries and perspectives. In the future, we aim to create a system that supports collaborative recipe development involving AI, developers, and consumers.
{"title":"A case study of Food Production Using Artificial Intelligence","authors":"Takuya Sera, Sayaka Izukura, Izumi Hashimoto, Takashi Motegi, Yosuke Motohashi","doi":"10.1145/3391203.3391211","DOIUrl":"https://doi.org/10.1145/3391203.3391211","url":null,"abstract":"In these days, artificial intelligence (AI) finds applications in many different domains, including art creation such as painting and music. However, there is lack of know-how involving AI in developing food products. In this case, we analyzed a large number of Japanese newspaper articles and developed a chocolate that represents the mood of each year. Specifically, the mood was expressed by predicting the taste of words by machine learning. This paper describes how to create software that converts human sensation into taste, and how people develop products based on it, and how to evaluate the resulting product. In the result, we introduce a product developed with the support of artificial intelligence that has an effect of stimulating curiosity in consumers and can lead developers to new discoveries and perspectives. In the future, we aim to create a system that supports collaborative recipe development involving AI, developers, and consumers.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130912496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The issues of website localization and globalization, and the effects of cultural diversity on user-interface design, have been studied for some time, but most studies have focused on Western, American, and Southeast Asian websites, which are commonly portrayed from those regions' perspectives and cultures. This research therefore aims to define the user-interface design elements that are affected by cultural diversity through a pilot study of Arabic and Western government websites. The study identifies the differences between the government websites of the UK, France, Iraq, and the UAE in terms of their design features, and examines to what extent these features are affected by the cultural diversity of nations. The outcomes of this study contribute guidelines within a content framework for the Arabic website development process, especially towards improving the design of e-government websites.
{"title":"Localization and Globalization of Website Design: A pilot study focuses on comparison of government websites","authors":"M. Adnan, Wong Chung Wei, Masitah Ghazali","doi":"10.1145/3391203.3391212","DOIUrl":"https://doi.org/10.1145/3391203.3391212","url":null,"abstract":"The issues of localization, globalization of websites, and effects of cultural diversity in design of the user-interfaces have been around for a while, where most of the studies focused on Western, American, Southeast Asian which are commonly portrayed from their perspectives and cultures. Therefore, this research aims to define the design elements of user interfaces which are affected by the cultural diversity through a pilot study on Arabic and Western government websites. This study intends to identify the differences between the government websites of each of (UK, French, Iraq, UAE) in terms of the design features, as well as examine to what extent these features are affected by cultural diversity of nations. The outcomes of this study contribute as guidelines within a content framework that takes into consideration the process of Arabic website development, especially towards improving the e-government websites design.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122008031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper is a work-in-progress report on the development of DigiMo, a chatbot with emotional intelligence. The chatbot development is based on the collection and annotation of real dialogues between local Singaporeans expressing genuine emotions. The models were trained with cakechat, an open-source sequence-to-sequence deep neural network framework. Perplexity measurements from automatic testing, as well as feedback from six expert evaluators, confirmed that the chatbot's answers have high accuracy.
{"title":"DigiMo - towards developing an emotional intelligent chatbot in Singapore","authors":"Andreea Niculescu, Ivan Kukanov, Bimlesh Wadhwa","doi":"10.1145/3391203.3391210","DOIUrl":"https://doi.org/10.1145/3391203.3391210","url":null,"abstract":"The paper is a work in progress report on the development of DigiMo, a chatbot with emotional intelligence. The chatbot development is based on a data collection and annotations of real dialogues between local Singaporeans expressing genuine emotions. The models were trained with cakechat, an open source sequence-to-sequence deep neural network. Perplexity measurements from automatic testing, as well as feedback from 6 expert evaluators confirmed the chatbot answers have high accuracy.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123863940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The peer review process is an essential part not only of the CHI conference but also of many scientific research areas, determining which submitted papers should be accepted. To improve the peer review process, tools have been developed to support reviewers by reducing the number of papers they must review or by highlighting key sentences so they can read faster. However, only a few studies have investigated reviewers' perspectives on their tasks during the peer review process. In this paper, we conducted semi-structured interviews with reviewers who have experience reviewing papers submitted to the CHI conference. As a result, we better understand how paper-reviewing tasks are performed, which tasks reviewers find most challenging, and what reviewers need for a better peer review experience.
{"title":"Uncovering CHI Reviewers Needs and Barriers","authors":"Wanhae Lee, Minji Kwon, Yewon Hyun, Jihyun Lee, Joonho Gwon, Hyunggu Jung","doi":"10.1145/3391203.3391218","DOIUrl":"https://doi.org/10.1145/3391203.3391218","url":null,"abstract":"A peer review process is an essential part of not only in the CHI conference but also in many scientific research areas to determine which paper submitted should be accepted. In order to improve the peer review process, tools have been developed to support reviewers by reducing the paper they should review or by highlighting key sentences to read faster. However, only a few studies investigated reviewers' perspectives regarding their tasks during the peer review process. In this paper, we conducted semi-structured interviews with CHI reviewers who have experienced in reviewing paper submitted to the CHI conference. As a result, we better understand how paper-reviewing tasks are performed, which tasks reviewers felt most challenging, and reviewers' needs for a better peer review experience.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127056106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In designing an interactive system for children, it is imperative to hear the opinions and emotions of the children themselves. However, it can be challenging to investigate their emotions because of their limited ability to express themselves; children are still in the process of developing their literacy and communication skills. The smiley face Likert scale is a survey method that uses face icons to let subjects express emotion regardless of their literacy. The method typically uses five to seven face icons per survey, but this limited number of face icons restricts how finely children's emotions can be measured. This paper proposes a novel survey method that introduces the visual analog scale into the smiley face Likert scale. We have developed a tool that allows children to continuously change the style of a face icon's eyebrows and mouth, which also allows investigators to understand their subjects' emotions in detail. Through a preliminary study, we confirmed that children could express emotion using our face icons to an extent similar to that of adults.
{"title":"A Smiley Face Icon Creator for Evaluating Emotion with Children","authors":"Yudai Kawakami, Kohei Matsumura, Naomi Iga, H. Noma","doi":"10.1145/3391203.3391228","DOIUrl":"https://doi.org/10.1145/3391203.3391228","url":null,"abstract":"In designing an interactive system for children, it is imperative to hear the opinions and emotions of children themselves. However, it can be challenging to investigate their emotions because of their limited abilities of expression. Children are still in the process of developing their literacy and communication skills. The smiley face Likert scale is a survey method that utilizes face icons to express emotion regardless of the subject's literacy. The method typically uses five to seven face icons for a survey. However, this limited number of face icons restricts the measurement of children's emotions in terms of detail. This paper proposes a novel survey method that introduces the visual analog scale into the smiley face Likert scale. We have developed a tool that would allow children to change the styles of their eyebrows and mouth continuously. This method would also allow investigators to understand their subjects' emotions in detail. Through our preliminary study, we have confirmed that children could express emotion using our face icons to an extent similar to that of adults.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129625285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-confidence, an assurance of one's own decisions and abilities, is one of the most important factors in learning. When there is a gap between a learner's confidence and comprehension, the learner loses the chance to review the learning subject correctly. To solve this problem, we propose a system which estimates self-confidence while solving multiple-choice questions using eye tracking and gives feedback about which questions should be reviewed carefully. The system was evaluated in an experiment involving 20 participants. We observed that the correct answer rates of questions increased by 14% and 17% when feedback was given about correct answers made without confidence and incorrect answers made with confidence, respectively.
{"title":"Gaze-Based Self-Confidence Estimation on Multiple-Choice Questions and Its Feedback","authors":"Shoya Ishimaru, Takanori Maruichi, K. Kise, A. Dengel","doi":"10.1145/3391203.3391227","DOIUrl":"https://doi.org/10.1145/3391203.3391227","url":null,"abstract":"Self-confidence - an assurance of one's personal decision and ability - is one of the most important factors in learning. When there is a gap between a learner's confidence and comprehension, the learner loses a chance to review a learning subject correctly. To solve this problem, we propose a system which estimates self-confidence while solving multiple-choice questions by eye tracking and gives feedback about which question should be reviewed carefully. The system was evaluated in our experiment involving 20 participants. We observed that correct answer rates of questions were increased by 14% and 17% by giving feedback about correct answers without confidence and incorrect answers with confidence, respectively.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121723953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology has the potential to improve the lives of underprivileged communities in developing regions of the world, especially those with low literacy skills. Human-Computer Interaction (HCI) researchers have conducted studies to understand effective ways to communicate with this group using Information and Communication Technologies (ICTs). Beyond facilitating communication, ICTs have transformed the way low-literate users send money, learn skills, seek references, and more. As instructions (guidelines, rules, laws, or warnings) play an important part in using these services on ICTs, finding effective ways to deliver those instructions has become crucial for HCI practitioners, since low-literate users face difficulties in using ICTs with only textual interfaces [1]. This study focuses on communicating these instructions through visual communication in the form of instructional illustrations. It further investigates the effectiveness of instructional illustrations using an educational mobile app and compares them with traditional instructional video communication with the low-literate group of Anganwadi workers in India.
{"title":"Study of Instructional Illustrations on ICTs: Considering persona of low-literate users from India","authors":"Rucha Tulaskar","doi":"10.1145/3391203.3391217","DOIUrl":"https://doi.org/10.1145/3391203.3391217","url":null,"abstract":"Technology has the potential to improve the lives of underprivileged communities from developing regions of the world, especially those with low-literacy skills. Human Computer Interaction (HCI) researchers have conducted studies to understand an effective way to communicate with this group using Information and Communication Technologies (ICTs). Beyond facilitating communication, ICTs have transformed the way low-literate users send money, learn skills and seek references, etc. As instructions (guidelines, rules, laws or warnings) play an important part while using these services on ICTs, finding effective ways to deliver those instructions has become crucial for HCI practitioners, as low-literate users face difficulties in using ICTs with only textual interfaces [1]. This study focuses on communicating these instructions through visual communication in the form of instructional illustrations. The study further investigates the effectiveness of 'Instructional Illustrations' using an educational mobile app and compares it with traditional instructional video communication with the low-literate group of Anganwadi workers in India.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124771770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a method of expanding the input vocabulary of a smartphone by using the tapping force on its pressure-sensitive touchscreen. In our method, the input mode is switched by users controlling multiple levels of tapping force. To design the method, we conducted a preliminary user study to investigate the maximum number of force levels that users can control and found thresholds for distinguishing the tapping forces that users exert. The results showed that the accuracies for 3 and 4 levels of tapping force without feedback were 84.9% and 77.7%, respectively, and that the thresholds should be calibrated per user.
{"title":"Preliminary Investigation of Tapping Force on Pressure-Sensitive Touchscreen for Expanding Input Vocabulary on Smartphone","authors":"Ryo Ikeda, Yuta Urushiyama, B. Shizuki","doi":"10.1145/3391203.3391224","DOIUrl":"https://doi.org/10.1145/3391203.3391224","url":null,"abstract":"We propose a method of expanding the input vocabulary of a smartphone by using tapping force on its pressure-sensitive touchscreen. In our method, the input mode is switched by users controlling multiple levels of tapping force. To design our method, we conducted a preliminary user study to investigate the maximum number of levels in which users can control their tapping force. We found the thresholds for distinguishing the tapping force that users exert. The results showed that the accuracy of the 3 and 4 levels of tapping force without feedback were 84.9% and 77.7%, respectively, and that the thresholds should be calibrated per user.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126424426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a device shape sensing method that uses coils inductively coupled in the horizontal direction. By observing changes in the coupling characteristics between coils arranged side by side, the positional relationship between the coils can be detected. With the proposed technique, the shape of an object equipped with coils can be sensed without an external sensing mechanism such as a camera. In this paper, we present two types of prototypes, a rigid device combining multiple small tiles and a flexible device, and verify their basic characteristics. With the proposed technique, a deformable user interface supporting various inputs can be constructed simply by forming simple metal wiring on the device.
{"title":"A Self-Sensing Technique Using Inductively-Coupled Coils for Deformable User Interfaces","authors":"J. Kadomoto, H. Irie, S. Sakai","doi":"10.1145/3391203.3391226","DOIUrl":"https://doi.org/10.1145/3391203.3391226","url":null,"abstract":"We present a device shape sensing method using coils that are inductively coupled in the horizontal direction. By observing the change in the coupling characteristics between the coils arranged side by side, the positional relationship between the coils can be detected. By using the proposed technique, the shape of an object having a coil can be sensed without an external sensing mechanism such as a camera. In this paper, we show two types of prototypes, a rigid device combining multiple small tiles and a flexible device, and verify the basic characteristics. By utilizing the proposed technique, a deformable user interface with various inputs can be constructed simply by forming simple metal wiring on the device.","PeriodicalId":403163,"journal":{"name":"Proceedings of the 2020 Symposium on Emerging Research from Asia and on Asian Contexts and Cultures","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114378352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}