In this essay, we investigate some of the possible relations between design and the digital humanities. In particular, we analyze the contribution that communication and interface design can bring to the digital humanities. In a scene currently characterized by a heterogeneous set of humanistic, technological, and cultural studies and activities, the involvement of design seems confined to the development of digital instruments for accessing, exploring, and manipulating cultural data. How can design and the humanities work in an interdisciplinary way to shape new digital means of exploring humanistic content? This essay presents four case studies (three of them developed by the authors), each of which suggests methods and tools focused on interdisciplinary collaboration among scholars. The findings comprise both models of collaboration and models of digital architecture (data visualization), and they showcase applied interactive digital platforms that offer several paths for discovering different levels of content in the fields of art, psychology, literature, and history. In conclusion, this essay presents a manifesto of ten points of virtuous relation between design, the digital humanities, and the field of information visualization.
{"title":"Design, Digital Humanities, and Information Visualization for Cultural Heritage","authors":"Raffaella Trocchianesi, Letizia Bollini","doi":"10.3390/mti7110102","DOIUrl":"https://doi.org/10.3390/mti7110102","url":null,"abstract":"In this essay, we are interested in investigating some of the possible relations between design and digital humanities. In particular, we analyze the contribution that communication and interface design can bring to digital humanities. In a scene currently characterized by a heterogeneous set of activities and humanistic, technological, and cultural studies, the involvement of design seems confined to the development of digital instruments in accessing, exploring, and manipulating cultural data. How can design and the humanities work in an interdisciplinary way in order to shape new digital means to explore humanistic content? This essay presents four case studies (three of them developed by the authors), each of which suggests some methods and tools focused on the interdisciplinary relationships of scholars. The findings are both models of collaboration and models of digital architecture (data visualization) and showcase applied digital interactive platforms that present several paths to discovering different levels of content in the fields of art, psychology, literature, and history. In conclusion, this essay presents a manifesto focusing on ten points of virtuous relation between design humanities and the field of information visualization.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"44 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135221200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study explored drivers’ emotion-based impressions of car-borne central control platforms (CBCCPs) for personal-use vehicles. This preference-based study examined experts’ and drivers’ opinions regarding the appeal of CBCCPs from the perspective of Miryoku engineering. To this end, the data were analyzed via the evaluation grid method (EGM) and quantification theory type I. Drivers’ preferences for specific CBCCP design characteristics were categorized into the factors “legible”, “convenient”, and “tasteful”, which comprised the core of the EGM semantic hierarchical diagram. In addition, the importance of CBCCPs’ appeal factors and characteristics was assessed through quantification theory type I. The findings of this study provide valuable insights for designers, manufacturers, and researchers interested in the design of CBCCPs. Additionally, the results of this study can contribute to research on applied psychology, human–computer interaction, and car interface design.
{"title":"Exploring the Appeal of Car-Borne Central Control Platforms Based on Driving Experience","authors":"Chih-Kuan Lin, Chien-Hsiung Chen, Kai-Shuan Shen","doi":"10.3390/mti7110101","DOIUrl":"https://doi.org/10.3390/mti7110101","url":null,"abstract":"This study explored drivers’ emotion-based impressions of car-borne central control platforms (CBCCPs) for personal-use vehicles. Thus, this preference-based study examined experts’ and drivers’ opinions regarding the appeal of CBCCPs from the perspective of Miryoku engineering. To this end, this study analyzed data via the EGM (evaluation grid method (EGM) and quantification theory type I. Results: Drivers’ preferences for specific CBCCP design characteristics were categorized into the factors “legible”, convenient”, and “tasteful”, which comprised the core of the EGM semantic hierarchical diagram. In addition, the importance of CBCCPs’ appeal factors and characteristics was assessed through quantification theory type I. The findings of this study provide valuable insights for designers, manufacturers, and researchers interested in the design of CBCCPs. Additionally, the results of this study can contribute to research on applied psychology, human–computer interactions, and car interface design.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"27 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136134721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research pursues the question of how computer-generated analysis and visualization of communication can foster collaboration in teams that work together online. The audio data of regular online video meetings of three different teams were analyzed. Structural information regarding their communication was visualized in a communication report and then discussed with the teams in so-called digitally supported coaching (DSC) sessions. The aim of the DSC is to improve team collaboration by discerning helpful and less helpful patterns in the teams’ communication. The report makes it possible to recognize individual positions within the teams, as well as communication structures, such as conversational turn-taking, that other research has shown to be relevant for group intelligence. The insights gained by the team members during the DSC were gathered via questionnaires. These qualitative data were then matched with the quantitative data derived from the calls, in particular a social network analysis (SNA) inferred from the average number of interactions between the participants as measured in the calls. The qualitative findings of the teams were then cross-checked against the quantitative analysis. As a result, the assessment of team members’ roles was highly coherent with the SNA. Furthermore, all teams managed to derive concrete measures for improving their collaboration based on the reflection in the DSC.
{"title":"Developing Teams by Visualizing Their Communication Structures in Online Meetings","authors":"Thomas Spielhofer, Renate Motschnig","doi":"10.3390/mti7100100","DOIUrl":"https://doi.org/10.3390/mti7100100","url":null,"abstract":"This research pursues the question of how the computer-generated analysis and visualization of communication can foster collaboration in teams that work together online. The audio data of regular online video meetings of three different teams were analyzed. Structural information regarding their communication was visualized in a communication report, and then, discussed with the teams in so-called digitally supported coaching (DSC) sessions. The aim of the DSC is to improve team collaboration by discerning helpful and less helpful patterns in the teams’ communication. This report allows us to recognize individual positions within the teams, as well as communication structures, such as conversational turn taking, that are relevant for group intelligence, as other research has shown. The findings pertaining to the team members during the DSC were gathered via questionnaires. These qualitative data were then matched with the quantitative data derived from the calls, particularly social network analysis (SNA). The SNA was inferred using the average number of interactions between the participants as measured in the calls. The qualitative findings of the teams were then cross-checked with the quantitative analysis. As a result, the assessment of team members’ roles was highly coherent with the SNA. Furthermore, all teams managed to derive concrete measures for improving their collaboration based on the reflection in the DSC.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135730868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we report a method of implementing a universal volumetric haptic actuation platform that can be adapted to fit a wide variety of visual displays with flat surfaces. The platform aims to enable the simulation of the 3D features of input interfaces. This goal is achieved using four readily available stepper motors in a diagonal cross configuration, with which we can quickly change the position of a surface in a manner that renders these volumetric features. In our research, we use a Microsoft Surface Go tablet placed on the haptic actuation platform to replicate the exploratory features of virtual keyboard keycaps displayed on the touchscreen. We ask seven participants to explore the surface of a virtual keypad comprising 12 keycaps. As a second task, random key positions are announced one at a time, which the participant is expected to locate. These experiments are used to understand how, and with what fidelity, volumetric feedback could improve performance (detection time, track length, and error rate) in locating specific keycaps with haptic feedback and in the absence of visual feedback. Participants complete the tasks with a high rate of success (p < 0.05). In addition, their ability to feel convex keycaps is confirmed by their subjective comments.
{"title":"A Universal Volumetric Haptic Actuation Platform","authors":"Patrick Coe, Grigori Evreinov, Mounia Ziat, Roope Raisamo","doi":"10.3390/mti7100099","DOIUrl":"https://doi.org/10.3390/mti7100099","url":null,"abstract":"In this paper, we report a method of implementing a universal volumetric haptic actuation platform which can be adapted to fit a wide variety of visual displays with flat surfaces. This platform aims to enable the simulation of the 3D features of input interfaces. This goal is achieved using four readily available stepper motors in a diagonal cross configuration with which we can quickly change the position of a surface in a manner that can render these volumetric features. In our research, we use a Microsoft Surface Go tablet placed on the haptic enhancement actuation platform to replicate the exploratory features of virtual keyboard keycaps displayed on the touchscreen. We ask seven participants to explore the surface of a virtual keypad comprised of 12 keycaps. As a second task, random key positions are announced one at a time, which the participant is expected to locate. These experiments are used to understand how and with what fidelity the volumetric feedback could improve performance (detection time, track length, and error rate) of detecting the specific keycaps location with haptic feedback and in the absence of visual feedback. Participants complete the tasks with great success (p < 0.05). In addition, their ability to feel convex keycaps is confirmed within the subjective comments.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135993449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive Unit Visualization is an emergent form of visualization, arising from Immersive Analytics, in which, unlike in traditional visualizations, each data point is represented by an individual visual mark in an immersive virtual environment. This practice has focused almost exclusively on virtual reality, excluding augmented reality (AR). This article develops and tests a prototype of an Immersive Unit Visualization (Floating Companies II) with two AR devices: a head-mounted display (HMD) and a hand-held display (HHD). Results from testing sessions with 20 users were analyzed through qualitative analysis and thematic coding, indicating that, while the HHD enabled a first contact with AR visualization on a familiar device, the HMD improved the perception of hybrid space by supporting greater stability of the virtual content, a wider field of view, improved spatial perception, an increased sense of immersion, and more realistic simulation, which had an impact on information reading and sense-making. The materialization of abstract quantitative values in concrete reality, through their simulation in the real environment, and the ludic dimension stand out as important opportunities for this type of visualization. This paper investigates the aspects distinguishing the two experiences of data visualization in hybrid space and characterizes ways of seeing information with AR, identifying opportunities to advance information design research.
{"title":"Immersive Unit Visualization with Augmented Reality","authors":"Ana Beatriz Marques, Vasco Branco, Rui Costa, Nina Costa","doi":"10.3390/mti7100098","DOIUrl":"https://doi.org/10.3390/mti7100098","url":null,"abstract":"Immersive Unit Visualization is an emergent form of visualization that arose from Immersive Analytics where, unlike traditional visualizations, each data point is represented by an individual visual mark in an immersive virtual environment. This practice has focused almost exclusively on virtual reality, excluding augmented reality (AR). This article develops and tests a prototype of an Immersive Unit Visualization (Floating Companies II) with two AR devices: head-mounted display (HMD) and hand-held display (HHD). Results from the testing sessions with 20 users were analyzed through qualitative research analysis and thematic coding indicating that, while the HHD enabled a first contact with AR visualization on a familiar device, HMD improved the perception of hybrid space by supporting greater stability of virtual content, wider field of view, improved spatial perception, increased sense of immersion, and more realistic simulation, which had an impact on information reading and sense-making. The materialization of abstract quantitative values into concrete reality through its simulation in the real environment and the ludic dimension stand out as important opportunities for this type of visualization. This paper investigates the aspects distinguishing two experiences regarding data visualization in hybrid space, and characterizes ways of seeing information with AR, identifying opportunities to advance information design research.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136033124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sign language (SL) avatar systems aid communication between the hearing and deaf communities. Despite technological progress, there is a lack of a standardized avatar development framework. This paper offers a systematic review of SL avatar systems spanning from 1982 to 2022. Using PRISMA guidelines, we shortlisted 47 papers from an initial 1765, focusing on sign synthesis techniques, corpora, design strategies, and facial expression methods. We also discuss both objective and subjective evaluation methodologies. Our findings highlight key trends and suggest new research avenues for improving SL avatars.
{"title":"Evolution and Trends in Sign Language Avatar Systems: Unveiling a 40-Year Journey via Systematic Review","authors":"Maryam Aziz, Achraf Othman","doi":"10.3390/mti7100097","DOIUrl":"https://doi.org/10.3390/mti7100097","url":null,"abstract":"Sign language (SL) avatar systems aid communication between the hearing and deaf communities. Despite technological progress, there is a lack of a standardized avatar development framework. This paper offers a systematic review of SL avatar systems spanning from 1982 to 2022. Using PRISMA guidelines, we shortlisted 47 papers from an initial 1765, focusing on sign synthesis techniques, corpora, design strategies, and facial expression methods. We also discuss both objective and subjective evaluation methodologies. Our findings highlight key trends and suggest new research avenues for improving SL avatars.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136116352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive journalism is a new form of media communication that uses extended reality systems to produce its content. Despite the possibilities it offers, its use in the media is still limited owing to the lack of systematised, scientific knowledge regarding its application. This matters because the technology changes the way audiences receive information and can be used both for new forms of storytelling that generate greater user engagement and for highly sophisticated disinformation, which is why it is important to study it. This study analyses articles published in the last 5 years that cover the use of extended technologies and the metaverse applied to immersive journalism. A systematic literature review applying PRISMA was carried out to identify literature within Web of Science, Scopus and Google Scholar (n = 61). Quantitative and qualitative analyses were conducted on the data collection techniques, the types of data and the analysis techniques used. The results show a low level of methodological maturity, with research that is fundamentally descriptive and not very formalised, which limits the scope of its results and, therefore, the transfer of knowledge needed to shape new immersive journalistic products. The metaverse and extended technologies are considered independently and with distinct applications. It is concluded that research in this area is still at an initial, exploratory and generalist stage, offering results that are not yet applicable to the promotion of this type of media format.
{"title":"Metaverse and Extended Realities in Immersive Journalism: A Systematic Literature Review","authors":"Alberto Sanchez-Acedo, Alejandro Carbonell-Alcocer, Manuel Gertrudix, Jose Luis Rubio-Tamayo","doi":"10.3390/mti7100096","DOIUrl":"https://doi.org/10.3390/mti7100096","url":null,"abstract":"Immersive journalism is a new form of media communication that uses extended reality systems to produce its content. Despite the possibilities it offers, its use is still limited in the media due to the lack of systematised and scientific knowledge regarding its application. This is a problem because it is a very powerful technology that changes the way audiences receive information and can be used both for new forms of storytelling that generate greater user engagement and for very sophisticated disinformation, which is why it is really important to study it. This study analyses articles published in the last 5 years that cover the use of extended technologies and the metaverse applied to immersive journalism. A systematic literature review applying PRISMA was carried out to identify literature within Web of Science, Scopus and Google Scholar (n = 61). Quantitative and qualitative analyses were conducted on the data collection techniques, the type of the data and the analysis techniques used. The results show a low level of methodological maturity, with research that is fundamentally descriptive and not very formalised, which limits the scope of its results and, therefore, the transfer of knowledge for its application in the configuration of new immersive journalistic products. The metaverse and extended technologies are considered independently and with distinct applications. It is concluded that research in this area is still in an initial exploratory and generalist stage that offers results that are not yet applicable to the promotion of this type of media format.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135968072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The field of brain–computer interfaces (BCIs) enables us to establish a pathway between the human brain and computers, with applications in medical and nonmedical fields. Brain–computer interfaces can have a significant impact on the way humans interact with machines. In recent years, the surge in computational power has enabled deep learning algorithms to act as a robust avenue for leveraging BCIs. This paper provides an up-to-date review of deep and hybrid deep learning techniques utilized in the field of motor imagery BCI. It delves into the adoption of deep learning techniques, including convolutional neural networks (CNNs), autoencoders (AEs), and recurrent structures such as long short-term memory (LSTM) networks. Moreover, hybrid approaches, such as combining CNNs with LSTMs or AEs and other techniques, are reviewed for their potential to enhance classification performance. Finally, we address challenges within motor imagery BCIs and highlight further research directions in this emerging field.
{"title":"Current Trends, Challenges, and Future Research Directions of Hybrid and Deep Learning Techniques for Motor Imagery Brain–Computer Interface","authors":"Emmanouil Lionakis, Konstantinos Karampidis, Giorgos Papadourakis","doi":"10.3390/mti7100095","DOIUrl":"https://doi.org/10.3390/mti7100095","url":null,"abstract":"The field of brain–computer interface (BCI) enables us to establish a pathway between the human brain and computers, with applications in the medical and nonmedical field. Brain computer interfaces can have a significant impact on the way humans interact with machines. In recent years, the surge in computational power has enabled deep learning algorithms to act as a robust avenue for leveraging BCIs. This paper provides an up-to-date review of deep and hybrid deep learning techniques utilized in the field of BCI through motor imagery. It delves into the adoption of deep learning techniques, including convolutional neural networks (CNNs), autoencoders (AEs), and recurrent structures such as long short-term memory (LSTM) networks. Moreover, hybrid approaches, such as combining CNNs with LSTMs or AEs and other techniques, are reviewed for their potential to enhance classification performance. Finally, we address challenges within motor imagery BCIs and highlight further research directions in this emerging field.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136013824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Public warning systems are an essential element of safe cities. However, the functionality of emergency warnings, whether traditional or digital, is not yet well enough understood from the perspective of citizens. This study examines smart city development from the perspective of safety by exploring citizens’ viewpoints. It investigates people’s perceptions of the ways in which they obtain warnings and information about emergencies involving health risks. Data were collected through focus group interviews and semi-structured interviews in Finland, Germany, and Greece. The results suggest that people place a great deal of trust in their social networks, in receiving text messages, and in their ability to use web-based search engines to obtain public warnings. The study discusses the challenges identified by citizens in the use of conventional radio and television broadcasts and sirens for public warnings. Based on the results, citizens demonstrate informed ignorance of existing mobile emergency applications. Our results imply that it is not sufficient to build emergency communication infrastructure: the development of smart, safe cities requires continuous work and the integration of both hard and soft infrastructure-oriented strategies, i.e., technological infrastructure development, including digitalisation, as well as education, the advancement of knowledge, and the participation of people. Both strategic aspects are essential to enable people to take advantage of novel digital applications in emergency situations.
{"title":"Safe City: A Study of Channels for Public Warnings for Emergency Communication in Finland, Germany, and Greece","authors":"Sari Yli-Kauhaluoma, Milt Statheropoulos, Anne Zygmanowski, Osmo Anttalainen, Hanna Hakulinen, Maria Theodora Kontogianni, Matti Kuula, Johannes Pernaa, Paula Vanninen","doi":"10.3390/mti7100094","DOIUrl":"https://doi.org/10.3390/mti7100094","url":null,"abstract":"Public warning systems are an essential element of safe cities. However, the functionality of neither traditional nor digital emergency warnings is understood well enough from the perspective of citizens. This study examines smart city development from the perspective of safety by exploring citizens’ viewpoints. It investigates people’s perceptions of the ways in which they obtain warnings and information about emergencies involving health risks. Data were collected in the form of focus group interviews and semi-structured interviews in Finland, Germany, and Greece. The results suggest that people place a lot of trust in their social network, receiving text messages, and their ability to use web-based search engines in order to obtain public warnings. The study discusses the challenges identified by citizens in the use of conventional radio and television transmissions and sirens for public warnings. Based on the results, citizens demonstrate informed ignorance about existing mobile emergency applications. Our results imply that it is not sufficient to build emergency communication infrastructure: the development of smart, safe cities requires continuous work and the integration of both hard and soft infrastructure-oriented strategies, i.e., technological infrastructure development including digitalisation and education, advancement of knowledge, and participation of people. Both strategic aspects are essential to enable people to take advantage of novel digital applications in emergency situations.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136356872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relational cues are extracts from actual verbal dialogues that help build the therapist–patient working alliance and a stronger bond through the depiction of empathy, respect and openness. Embodied conversational agents (ECAs) are human-like virtual agents that exhibit verbal and non-verbal behaviours. In the digital health space, ECAs act as health coaches or experts. ECA dialogues have previously been designed to include relational cues to motivate patients to change their current behaviours and encourage adherence to a treatment plan. However, there is little understanding of who finds specific relational cues delivered by an ECA helpful and who does not. Drawing the literature together, we have categorised relational cues into empowering, working alliance, affirmative and social dialogue. In this study, we embedded relational cues in the dialogue of Alex, an ECA that encourages healthy behaviours, creating a version with all the relational cues (empathic Alex) and one with none of the relational cues (neutral Alex). A total of 206 participants were randomly assigned to interact with either empathic or neutral Alex and were also asked to rate the helpfulness of selected relational cues. We explore whether the perceived helpfulness of the relational cues is a good predictor of users’ intention to change the recommended health behaviours and/or of the development of a working alliance. Our models also investigate the impact of individual factors, including the gender, age, culture and personality traits of the users. The idea is to establish whether a group of individuals who are similar in terms of individual factors found a particular cue or group of cues helpful. This will inform future versions of Alex, allowing Alex to tailor its dialogue to specific groups, and help in building ECAs with multiple personalities and roles.
{"title":"Identifying Which Relational Cues Users Find Helpful to Allow Tailoring of e-Coach Dialogues","authors":"Sana Salman, Deborah Richards, Mark Dras","doi":"10.3390/mti7100093","DOIUrl":"https://doi.org/10.3390/mti7100093","url":null,"abstract":"Relational cues are extracts from actual verbal dialogues that help build the therapist–patient working alliance and stronger bond through the depiction of empathy, respect and openness. ECAs (Embodied conversational agents) are human-like virtual agents that exhibit verbal and non-verbal behaviours. In the digital health space, ECAs act as health coaches or experts. ECA dialogues have previously been designed to include relational cues to motivate patients to change their current behaviours and encourage adherence to a treatment plan. However, there is little understanding of who finds specific relational cues delivered by an ECA helpful or not. Drawing the literature together, we have categorised relational cues into empowering, working alliance, affirmative and social dialogue. In this study, we have embedded the dialogue of Alex, an ECA, to encourage healthy behaviours with all the relational cues (empathic Alex) or with none of the relational cues (neutral Alex). A total of 206 participants were randomly assigned to interact with either empathic or neutral Alex and were also asked to rate the helpfulness of selected relational cues. We explore if the perceived helpfulness of the relational cues is a good predictor of users’ intention to change the recommended health behaviours and/or development of a working alliance. Our models also investigate the impact of individual factors, including gender, age, culture and personality traits of the users. The idea is to establish whether a certain group of individuals having similarities in terms of individual factors found a particular cue or group of cues helpful. This will establish future versions of Alex and allow Alex to tailor its dialogue to specific groups, as well as help in building ECAs with multiple personalities and roles.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135829061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}