This research pursues the question of how the computer-generated analysis and visualization of communication can foster collaboration in teams that work together online. The audio data of regular online video meetings of three different teams were analyzed. Structural information regarding their communication was visualized in a communication report and then discussed with the teams in so-called digitally supported coaching (DSC) sessions. The aim of the DSC is to improve team collaboration by discerning helpful and less helpful patterns in the teams’ communication. This report allows us to recognize individual positions within the teams, as well as communication structures, such as conversational turn-taking, that other research has shown to be relevant for group intelligence. Feedback from the team members during the DSC was gathered via questionnaires. These qualitative data were then matched with the quantitative data derived from the calls, particularly social network analysis (SNA). The SNA was inferred from the average number of interactions between the participants as measured in the calls. The qualitative findings of the teams were then cross-checked with the quantitative analysis. As a result, the assessment of team members’ roles was highly coherent with the SNA. Furthermore, all teams managed to derive concrete measures for improving their collaboration based on the reflection in the DSC.
Title: Developing Teams by Visualizing Their Communication Structures in Online Meetings
Multimodal Technologies and Interaction, 2023-10-19. DOI: https://doi.org/10.3390/mti7100100
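The SNA step described in this abstract, inferring a communication network from interaction counts between participants, can be sketched as follows. The interaction log, participant names, and the degree-based centrality measure are illustrative assumptions for the sketch, not the authors' actual pipeline.

```python
from collections import defaultdict

# Hypothetical interaction log from recorded calls:
# each tuple is one observed (speaker, addressee) turn exchange.
interactions = [
    ("Ann", "Ben"), ("Ben", "Ann"), ("Ann", "Cem"),
    ("Cem", "Ann"), ("Ann", "Ben"), ("Ben", "Cem"),
]

def interaction_matrix(pairs):
    """Count directed interactions between each pair of participants."""
    counts = defaultdict(int)
    for src, dst in pairs:
        counts[(src, dst)] += 1
    return dict(counts)

def degree_centrality(counts):
    """Total interactions (incoming + outgoing) per participant: a simple
    proxy for an individual's position in the team's communication network."""
    totals = defaultdict(int)
    for (src, dst), n in counts.items():
        totals[src] += n
        totals[dst] += n
    return dict(totals)

counts = interaction_matrix(interactions)
centrality = degree_centrality(counts)  # e.g. ranks Ann as most central here
```

A communication report like the one discussed in the DSC sessions could then visualize `counts` as a weighted directed graph and `centrality` as node size.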
Patrick Coe, Grigori Evreinov, Mounia Ziat, Roope Raisamo
In this paper, we report a method of implementing a universal volumetric haptic actuation platform which can be adapted to fit a wide variety of visual displays with flat surfaces. This platform aims to enable the simulation of the 3D features of input interfaces. This goal is achieved using four readily available stepper motors in a diagonal cross configuration, with which we can quickly change the position of a surface in a manner that renders these volumetric features. In our research, we use a Microsoft Surface Go tablet placed on the haptic actuation platform to replicate the exploratory features of virtual keyboard keycaps displayed on the touchscreen. We ask seven participants to explore the surface of a virtual keypad comprising 12 keycaps. As a second task, random key positions are announced one at a time, which the participant is expected to locate. These experiments are used to understand how, and with what fidelity, volumetric feedback can improve performance (detection time, track length, and error rate) in locating specific keycaps with haptic feedback and in the absence of visual feedback. Participants completed the tasks successfully (p < 0.05). In addition, their ability to feel convex keycaps is confirmed by their subjective comments.
Title: A Universal Volumetric Haptic Actuation Platform
Multimodal Technologies and Interaction, 2023-10-17. DOI: https://doi.org/10.3390/mti7100099
Ana Beatriz Marques, Vasco Branco, Rui Costa, Nina Costa
Immersive Unit Visualization is an emergent form of visualization, arising from Immersive Analytics, in which, unlike traditional visualizations, each data point is represented by an individual visual mark in an immersive virtual environment. This practice has focused almost exclusively on virtual reality, excluding augmented reality (AR). This article develops and tests a prototype of an Immersive Unit Visualization (Floating Companies II) with two AR devices: a head-mounted display (HMD) and a hand-held display (HHD). Results from the testing sessions with 20 users were analyzed through qualitative analysis and thematic coding, indicating that, while the HHD enabled a first contact with AR visualization on a familiar device, the HMD improved the perception of hybrid space by supporting greater stability of virtual content, a wider field of view, improved spatial perception, an increased sense of immersion, and more realistic simulation, which had an impact on information reading and sense-making. The materialization of abstract quantitative values into concrete reality through simulation in the real environment, together with the ludic dimension, stands out as an important opportunity for this type of visualization. This paper investigates the aspects distinguishing the two experiences of data visualization in hybrid space and characterizes ways of seeing information with AR, identifying opportunities to advance information design research.
Title: Immersive Unit Visualization with Augmented Reality
Multimodal Technologies and Interaction, 2023-10-17. DOI: https://doi.org/10.3390/mti7100098
Sign language (SL) avatar systems aid communication between the hearing and deaf communities. Despite technological progress, there is a lack of a standardized avatar development framework. This paper offers a systematic review of SL avatar systems spanning from 1982 to 2022. Using PRISMA guidelines, we shortlisted 47 papers from an initial 1765, focusing on sign synthesis techniques, corpora, design strategies, and facial expression methods. We also discuss both objective and subjective evaluation methodologies. Our findings highlight key trends and suggest new research avenues for improving SL avatars.
Title: Evolution and Trends in Sign Language Avatar Systems: Unveiling a 40-Year Journey via Systematic Review
Authors: Maryam Aziz, Achraf Othman
Multimodal Technologies and Interaction, 2023-10-16. DOI: https://doi.org/10.3390/mti7100097
Alberto Sanchez-Acedo, Alejandro Carbonell-Alcocer, Manuel Gertrudix, Jose Luis Rubio-Tamayo
Immersive journalism is a new form of media communication that uses extended reality systems to produce its content. Despite the possibilities it offers, its use in the media is still limited due to the lack of systematised, scientific knowledge regarding its application. This matters because the technology changes the way audiences receive information: it can be used both for new forms of storytelling that generate greater user engagement and for highly sophisticated disinformation, which makes studying it important. This study analyses articles published in the last 5 years that cover the use of extended technologies and the metaverse applied to immersive journalism. A systematic literature review applying PRISMA was carried out to identify literature within Web of Science, Scopus and Google Scholar (n = 61). Quantitative and qualitative analyses were conducted on the data collection techniques, the types of data and the analysis techniques used. The results show a low level of methodological maturity, with research that is fundamentally descriptive and not very formalised, which limits the scope of its results and, therefore, the transfer of knowledge for its application in the configuration of new immersive journalistic products. The metaverse and extended technologies are considered independently and with distinct applications. It is concluded that research in this area is still in an initial, exploratory and generalist stage, offering results that are not yet applicable to the promotion of this type of media format.
Title: Metaverse and Extended Realities in Immersive Journalism: A Systematic Literature Review
Multimodal Technologies and Interaction, 2023-10-12. DOI: https://doi.org/10.3390/mti7100096
The field of brain–computer interfaces (BCIs) enables us to establish a pathway between the human brain and computers, with applications in medical and nonmedical fields. Brain–computer interfaces can have a significant impact on the way humans interact with machines. In recent years, the surge in computational power has enabled deep learning algorithms to act as a robust avenue for leveraging BCIs. This paper provides an up-to-date review of deep and hybrid deep learning techniques utilized in the field of motor-imagery BCI. It delves into the adoption of deep learning techniques, including convolutional neural networks (CNNs), autoencoders (AEs), and recurrent structures such as long short-term memory (LSTM) networks. Moreover, hybrid approaches, such as combining CNNs with LSTMs or AEs and other techniques, are reviewed for their potential to enhance classification performance. Finally, we address challenges within motor-imagery BCIs and highlight further research directions in this emerging field.
Title: Current Trends, Challenges, and Future Research Directions of Hybrid and Deep Learning Techniques for Motor Imagery Brain–Computer Interface
Authors: Emmanouil Lionakis, Konstantinos Karampidis, Giorgos Papadourakis
Multimodal Technologies and Interaction, 2023-10-12. DOI: https://doi.org/10.3390/mti7100095
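As a rough illustration of the hybrid CNN-plus-recurrent idea reviewed in this paper, the sketch below runs a toy motor-imagery trial through a per-channel temporal convolution (the CNN stage) and then folds the convolved sequence into a fixed-size hidden state with a plain Elman-style recurrence standing in for an LSTM. All shapes, weights, and the smoothing kernel are illustrative assumptions, not a model from the reviewed literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical motor-imagery EEG trial: 4 channels x 128 time samples.
trial = rng.standard_normal((4, 128))

def conv1d_features(x, kernel):
    """CNN stage: apply the same temporal convolution to each channel."""
    return np.stack([np.convolve(ch, kernel, mode="valid") for ch in x])

def recurrent_summary(feats, w_in, w_rec):
    """Recurrent stage (Elman step as a stand-in for an LSTM): fold the
    convolved sequence, timestep by timestep, into one hidden state."""
    h = np.zeros(w_rec.shape[0])
    for t in range(feats.shape[1]):
        h = np.tanh(w_in @ feats[:, t] + w_rec @ h)
    return h

kernel = np.array([0.25, 0.5, 0.25])        # small temporal smoothing filter
feats = conv1d_features(trial, kernel)       # shape (4, 126)
w_in = rng.standard_normal((8, 4)) * 0.1     # input weights (toy values)
w_rec = rng.standard_normal((8, 8)) * 0.1    # recurrent weights (toy values)
h = recurrent_summary(feats, w_in, w_rec)    # (8,) feature vector
```

In a real pipeline, `h` would feed a classifier head predicting the imagined movement class; the review's point is that the convolutional and recurrent stages capture spatial-spectral and temporal structure, respectively.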
Sari Yli-Kauhaluoma, Milt Statheropoulos, Anne Zygmanowski, Osmo Anttalainen, Hanna Hakulinen, Maria Theodora Kontogianni, Matti Kuula, Johannes Pernaa, Paula Vanninen
Public warning systems are an essential element of safe cities. However, the functionality of neither traditional nor digital emergency warnings is understood well enough from the perspective of citizens. This study examines smart city development from the perspective of safety by exploring citizens’ viewpoints. It investigates people’s perceptions of the ways in which they obtain warnings and information about emergencies involving health risks. Data were collected through focus group interviews and semi-structured interviews in Finland, Germany, and Greece. The results suggest that, in order to obtain public warnings, people place a lot of trust in their social networks, in receiving text messages, and in their ability to use web-based search engines. The study discusses the challenges citizens identified in the use of conventional radio and television transmissions and sirens for public warnings. The results also show that citizens demonstrate informed ignorance about existing mobile emergency applications. Our results imply that it is not sufficient to build emergency communication infrastructure: the development of smart, safe cities requires continuous work and the integration of both hard and soft infrastructure-oriented strategies, i.e., technological infrastructure development including digitalisation, alongside education, advancement of knowledge, and participation of people. Both strategic aspects are essential to enable people to take advantage of novel digital applications in emergency situations.
Title: Safe City: A Study of Channels for Public Warnings for Emergency Communication in Finland, Germany, and Greece
Multimodal Technologies and Interaction, 2023-10-10. DOI: https://doi.org/10.3390/mti7100094
Relational cues are extracts from actual verbal dialogues that help build the therapist–patient working alliance and a stronger bond through the depiction of empathy, respect and openness. Embodied conversational agents (ECAs) are human-like virtual agents that exhibit verbal and non-verbal behaviours. In the digital health space, ECAs act as health coaches or experts. ECA dialogues have previously been designed to include relational cues to motivate patients to change their current behaviours and encourage adherence to a treatment plan. However, there is little understanding of who finds specific relational cues delivered by an ECA helpful or not. Drawing the literature together, we have categorised relational cues into empowering, working alliance, affirmative and social dialogue. In this study, we have embedded the dialogue of Alex, an ECA that encourages healthy behaviours, with either all of the relational cues (empathic Alex) or none of them (neutral Alex). A total of 206 participants were randomly assigned to interact with either empathic or neutral Alex and were also asked to rate the helpfulness of selected relational cues. We explore whether the perceived helpfulness of the relational cues is a good predictor of users’ intention to change the recommended health behaviours and/or development of a working alliance. Our models also investigate the impact of individual factors, including the gender, age, culture and personality traits of the users. The idea is to establish whether a group of individuals with similar individual factors found a particular cue or group of cues helpful. This will inform future versions of Alex, allowing it to tailor its dialogue to specific groups, and will help in building ECAs with multiple personalities and roles.
Title: Identifying Which Relational Cues Users Find Helpful to Allow Tailoring of e-Coach Dialogues
Authors: Sana Salman, Deborah Richards, Mark Dras
Multimodal Technologies and Interaction, 2023-10-02. DOI: https://doi.org/10.3390/mti7100093
Rima Shishakly, Mohammed Amin Almaiah, Shaha Al-Otaibi, Abdalwali Lutfi, Mahmaod Alrawad, Ahmed Almulhem
Mobile learning has become increasingly important for higher education due to its numerous advantages and transformative potential. The aim of this study is to investigate how students perceive and utilize mobile learning (m-learning) services in universities. To achieve this objective, a conceptual model was developed, combining the technology acceptance model (TAM) with additional new determinants: perceived security, perceived trust, perceived risk, and service quality. The primary goal of this model is to assess the adoption of m-learning apps among users in university settings. Structural equation modeling (SEM) was utilized to test the research model. The findings highlight the critical roles of perceived security, perceived trust, and service quality in promoting the adoption of m-learning apps. Moreover, the results indicate that perceived risk negatively impacts both students’ trust and their attitudes towards using m-learning services. The study reveals that the perceived trust and service quality factors positively influence students’ attitudes towards adopting m-learning apps. These findings hold significant implications for universities and academia, offering valuable insights to devise effective strategies for increasing the utilization of m-learning services among students. By gaining a deeper understanding of students’ perceptions and acceptance, universities can optimize their m-learning offerings to cater to students’ needs and preferences more effectively.
"A New Technological Model on Investigating the Utilization of Mobile Learning Applications: Extending the TAM" — Rima Shishakly, Mohammed Amin Almaiah, Shaha Al-Otaibi, Abdalwali Lutfi, Mahmaod Alrawad, Ahmed Almulhem. Multimodal Technologies and Interaction, 2023-09-20. https://doi.org/10.3390/mti7090092
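The paper's full SEM analysis is not reproduced here, but the structure of such a path model can be sketched. The following toy example is purely illustrative: it simulates synthetic scores under the reported directions of effect (trust and attitude rise with security and service quality, fall with perceived risk) and recovers the path coefficients with ordinary least squares as a simplified stand-in for full structural equation modeling; all variable names and effect sizes are assumptions, not the authors' data.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n = 500

# Synthetic latent-proxy scores (illustrative only; the paper uses survey items).
security = rng.normal(size=n)
risk = rng.normal(size=n)
quality = rng.normal(size=n)

# Structural assumptions mirroring the reported directions of effect:
# trust rises with security and quality, falls with risk; attitude follows trust and quality.
trust = 0.5 * security + 0.4 * quality - 0.3 * risk + rng.normal(scale=0.5, size=n)
attitude = 0.6 * trust + 0.3 * quality - 0.2 * risk + rng.normal(scale=0.5, size=n)

def paths(y, preds):
    """OLS path coefficients of y on the given predictors (intercept included, then dropped)."""
    X = np.column_stack(preds + [np.ones(len(y))])
    beta, *_ = lstsq(X, y, rcond=None)
    return beta[:-1]

trust_paths = paths(trust, [security, quality, risk])
att_paths = paths(attitude, [trust, quality, risk])
print("trust ~ security, quality, risk:", np.round(trust_paths, 2))
print("attitude ~ trust, quality, risk:", np.round(att_paths, 2))
```

With this setup the recovered signs match the study's qualitative findings: the risk paths come out negative, while the trust and quality paths come out positive. A real analysis would use a dedicated SEM package and the survey data rather than OLS on simulated scores.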
Mouadh Guesmi, Mohamed Amine Chatti, Lamees Kadhim, Shoeb Joarder, Qurat Ul Ain
The fast growth of data in the academic field has made recommendation systems for scientific papers increasingly popular. Content-based filtering (CBF), a pivotal technique in recommender systems (RS), holds particular significance in the realm of scientific publication recommendations. In a content-based scientific publication RS, recommendations are generated from the features of users and papers. Content-based recommendation encompasses three primary steps: item representation, user modeling, and recommendation generation. User modeling is a crucial part of generating recommendations, yet this step is often neglected in existing content-based scientific publication RS. Moreover, most existing approaches do not capture the semantics of user models and papers. To address these limitations, in this paper we present a transparent Recommendation and Interest Modeling Application (RIMA), a content-based scientific publication RS that implicitly derives user interest models from the papers users have authored. To address the semantic issues, RIMA combines word-embedding-based keyphrase extraction techniques with knowledge bases to generate semantically enriched user interest models, and additionally leverages pretrained transformer sentence encoders to represent user models and papers and compute their similarities. The effectiveness of our approach was assessed through an offline evaluation, with extensive experiments on various datasets, along with a user study (N = 22), demonstrating that (a) combining SIFRank and SqueezeBERT as an embedding-based keyphrase extraction method with DBpedia as a knowledge base improved the quality of the user interest modeling step, and (b) using the msmarco-distilbert-base-tas-b sentence transformer model achieved better results in the recommendation generation step.
"Semantic Interest Modeling and Content-Based Scientific Publication Recommendation Using Word Embeddings and Sentence Encoders" — Mouadh Guesmi, Mohamed Amine Chatti, Lamees Kadhim, Shoeb Joarder, Qurat Ul Ain. Multimodal Technologies and Interaction, 2023-09-15. https://doi.org/10.3390/mti7090091
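The recommendation-generation step described above — embedding the user interest model and candidate papers with a sentence encoder, then ranking papers by similarity to the user model — can be sketched as follows. To stay self-contained, this uses small hand-made vectors in place of real msmarco-distilbert-base-tas-b embeddings; the function and variable names are illustrative assumptions, not RIMA's actual API.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_papers(user_vec, paper_vecs):
    """Return paper ids sorted by descending cosine similarity to the user model."""
    scores = {pid: cosine(user_vec, v) for pid, v in paper_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy vectors standing in for sentence-transformer embeddings of the
# user interest model and of each candidate paper.
user_model = np.array([0.9, 0.1, 0.3])
papers = {
    "paper_a": np.array([0.8, 0.2, 0.4]),   # closely aligned with the user's interests
    "paper_b": np.array([-0.5, 0.9, 0.1]),  # largely dissimilar
    "paper_c": np.array([0.1, 0.1, 0.95]),  # partial overlap
}
print(rank_papers(user_model, papers))  # paper_a should rank first
```

In the actual system, the vectors would come from encoding the semantically enriched interest model and the paper texts with the chosen sentence transformer; only the ranking logic is shown here.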