The Method Radar: a way to organize methods for technology development with participation in mind
Marc Schmidt, Stefan Bente, Bruno Baruque Zanón, Ana María Lara Palma
Abstract Identifying appropriate methods for any process, such as uncovering the needs of juveniles in social welfare or designing interactive systems, requires intensive research and generally relies on a categorization system that brings methods into a systematic order. These taxonomy systems strongly shape later method usage and steer the thinking of researchers and practitioners alike in a specific direction. Making participation visible in such taxonomy systems therefore directly affects later method usage and renders participation more visible and easier to adopt. This article presents the Method Radar, a visual categorization of methods that foregrounds participation using the ladder of participation, making participation visible right at the beginning of any method selection. The Method Radar builds on the radar representation established in the technology sector, which allows a multi-dimensional classification. In addition, an implementation and a systematic process for categorizing methods are presented. The approach can be applied to any form of method categorization in which participation should be considered.
{"title":"The Method Radar: a way to organize methods for technology development with participation in mind","authors":"Marc Schmidt, Stefan Bente, Bruno Baruque Zanón, Ana María Lara Palma","doi":"10.1515/icom-2023-0012","DOIUrl":"https://doi.org/10.1515/icom-2023-0012","url":null,"abstract":"Abstract Identifying appropriate methods for any process, such as uncovering needs of juveniles in social welfare or designing interactive systems, requires intensive research and generally using a categorization system that brings methods in a systematic order. These taxonomy systems are heavily responsible for the later method usage and start the thinking process for researchers and practitioners alike in a specific direction. So making participation visible in such taxonomy systems directly affects the later method usage and makes participation more visible and easier to use. This article presents the Method Radar, a visualized categorization of methods with a focus on participation using the ladder of participation, that makes participation visible right at the beginning of any method selection. The Method Radar builds on the radar representation established in the technology sector, which allows a multi-dimensional classification. In addition, an implementation and systematic process for categorizing these methods are presented. It can be used for any form of method categorization in which participation is supposed to be thought of.","PeriodicalId":37105,"journal":{"name":"i-com","volume":"11 1","pages":"253 - 268"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138621340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"i-com directory and index listings","authors":"Michael Koch","doi":"10.1515/icom-2023-0034","DOIUrl":"https://doi.org/10.1515/icom-2023-0034","url":null,"abstract":"","PeriodicalId":37105,"journal":{"name":"i-com","volume":"38 1","pages":"173 - 174"},"PeriodicalIF":0.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139198592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Read for me: developing a mobile based application for both visually impaired and illiterate users to tackle reading challenge
Zainab Hameed Alfayez, Batool Hameed Alfayez, Nahla Hamad Abdul-Samad
Abstract In recent years, there have been several attempts to help visually impaired and illiterate people overcome their reading limitations through the development of different applications. However, most of these applications are based on physical-button interaction and avoid the use of touchscreen devices. This research mainly aims to find a solution that helps both visually impaired and illiterate people read texts present in their surroundings through a touchscreen-based application. The study also attempts to discover whether a single application could be used by both types of users, and whether they would use it with the same efficiency. Therefore, a requirements elicitation study was conducted to identify the users’ requirements and preferences and thus build an interactive interface for both visually impaired and illiterate users. The study resulted in several design considerations, such as using voice instructions, focusing on verbal feedback, and eliminating buttons. Then, the reader mobile application was designed and built based on these design preferences. Finally, an evaluation study was conducted to measure the usability of the developed application. The results revealed that both visually impaired and illiterate users could benefit from the same mobile application, as they were satisfied with using it and found it efficient and effective. However, the measures from the evaluation sessions also showed that illiterate users used the developed app more efficiently and effectively. Moreover, they were more satisfied, especially with the application’s ease of use.
{"title":"Read for me: developing a mobile based application for both visually impaired and illiterate users to tackle reading challenge","authors":"Zainab Hameed Alfayez, Batool Hameed Alfayez, Nahla Hamad Abdul-Samad","doi":"10.1515/icom-2023-0031","DOIUrl":"https://doi.org/10.1515/icom-2023-0031","url":null,"abstract":"Abstract In recent years, there have been several attempts to help visually impaired and illiterate people to overcome their reading limitations through developing different applications. However, most of these applications are based on physical button interaction and avoid the use of touchscreen devices. This research mainly aims to find a solution that helps both visually impaired and illiterate people to read texts present in their surroundings through a touchscreen-based application. The study also attempts to discover the possibility of building one application that could be used by both type of users and find out whether they would use it in the same efficiency. Therefore, a requirements elicitation study was conducted to identify the users’ requirements and their preferences and so build an interactive interface for both visually impaired and illiterate users. The study resulted in several design considerations, such as using voice instructions, focusing on verbal feedback, and eliminating buttons. Then, the reader mobile application was designed and built based on these design preferences. Finally, an evaluation study was conducted to measure the usability of the developed application. The results revealed that both sight impaired and illiterate users could benefit from the same mobile application, as they were satisfied with using it and found it efficient and effective. However, the measures from the evaluation sessions also reported that illiterate users had used the develop app more efficiently and effectively. Moreover, they were more satisfied, especially with the application’s ease of use.","PeriodicalId":37105,"journal":{"name":"i-com","volume":"7 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136229440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test automation for augmented reality applications: a development process model and case study
Sascha Minor, Vix Kemanji Ketoma, Gerrit Meixner
Abstract Poor software quality results in avoidable costs of trillions of dollars annually in the United States alone. Augmented Reality (AR) applications are a relatively new software category. Currently, there are no standards to guide the development process, and testing is predominantly ad hoc and manual. Consequently, design guidelines and software test automation techniques are intended to remedy the situation. Here, we present a concept for test automation of AR applications. The concept consists of two parts: design guidelines and a process model for testing AR applications, and a case study with a prototype application for test automation. The design guidelines and the process model are based on the state of the art. The prototype application presented in this article demonstrates test automation for a multi-platform AR application on Android devices as well as the HoloLens 2. The presented test automation case study is designed to cover a large part of the application’s functions, such as the different interaction variants. This research shows that, by using the proposed process model and test automation techniques, testing of some features of AR applications can be automated. The results can serve as a basis for future research and as a contribution towards AR application development standardization efforts.
{"title":"Test automation for augmented reality applications: a development process model and case study","authors":"Sascha Minor, Vix Kemanji Ketoma, Gerrit Meixner","doi":"10.1515/icom-2023-0029","DOIUrl":"https://doi.org/10.1515/icom-2023-0029","url":null,"abstract":"Abstract Poor software quality results in avoidable costs of trillion dollars annually in the United States alone. Augmented Reality (AR) applications are a relatively new software category. Currently there are no standards to guide the development process and testing is predominantly ad hoc and manual. Consequently, design guidelines and software test automation techniques are intended to remedy the situation. Here, we present a concept for test automation of AR applications. The concept consists of two parts: design guidelines and process model for testing AR applications, and a case study with a prototype application for test automation. The design guidelines and the process model are based on the state-of-the-art. The prototype application presented in this article demonstrates test automation for a multi-platform AR application for Android devices as well as the HoloLens 2. The presented test automation case study is designed to cover a large part of the functions, such as the different interaction variants. This research work shows that by using the proposed process model and test automation techniques, testing of some features of AR applications can be automated. The results of this research can serve as a basis for future research and contribution towards AR application development standardization efforts.","PeriodicalId":37105,"journal":{"name":"i-com","volume":" 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135242720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AnswerTruthDetector: a combined cognitive load approach for separating truthful from deceptive answers in computer-administered questionnaires
Moritz Maleck, Tom Gross
Abstract In human-computer interaction, much empirical research exists, and online questionnaires play an increasingly important role. Here, the quality of the results depends strongly on the quality of the given answers, and it is essential to distinguish truthful from deceptive answers. Elegant single-modality approaches to deception detection exist in the literature, such as mouse tracking and eye tracking (in this paper, measuring the pupil diameter). Yet, no combination of these two modalities is available. This paper presents a combined approach of two cognitive-load-based lie detection approaches. We address study administrators who conduct questionnaires in HCI and want to improve the validity of their questionnaires.
{"title":"AnswerTruthDetector: a combined cognitive load approach for separating truthful from deceptive answers in computer-administered questionnaires","authors":"Moritz Maleck, Tom Gross","doi":"10.1515/icom-2023-0023","DOIUrl":"https://doi.org/10.1515/icom-2023-0023","url":null,"abstract":"Abstract In human-computer interaction, much empirical research exists. Online questionnaires increasingly play an important role. Here the quality of the results depend strongly on the quality of the given answers, and it is essential to distinguish truthful from deceptive answers. There exist elegant single modalities for deception detection in the literature, such as mouse tracking and eye tracking (in this paper, respectively, measuring the pupil diameter). Yet, no combination of these two modalities is available. This paper presents a combined approach of two cognitive-load-based lie detection approaches. We address study administrators who conduct questionnaires in the HCI, wanting to improve the validity of questionnaires.","PeriodicalId":37105,"journal":{"name":"i-com","volume":" 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135291079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User-centered design in mobile human-robot cooperation: consideration of usability and situation awareness in GUI design for mobile robots at assembly workplaces
Christian Colceriu, Sabine Theis, Sigrid Brell-Cokcan, Verena Nitsch
Abstract Mobile cobots can increase the potential for assembly work in industry. For human-friendly automation of cooperative assembly work, user-centered interfaces are necessary. The design process for user interfaces in mobile human-robot cooperation (HRC) shows large research gaps. In this article, an exemplary approach to designing a graphical user interface (GUI) for mobile HRC at assembly workplaces is shown. The design is based on a wireframe developed to support situation awareness. An interactive mockup is designed and evaluated in two iterations. In the first iteration, a user analysis is carried out using a quantitative survey with n = 31 participants to identify preferred input modalities and a qualitative survey with n = 11 participants that addresses touch interfaces. The interactive mockup is developed by implementing design recommendations of the usability standards ISO 9241-110, -112, and -13. A heuristic evaluation is conducted with n = 5 usability experts, and situation awareness is measured with n = 30 end users. In the second iteration, findings from the preceding iteration are implemented in the GUI, and a usability test with n = 20 end users is conducted. The process demonstrates a combination of methods that leads to high usability and situation awareness in mobile HRC.
{"title":"User-centered design in mobile human-robot cooperation: consideration of usability and situation awareness in GUI design for mobile robots at assembly workplaces","authors":"Christian Colceriu, Sabine Theis, Sigrid Brell-Cokcan, Verena Nitsch","doi":"10.1515/icom-2023-0016","DOIUrl":"https://doi.org/10.1515/icom-2023-0016","url":null,"abstract":"Abstract Mobile cobots can increase the potential for assembly work in industry. For human-friendly automation of cooperative assembly work, user-centered interfaces are necessary. The design process regarding user interfaces for mobile human-robot cooperation (HRC) shows large research gaps. In this article an exemplary approach is shown to design a graphical user interface (GUI) for mobile HRC at assembly workplaces. The design is based on a wireframe developed to support situation awareness. An interactive mockup is designed and evaluated. This is done in two iterations. In the first iteration, a user analysis is carried out using a quantitative survey with n = 31 participants to identify preferred input modalities and a qualitative survey with n = 11 participants that addresses touch interfaces. The interactive mockup is developed by implementing design recommendations of the usability standards ISO 9241 – 110, 112 and 13. A heuristic evaluation is conducted with n = 5 usability experts and the measurement of situation awareness with n = 30 end users. In the second iteration, findings from the preceding iteration are implemented in the GUI and a usability test with n = 20 end users is conducted. The process demonstrates a combination of methods that leads to high usability and situation awareness in mobile HRC.","PeriodicalId":37105,"journal":{"name":"i-com","volume":"42 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135863505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introducing VR personas: an immersive and easy-to-use tool for understanding users
Philipp Maruhn, Lorenz Prasch, Florian Gerhardinger, Sophia Häfner
Abstract Personas have been established as an indispensable tool in software and product development. They continuously accompany the development process and seek to build empathy and establish an emotional relationship between developers and users. However, this does not always succeed, with the reasons for failure often lying in the personas themselves. If they do not have a sufficient level of detail or do not reflect everyday people, they lose credibility and therefore their purpose of representing the user. Poor communication of personas is another reason they are quickly forgotten. In this paper, we present a new approach to experiencing personas beyond traditional means such as posters or slides. With the help of virtual reality, we create immersive, three-dimensional personas that can be visited in their own living room. The basis of the implementation is a comprehensive dataset, containing aggregated data from over 8000 detailed face-to-face interviews. We base the layout of the apartment, the furniture, and the characters themselves on the archetypal characteristics of their corresponding user group. In the future, we plan to validate whether this approach can be successful in creating a deeper and more sustainable connection between personas and developers and designers.
{"title":"Introducing VR personas: an immersive and easy-to-use tool for understanding users","authors":"Philipp Maruhn, Lorenz Prasch, Florian Gerhardinger, Sophia Häfner","doi":"10.1515/icom-2023-0028","DOIUrl":"https://doi.org/10.1515/icom-2023-0028","url":null,"abstract":"Abstract Personas have been established as an indispensable tool in software and product development. They continuously accompany the development process and seek to build empathy and establish an emotional relationship between developers and users. However, this does not always succeed, with the reasons for failure often lying in the personas themselves. If they do not have a sufficient level of detail or do not reflect everyday people, they lose credibility and therefore their purpose of representing the user. Poor communication of personas is another reason they are quickly forgotten. In this paper, we present a new approach to experiencing personas beyond traditional means such as posters or slides. With the help of virtual reality, we create immersive, three-dimensional personas that can be visited in their own living room. The basis of the implementation is a comprehensive dataset, containing aggregated data from over 8000 detailed face-to-face interviews. We base the layout of the apartment, the furniture, and the characters themselves on the archetypal characteristics of their corresponding user group. In the future, we plan to validate whether this approach can be successful in creating a deeper and more sustainable connection between personas and developers and designers.","PeriodicalId":37105,"journal":{"name":"i-com","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136262120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DUX: a dataset of user interactions and user emotions
Dominick Leppich, Carina Bieber, Katrin Proschek, Patrick Harms, Ulf Schubert
Abstract User experience evaluation is becoming increasingly important, and so is emotion recognition. Recognizing users’ emotions from their interactions alone would be non-intrusive and could be implemented in many applications. This is still an area of active research and requires data containing both the user interactions and the corresponding emotions. Currently, there is no public dataset for emotion recognition from keystroke, mouse, and touchscreen dynamics. We have created such a dataset for keyboard and mouse interactions through a dedicated user study and made it publicly available for other researchers. This paper describes our study design and the process of creating the dataset. We conducted the study using a test application for travel expense reports with 50 participants. Since we primarily want to detect negative emotions, we added emotional triggers to our test application. However, further research is needed to determine the relationship between user interactions and emotions.
{"title":"DUX: a dataset of user interactions and user emotions","authors":"Dominick Leppich, Carina Bieber, Katrin Proschek, Patrick Harms, Ulf Schubert","doi":"10.1515/icom-2023-0014","DOIUrl":"https://doi.org/10.1515/icom-2023-0014","url":null,"abstract":"Abstract User experience evaluation is becoming increasingly important, and so is emotion recognition. Recognizing users’ emotions based on their interactions alone would not be intrusive to users and can be implemented in many applications. This is still an area of active research and requires data containing both the user interactions and the corresponding emotions. Currently, there is no public dataset for emotion recognition from keystroke, mouse and touchscreen dynamics. We have created such a dataset for keyboard and mouse interactions through a dedicated user study and made it publicly available for other researchers. This paper examines our study design and the process of creating the dataset. We conducted the study using a test application for travel expense reports with 50 participants. We want to be able to detect predominantly negative emotions, so we added emotional triggers to our test application. However, further research is needed to determine the relationship between user interactions and emotions.","PeriodicalId":37105,"journal":{"name":"i-com","volume":"77 1","pages":"101 - 123"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88559437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The thing that made me think
Madlen Kneile, Till Jürgens, Lara Christoforakos, Matthias Laschke
Abstract Given the threatening consequences of anthropogenic climate change, it is vital to modify energy-intensive daily routines to minimize individual energy consumption. However, changing daily routines is challenging due to their abstract, future-oriented nature and the comfort they provide. Interactive technologies can play a crucial role in facilitating this process. Instead of relying on rhetorical persuasion through information and appeals, we propose two design approaches within the research agenda of the MOVEN research group: (1) employing friction to disrupt routines, and (2) advocating for the interests of natural entities using counterpart technologies. Regarding the disruption of routines, we explore the use of humor as a design element to dampen the resulting resistance (i.e., psychological reactance). Moreover, we reflect on the opportunities of counterpart technologies as a new interaction paradigm in the context of sustainability. Finally, we discuss the potentials and limitations of individual behavior change for a holistic, sustainable transformation.
{"title":"The thing that made me think","authors":"Madlen Kneile, Till Jürgens, Lara Christoforakos, Matthias Laschke","doi":"10.1515/icom-2023-0019","DOIUrl":"https://doi.org/10.1515/icom-2023-0019","url":null,"abstract":"Abstract Given the threatening consequences of anthropogenic climate change, it is vital to modify energy-intensive daily routines to minimize individual energy consumption. However, changing daily routines is challenging due to their abstract, future-oriented nature and the comfort they provide. Interactive technologies can play a crucial role in facilitating this process. Instead of relying on rhetorical persuasion through information and appeals, we propose two design approaches within the research agenda of the MOVEN research group: (1) employing friction to disrupt routines, and (2) advocating for the interests of natural entities using counterpart technologies. Regarding the disruption of routines, we explore the use of humor as a design element to dampen the resulting resistance (i.e., psychological reactance). Moreover, we reflect on the opportunities of counterpart technologies as a new interaction paradigm in the context of sustainability. Finally, we discuss the potentials and limitations of individual behavior change for a holistic, sustainable transformation.","PeriodicalId":37105,"journal":{"name":"i-com","volume":"21 1","pages":"161 - 171"},"PeriodicalIF":0.0,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85518107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}