In many group activities, students need to individually search for material available on the web. Although many of the results obtained by each member are repeated, every result must be analyzed to determine its usefulness, which entails considerable time and effort. To solve this problem, JUNE, an agent-based metasearch engine that enables collaborative web search for groups of students, was developed. The application indicates which member evaluated each result, and it also allows assigning a personal assessment and attaching a comment. The results produced by the individual searches are sorted by the metasearcher using a ranking algorithm that considers a group valuation obtained by averaging the individual valuations. JUNE is currently being used by groups of university students.
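The ranking rule is only outlined in the abstract; the following minimal Python sketch (not JUNE's actual code; class and field names are hypothetical) illustrates the idea of ordering the merged results by the average of the members' individual valuations.

```python
# Hypothetical sketch of the ranking rule described above: each result keeps the
# personal valuations assigned by group members, the group valuation is their
# average, and results are sorted by that average.
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List


@dataclass
class SearchResult:
    url: str
    title: str
    # member name -> personal assessment (e.g. 1-5); names are illustrative
    valuations: Dict[str, float] = field(default_factory=dict)
    comments: Dict[str, str] = field(default_factory=dict)

    def group_valuation(self) -> float:
        # Results nobody has assessed yet fall to the bottom of the ranking.
        return mean(self.valuations.values()) if self.valuations else 0.0


def rank_results(results: List[SearchResult]) -> List[SearchResult]:
    """Order the merged result list by the averaged group valuation."""
    return sorted(results, key=lambda r: r.group_valuation(), reverse=True)


if __name__ == "__main__":
    results = [
        SearchResult("http://a.example", "Resource A", {"ana": 4, "luis": 5}),
        SearchResult("http://b.example", "Resource B", {"ana": 2}),
        SearchResult("http://c.example", "Resource C"),
    ]
    for r in rank_results(results):
        print(f"{r.group_valuation():.1f}  {r.title}")
```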
{"title":"June: an agent-based metasearch engine for collaborative student groups","authors":"Carlos Francisco Pérez-Crespo, Maria Martha Pérez-Crespo, R. Costaguta","doi":"10.1145/3123818.3123846","DOIUrl":"https://doi.org/10.1145/3123818.3123846","url":null,"abstract":"On many groups activities students need to individually search material available on the web. Although many of the results obtained by each of them are repeated, each result must be analyzed to determine its usefulness, and this entails considerable time and effort. To solve this problem JUNE, an agent-based metasearch engine that enables collaborative web search for groups of students, was developed. The application indicates which member evaluated each result, and it also allows assigning a personal assessment and associating a comment. The results produced by the individual searches are sorted by the metasearcher using a ranking algorithm that considers a group valuation obtained by averaging the individual valuations. Currently, JUNE is being used by groups of university students.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126917346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the development of a new tangible tabletop specially designed for interactive audiovisual and musical control. It explores a new interactive space, based on 3D active tangible interaction and the user's gestures, that allows the user to extend the control of musical events beyond the tabletop surface. It includes a vertical see-through screen for projecting feedback on the user's movements and the visualization of the musical show. The tabletop interface and the ways of interacting are fully configurable and customizable. Three different artistic and musical performances are presented to show the ability of Immertable to adapt to different ways of interaction, music creation and visual interface requirements.
{"title":"Immertable: a configurable and customizable tangible tabletop for audiovisual and musical control","authors":"S. Baldassarri, E. Cerezo, J. R. Beltrán","doi":"10.1145/3123818.3123842","DOIUrl":"https://doi.org/10.1145/3123818.3123842","url":null,"abstract":"This paper presents the development of a new tangible tabletop specially designed for interactive audiovisual and musical control. It explores a new interactive space based on 3D active tangible interaction and user's gestures that allows the user to extend the control of the musical events beyond the tabletop surface. It includes a vertical see-through screen for projecting the feedback of the user's movements and the visualization of the musical show. The tabletop interface and the ways for interaction are totally configurable and customizable. Three different artistic and musical performances are presented to show the ability of Immertable to be adapted for different ways of interaction, music creation and visual interface requirement1.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122127926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing an interface for ATM systems is a complex process. Among the issues of this process are designs without a focus on the end user, an undefined process, and time lost in meetings with new stakeholders. Because of this, one of the leading banks in Peru, BBVA Continental, in the context of its digital transformation, requested new methods and tools for the design of its ATM interfaces. We therefore proposed a new methodology, which involves a set of activities and the use of two user-centered design techniques: storyboarding and video prototyping. This proposal was employed to implement new features of the system and to improve the current workflow and navigation. In both cases, we obtained promising results for the financial entity: a better time-to-market and greater stakeholder satisfaction.
{"title":"Applying a user-centered design methodology to develop usable interfaces for an Automated Teller Machine","authors":"Arturo Moquillaza, Freddy Paz","doi":"10.1145/3123818.3123833","DOIUrl":"https://doi.org/10.1145/3123818.3123833","url":null,"abstract":"Developing an interface for ATM systems is a complex process. Among the issues of this process, are designs without a focus on the final user, a non-defined process, and missed time in meetings with new stakeholders every time. Because of this, one of the leading banks in Peru, BBVA Continental, in the context of its digital transformation, requested new methods and tools for the design of its ATM interfaces. In this sense, we proposed a new methodology, which involves a set of activities and the use of two techniques of user-centered design: storyboarding and video prototyping. This new proposal was employed to implement new features of the system and for the improvement of the current workflow and navigation. In both cases, we obtained promising results for the financial entity: a better time-to-market and a better satisfaction of the stakeholders.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131566887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic generation of dialogues is a very important component of the human-robot interaction task; the dialogues generated must guarantee a coherent conversation between human beings and robots. The aim is for the interaction to be as natural and effective as possible, considering aspects such as age, gender, socio-cultural level, and socio-economic level. This research report presents an overview of a doctoral research project intended to be carried out over the following three years. We motivate the need for research on automatic generation and evaluation of dialogues in the framework of human-robot interaction, presenting the related work reported in the literature, the research objectives, and the methodology we intend to develop. In general, we propose to employ machine learning techniques in a restricted knowledge domain to generate human-robot dialogues.
{"title":"A computational model for automatic generation of domain-specific dialogues using machine learning","authors":"Andrés Vázquez, David Pinto, D. V. Ayala","doi":"10.1145/3123818.3123860","DOIUrl":"https://doi.org/10.1145/3123818.3123860","url":null,"abstract":"Automatic generation of dialogues is a very important component for the human-robot interaction task ; the dialogues generated must guarantee a coherent conversation between human beings and robots. The aim is that the interaction is as natural and effective as possible, considering aspects such as: age, gender, socio-cultural level, socio-economic level, and so on. This research report presents an overview of a doctoral research work that is intended to be executed during the following three years. We motivate the necessity of researching in automatic generation and evaluation of dialogues in the framework of human-robot interaction, presenting the related work reported in literature, the research objectives and the methodology we are interested in develop. In general, we propose to employ machine learning techniques in a restricted domain of knowledge for generating the human-robot dialogues.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130028933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since the creation of virtual reality, many interaction techniques have been proposed for navigating virtual worlds. Some of them involve the use of body gestures and voice commands, while others rely on other interactive mechanisms such as mouse and keyboard. Since the appearance of video games with body interaction, it has caught our attention how complex it is to navigate in some of them. When voice commands are not available, users must rely only on their body or a controller to navigate, and we observed a lot of frustration when navigation relies on body gestures alone; natural interaction, it seems, is not so natural. In this paper we examine a user-defined body gesture language for navigating virtual worlds. We use the Wizard of Oz technique to collect the relevant data, compare performance with traditional desktop-based interaction, and analyze the results. As a result, we propose a body gesture language for navigating virtual worlds.
{"title":"Studying user-defined body gestures for navigating interactive maps","authors":"Josefina Guerrero García, Claudia González, David Pinto","doi":"10.1145/3123818.3123851","DOIUrl":"https://doi.org/10.1145/3123818.3123851","url":null,"abstract":"Since the creation of virtual reality many interaction techniques have been proposed for navigating virtual worlds. Some of them involve the use of body gestures and voice commands, while some others rely on some other interactive mechanisms such as mouse and keyboard. Since the appearance of videogames with body interaction it caught our attention how complex is to navigate in some videogames. As the use of voice commands is absent you just rely of your body or control to navigate. We observed a lot of frustration when you rely just on body gestures. So, natural interaction seems not being so natural. In this paper we examine a user defined body gesture language to navigate virtual worlds. We use the wizard of Oz technique to collect the data related and compare performance with traditional desktop based interaction and analyze the results. As a result we propose a body gesture language to navigate virtual worlds.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114715903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel wearable systems allow the measurement of very complex physiological phenomena, extending their capabilities while maintaining their non-invasiveness. A good example of this is the use of surface electrodes for recording electromyography signals (surface electromyography, sEMG), which can reveal information regarding muscle force and fatigue. Aiming to assess the accuracy of a commercial-grade wearable sEMG system, the Myo Armband, for fatigue measurement, we carried out a comparative study. Three subjects were tested under a standard protocol for fatigue detection using two different sensors, a ground-truth Base sEMG sensor and the commercial Myo wristband, both placed on the biceps brachii. Time and frequency domain parameters were compared using an ANOVA test and a correlation analysis. Results showed a median correlation between the Base sensor and the Myo Armband signals of between 0.4 and 0.6 for the three subjects, revealing significant differences (p < 0.05) in all three cases. The biomarkers of the sEMG signal from both sensors were consistent with research found in the literature. Novel wearable sensors can be used in medical scenarios where high accuracy is not a requirement; instead, their non-invasiveness can provide ubiquity for rehabilitation treatments as well as continuous signal recording and data logging.
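As a rough illustration of the kind of analysis described (not the authors' code; the sampling rates, window length, and the choice of RMS and median frequency are assumptions based on common sEMG practice), the sketch below computes per-window time and frequency domain parameters and correlates the series obtained from the two sensors.

```python
# Hedged sketch: per-window RMS (time domain) and median frequency (frequency
# domain) of an sEMG signal, plus a correlation between two sensors' series.
# Sampling rates and window sizes are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr


def semg_parameters(signal: np.ndarray, fs: float, win_s: float = 1.0):
    """Return per-window RMS and median frequency for an sEMG recording."""
    win = int(win_s * fs)
    rms, mdf = [], []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        rms.append(np.sqrt(np.mean(seg ** 2)))
        freqs, psd = welch(seg, fs=fs, nperseg=min(256, win))
        cum = np.cumsum(psd)
        # Median frequency splits the power spectrum into two equal-energy
        # halves; its decline over time is a common fatigue indicator.
        mdf.append(freqs[np.searchsorted(cum, cum[-1] / 2.0)])
    return np.array(rms), np.array(mdf)


# Example: correlate the median-frequency series of a reference sensor and the
# Myo Armband (synthetic data here, only to show the intended comparison).
fs_ref, fs_myo = 1000.0, 200.0  # assumed sampling rates
ref = np.random.randn(30 * int(fs_ref))
myo = np.random.randn(30 * int(fs_myo))
_, mdf_ref = semg_parameters(ref, fs_ref)
_, mdf_myo = semg_parameters(myo, fs_myo)
r, p = pearsonr(mdf_ref, mdf_myo)
print(f"correlation r={r:.2f}, p={p:.3f}")
```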
{"title":"Muscle fatigue detection through wearable sensors: a comparative study using the myo armband","authors":"Maria Fernanda Montoya Vega, Ó. Henao, J. Muñoz","doi":"10.1145/3123818.3123855","DOIUrl":"https://doi.org/10.1145/3123818.3123855","url":null,"abstract":"Novel wearable systems allow the measure of very complex physiological phenomena extending their capabilities and maintaining their non-invasiveness. A good example of this is the use of superficial electrodes for recording electromyography signals (also called superficial electromyography- sEMG) which can reveal information regarding muscle force and fatigue. Aiming at demonstrate the accuracy of a commercial grade wearable system for sEMG, the Myo Armband for fatigue measurement, we carried out a comparative study. 3 subjects were used under a standard protocol for fatigue detection using two different sensors: a Base ground-truth sEMG sensor, and the commercial wristband Myo, both connected in the biceps brachii. Time and frequency domain parameters were compared using an ANOVA test and a correlation analysis. Results showed a median correlation for the three subjects between 0.4 and 0.6 between the Base Sensor and the Myo Armband signals exposing significant differences p <0.05 for all three cases. The biomarkers of the sEMG signal of both sensors were consistent research found in the literature. Novel wearables sensors can be used in medical scenarios where high accuracy is not a requirement, instead, non-invasiveness can provide ubiquity for rehabilitation treatments as well as a continuous signal recording and data logging processes.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134206772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Some studies suggest that women learn differently than men [1]. Some teaching mechanisms work more effectively with men and others with women, and this has nothing to do with our brain biology: biologically, our brains are exactly the same [2]. The difference lies in the strategies used to teach us; culturally we have been educated differently, we used different toys in our childhood, and we are treated differently.
{"title":"Characterization of collaborative practices with a gender focus in programming courses: case study - university of San Buenaventura","authors":"Beatriz Eugenia Grass, Mayela Coto Chotto","doi":"10.1145/3123818.3123871","DOIUrl":"https://doi.org/10.1145/3123818.3123871","url":null,"abstract":"Some studies suggest that women learn differently than men [1]. Some teaching mechanisms work more effectively with men and others with women, and this has nothing to do with our brain biology. Biologically our brains are exactly the same [2]. The difference is based on the strategies used to teach us; culturally we have been educated differently, we used different toys in our childhood and we are treated differently.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133416256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the work conducted as part of a doctoral thesis. Its objective is to establish and describe the concept of Personal Space for Multisensory Stimulation (PS4MS) in the field of rehabilitation for patients with cognitive disabilities. It aims to define a set of design guidelines to be used by developers seeking to provide technological support in this type of space. We detail the activities of the methodology proposed to conduct the work and present the main findings obtained so far.
{"title":"Personal spaces for multisensory stimulation as support to rehabilitate patients with cognitive disabilities","authors":"Raúl Casillas Figueroa","doi":"10.1145/3123818.3123841","DOIUrl":"https://doi.org/10.1145/3123818.3123841","url":null,"abstract":"This paper1 presents the work conducted as part of a doctoral thesis. Its objective is to establish and describe the concept of Personal Space for Multisensory Stimulation (PS4MS) in the field of rehabilitation for patients with cognitive disabilities. It aims at defining a set of design guidelines to be used by developers looking at providing technological support in this type of spaces. We detail the activities of the proposed methodology to conduct the work and present the main findings obtained so far.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127295334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The way a team interacts is determinant for increasing a software organization's capability, so recognizing the best interaction mechanisms is a key strategy for improving the performance of software development teams. Given the growing number of available SPEM 2.0 process models, we propose using this kind of model through a tool, coAVISPA, to identify collaboration aspects that can be incorporated as improvement strategies for software development teams. coAVISPA follows a visual strategy and applies collaboration patterns to task and role blueprints. A case study indicates that coAVISPA is an effective analysis tool.
{"title":"Integrating collaboration engineering with software process modeling: a visual approach","authors":"César Restrepo, L. Jiménez, J. Hurtado","doi":"10.1145/3123818.3123866","DOIUrl":"https://doi.org/10.1145/3123818.3123866","url":null,"abstract":"The way as a team interact is determinant for increasing the software organization capability. So, recognize the best interaction mechanisms is a key strategy in order to improve performance in software development teams. Since a growing number of available SPEM 2.0 process models, we propose use this kind of the models as a tool, coAVISPA, in order to identify collaboration aspects to be potentially incorporated as improvement strategies for software development teams. coAVISPA follows a visual strategy and collaboration patterns applied to tasks and roles blueprints. A case study indicates that coAVISPA is an effective analysis tool.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"412 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122790045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we propose an interface to implement the Global method in two learning tools (a web page and a mobile application). For this, it is necessary to select the characteristics to be implemented by analyzing the work done on different reading methods tested with real people and used in a traditional way. The implementation will then be tested with a group of children between 5 and 6 years of age to feed back into the interface design.
{"title":"Development of a reading-writing tools focused on speed reading for preschool children","authors":"Laura Patricia Ramirez Rivera","doi":"10.1145/3123818.3123819","DOIUrl":"https://doi.org/10.1145/3123818.3123819","url":null,"abstract":"In this work, we propose the interface to implement the Global method into two learning tools (web page and mobile application). For that it is necessary to select the characteristics to be implemented, analyzing the work done on the different methods of reading tested in real people and used in a traditional way. The implementation will then be tested with a set of children between 5 and 6 years of age to feed back the interface design.","PeriodicalId":341198,"journal":{"name":"Proceedings of the XVIII International Conference on Human Computer Interaction","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123988012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}