Virtual environments will deeply alter the way we conduct scientific studies on human behavior. Possible applications range from spatial navigation, through addressing moral dilemmas in a more naturalistic manner, to therapeutic applications for affective disorders. The decisive factor for this broad range of applications is that virtual reality (VR) combines a well-controlled experimental environment with the ecological validity of immersing test subjects. Until now, however, programming such an environment in Unity® has required profound knowledge of C# programming, 3D design, and computer graphics. To give interested research groups access to a realistic VR environment that can easily be adapted to the varying needs of experiments, we developed a large, open-source, scriptable, and modular VR city. It covers an area of 230 hectares and contains up to 150 self-driving vehicles, 655 active and passive pedestrians, and thousands of nature assets, making it both highly dynamic and realistic. Furthermore, the repository presented here contains a stand-alone City AI toolkit for creating avatars and customizing cars. Finally, the package contains code to easily set up VR studies. All main functions are integrated into the graphical user interface of the Unity® Editor to ease the use of the embedded functionalities. In summary, the project, named Westdrive, was developed to give research groups access to a state-of-the-art VR environment that is easily adapted to specific needs and allows them to focus on their respective research questions.
{"title":"Project Westdrive: Unity City With Self-Driving Cars and Pedestrians for Virtual Reality Studies","authors":"F. Nezami, M. A. Wächter, G. Pipa, P. König","doi":"10.3389/fict.2020.00001","DOIUrl":"https://doi.org/10.3389/fict.2020.00001","url":null,"abstract":"Virtual environments will deeply alter the way we conduct scientific studies on human behavior. Possible applications range from spatial navigation over addressing moral dilemmas in a more natural manner to therapeutic applications for affective disorders. The decisive factor for this broad range of applications is that virtual reality (VR) is able to combine a well-controlled experimental environment together with the ecological validity of the immersion of test subjects. Until now, however, programming such an environment in Unity® requires profound knowledge of C# programming, 3D design, and computer graphics. In order to give interested research groups access to a realistic VR environment which can easily adapt to the varying needs of experiments, we developed a large, open source, scriptable, and modular VR city. It covers an area of 230 hectare, up to 150 self-driving vehicles and 655 active and passive pedestrians and thousands of nature assets to make it both highly dynamic and realistic. Furthermore, the repository presented here contains a stand-alone City AI toolkit for creating avatars and customizing cars. Finally, the package contains code to easily set up VR studies. All main functions are integrated into the graphical user interface of the Unity® Editor to ease the use of the embedded functionalities. In summary, the project named Westdrive is developed to enable research groups to access a state-of-the-art VR environment that is easily adapted to specific needs and allows focus on the respective research question.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3389/fict.2020.00001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45009959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a novel vibrotactile rendering algorithm for producing real-time tactile interactions suitable for virtual reality applications. The algorithm uses an energy model to produce smooth tactile sensations by continuously recalculating the location of a phantom actuator that represents a virtual touch point. It also employs syncopations in its rendered amplitude to produce artificial perceptual anchors that make the rendered vibrotactile patterns more recognizable. We conducted two studies to compare this Syncopated Energy algorithm to a standard real-time Grid Region algorithm for rendering touch patterns at different vibration amplitudes and frequencies. We found that the Grid Region algorithm afforded better recognition, but that the Syncopated Energy algorithm was perceived to produce smoother patterns at higher amplitudes. Additionally, we found that higher amplitudes afforded better recognition while a moderate amplitude yielded more perceived continuity. We also found that a higher frequency resulted in better recognition for fine-grained tactile sensations and that frequency can affect perceived continuity.
{"title":"The Syncopated Energy Algorithm for Rendering Real-Time Tactile Interactions","authors":"Fei Tang, Ryan P. McMahan","doi":"10.3389/fict.2019.00019","DOIUrl":"https://doi.org/10.3389/fict.2019.00019","url":null,"abstract":"In this paper, we present a novel vibrotactile rendering algorithm for producing real-time tactile interactions suitable for virtual reality applications. The algorithm uses an energy model to produce smooth tactile sensations by continuously recalculating the location of a phantom actuator that represents a virtual touch point. It also employs syncopations in its rendered amplitude to produce artificial perceptual anchors that make the rendered vibrotactile patterns more recognizable. We conducted two studies to compare this Syncopated Energy algorithm to a standard real-time Grid Region algorithm for rendering touch patterns at different vibration amplitudes and frequencies. We found that the Grid Region algorithm afforded better recognition, but that the Syncopated Energy algorithm was perceived to produce smoother patterns at higher amplitudes. Additionally, we found that higher amplitudes afforded better recognition while a moderate amplitude yielded more perceived continuity. We also found that a higher frequency resulted in better recognition for fine-grained tactile sensations and that frequency can affect perceived continuity.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"68 1","pages":"19"},"PeriodicalIF":0.0,"publicationDate":"2019-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85576034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People generally coordinate their actions to be more efficient. In some cases, however, interference between them occurs, resulting in inefficient collaboration. For example, if two volleyball players collide while receiving a serve, they can both miss the ball. The main goal of this study is to explore how two people regulate their actions when performing a cooperative ball-interception task, and how interference between them may arise. Starting face to face, twenty-four participants (twelve teams of two) had to physically intercept balls moving down from the roof to the floor of a virtual room. To this end, they controlled a virtual paddle attached to their hand, moving along the anterior-posterior axis. No communication was allowed between participants, so they had to rely on visual cues to decide whether to perform the interception or let their partner do it. Participants were immersed in a stereoscopic virtual reality setup that allowed control over the situation and the visual stimuli they perceived, such as ball trajectories and the information available about the partner's motion. Results globally showed that participants were often able to intercept balls without collision by dividing the interception space into two equivalent parts. However, an area of uncertainty (where many trials were not intercepted) appeared in the center of the scene, highlighting the presence of interference between participants. The width of this area increased when the situation became more complex (facing a real partner rather than a stationary one) and when less information was available (only the paddle and not the partner's avatar). Moreover, participants initiated their interception later when a real partner was present and often interpreted balls starting above them as balls they should intercept, even when these balls were ultimately intercepted by their partner. Overall, the results showed that team coordination here emerges from between-participant interactions and that interference between participants depends on task complexity (uncertainty about the partner's action and the visual information available).
{"title":"Dyadic Interference Leads to Area of Uncertainty During Face-to-Face Cooperative Interception Task","authors":"Charles Faure, Annabelle Limballe, A. Sorel, Théo Perrin, B. Bideau, R. Kulpa","doi":"10.3389/fict.2019.00020","DOIUrl":"https://doi.org/10.3389/fict.2019.00020","url":null,"abstract":"People generally coordinate their action to be more efficient. However, in some cases, interference between them occur, resulting in an inefficient collaboration. For example, if two volleyball players collide while performing a serve reception, they can both miss the ball. The main goal of this study is to explore the way two persons regulate their actions when performing a cooperative task of ball interception, and how interference between them may occur. Starting face to face, twenty-four participants (twelve teams of two) had to physically intercept balls moving down from the roof to the floor of a virtual room. To this end, they controlled a virtual paddle attached to their hand moving along the anterior-posterior axis. No communication was allowed between participants so they had to focus on visual cues to decide if they should perform the interception or leave the partner do it. Participants were immersed in a stereoscopic virtual reality setup that allows the control of the situation and the visual stimuli they perceived, such as ball trajectories and the information available on the partner's motion. Results globally showed participants were often able to intercept balls without collision by dividing the interception space in two equivalent parts. However, an area of uncertainty (where many trials were not intercepted) appeared in the center of the scene, highlighting the presence of interference between participants. The width of this area increased when the situation became more complex (facing a real partner and not a stationary one) and when less information was available (only the paddle and not the partner's avatar). Moreover, participants initiated their interception later when real partner was present and often interpreted balls starting above them as balls they should intercept, even when these balls were textit{in fine} intercepted by their partner. Overall, results showed that team coordination here emerges from between-participants interactions and that interference between them depends on task complexity (uncertainty on partner's action and visual information available)","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"35 1","pages":"20"},"PeriodicalIF":0.0,"publicationDate":"2019-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86686720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The eyelid contour, pupil contour, and blink events are important features of eye activity, and their estimation is a crucial research area for emerging wearable camera-based eyewear in a wide range of applications, e.g., mental state estimation. Current approaches often estimate a single type of eye activity, such as blink or pupil center, from far-field and non-infrared (IR) eye images, and often depend on knowledge of the other eye components. This paper presents a unified approach to simultaneously estimate the landmarks of the eyelids, the iris, and the pupil, and to detect blinks from near-field IR eye images based on a statistically learned deformable shape model and local appearance. Unlike in the facial landmark estimation problem, different shape models are applied to the different eye states – closed eye, open eye with iris visible, and open eye with iris and pupil visible – to deal with the self-occluding interactions among the eye components. The most likely eye state is determined based on the learned local appearance. Evaluation on three different realistic datasets demonstrates that the proposed three-state deformable shape model achieves state-of-the-art performance for the open eye with iris and pupil state, where the normalized error was lower than 0.04. Blink detection recall can be as high as 90%, without direct use of pupil detection. Cross-corpus evaluation results show that the proposed method improves on the state-of-the-art eyelid detection algorithm. This unified approach greatly facilitates eye activity analysis for research and practice when different types of eye activity are required, rather than employing a different technique for each type. This work is the first to propose a unified approach for eye activity estimation from near-field IR eye images, and it achieves state-of-the-art eyelid estimation and blink detection performance.
{"title":"Eyelid and Pupil Landmark Detection and Blink Estimation Based on Deformable Shape Models for Near-Field Infrared Video","authors":"Siyuan Chen, J. Epps","doi":"10.3389/fict.2019.00018","DOIUrl":"https://doi.org/10.3389/fict.2019.00018","url":null,"abstract":"The eyelid contour, pupil contour and blink event are important features of eye activity, and their estimation is a crucial research area for emerging wearable camera-based eyewear in a wide range of applications e.g. mental state estimation. Current approaches often estimate a single eye activity, such as blink or pupil center, from far-field and non-infrared (IR) eye images, and often depend on the knowledge of other eye components. This paper presents a unified approach to simultaneously estimate the landmarks for the eyelids, the iris and the pupil, and detect blink from near-field IR eye images based on a statistically learned deformable shape model and local appearance. Unlike the facial landmark estimation problem, by comparison, different shape models are applied to all eye states – closed eye, open eye with iris visible, and open eye with iris and pupil visible – to deal with the self-occluding interactions among the eye components. The most likely eye state is determined based on the learned local appearance. Evaluation on three different realistic datasets demonstrates that the proposed three-state deformable shape model achieves state-of-the-art performance for the open eye with iris and pupil state, where the normalized error was lower than 0.04. Blink detection can be as high as 90% in recall performance, without direct use of pupil detection. Cross-corpus evaluation results show that the proposed method improves on the state-of-the-art eyelid detection algorithm. This unified approach greatly facilitates eye activity analysis for research and practice when different types of eye activity are required rather than employ different techniques for each type. Our work is the first study proposing a unified approach for eye activity estimation from near-field IR eye images and achieved the state-of-the-art eyelid estimation and blink detection performance.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"1 1","pages":"18"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77551630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research advances in recent decades have allowed the introduction of Internet of Things (IoT) concepts into several industrial application scenarios, leading to the so-called Industry 4.0 or Industrial IoT (IIoT). Industry 4.0 has the ambition to revolutionize industry management and business processes, enhancing the productivity of manufacturing technologies through field data collection and analysis and thus creating real-time digital twins of industrial scenarios. Moreover, it is vital for companies to be as "smart" as possible and to adapt to the varying nature of digital supply chains; this is possible by leveraging IoT in Industry 4.0 scenarios. In this paper, we describe the renovation process, guided by things2i s.r.l., a cross-disciplinary engineering-economic spin-off company of the University of Parma, that a real manufacturing company is undergoing over consecutive phases spanning several years. The first phase concerns the digitalization of the quality control process on the company's production lines. Paper sheets containing different quality checks have been replaced by a digital, smart, Web-based application, which currently supports operators and quality inspectors working on the supply chain through smart devices. The second phase of the IIoT evolution, currently ongoing, concerns both the digitalization and the optimization of the production planning activity through an innovative Web-based planning tool. The changes introduced have led to significant advantages and improvements for the manufacturing company in terms of: (i) impressive cost reduction; (ii) better product quality control; (iii) real-time detection of and reaction to supply chain issues; (iv) a significant reduction of the time spent on planning; and (v) optimized resource employment, thanks to the minimization of unproductive setup times on the production lines. These two renovation phases form a basis for possible future developments, such as the integration of sensor-based data on the operational status of production machines and on currently available warehouse supplies. In conclusion, the ongoing Industry 4.0-based digitization process guided by things2i allows heterogeneous Human-to-Things (H2T) data to be collected continuously and used to optimize the partner manufacturing company as a whole.
{"title":"Toward Industry 4.0 With IoT: Optimizing Business Processes in an Evolving Manufacturing Factory","authors":"Laura Belli, Luca Davoli, Alice Medioli, Pier Luigi Marchini, G. Ferrari","doi":"10.3389/fict.2019.00017","DOIUrl":"https://doi.org/10.3389/fict.2019.00017","url":null,"abstract":"Research advances in the last decades have allowed the introduction of Internet of Things (IoT) concepts in several industrial application scenarios, leading to the so-called Industry 4.0 or Industrial IoT (IIoT). The Industry 4.0 has the ambition to revolutionize industry management and business processes, enhancing the productivity of manufacturing technologies through field data collection and analysis, thus creating real-time digital twins of industrial scenarios. Moreover, it is vital for companies to be as \"smart\" as possible and to adapt to the varying nature of the digital supply chains. This is possible by leveraging IoT in Industry 4.0 scenarios. In this paper, we describe the renovation process, guided by things2i s.r.l., a cross-disciplinary engineering-economic spin-off company of the University of Parma, which a real manufacturing industry is undergoing over consecutive phases spanning a few years. The first phase concerns the digitalization of the control quality process, specifically related to the company's production lines. The use of paper sheets containing different quality checks has been made smarter through the introduction of a digital, smart, and Web-based application, which is currently supporting operators and quality inspectors working on the supply chain through the use of smart devices. The second phase of the IIoT evolution - currently on-going - concerns both digitalization and optimization of the production planning activity, through an innovative Web-based planning tool. The changes introduced have led to significant advantages and improvement for the manufacturing company, in terms of: (i) impressive cost reduction; (ii) better products quality control; (iii) real-time detection and reaction to supply chain issues; (iv) significant reduction of the time spent in planning activity; and (v) resources employment optimization, thanks to the minimization of unproductive setup times on production lines. These two renovation phases represent a basis for possible future developments, such us the integration of sensor-based data on the operational status of production machines and the currently available warehouse supplies. In conclusion, the Industry 4.0-based on-going digitization process guided by things2i allows to continuously collect heterogeneous Human-to-Things (H2T) data, which can be used to optimize the partner manufacturing company as a whole entity.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"39 1","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2019-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80349589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We learn and/or relearn motor skills at all ages. Feedback plays a crucial role in this learning process, and virtual reality (VR) constitutes a unique tool to provide feedback and improve motor learning. In particular, VR grants the possibility to edit 3D movements and display augmented feedback in real time. Here, we combined VR and motion capture to provide learners with 3D feedback superimposing, in real time, the reference movements of an expert (expert feedback) onto the movements of the learner (self-feedback). We assessed the effectiveness of this feedback for learning a throwing movement in American football. The feedback was used during movement execution (concurrent feedback) and/or after it (delayed feedback), and it was compared with feedback displaying only the reference movements of the expert. In contrast with more traditional studies relying on video feedback, we used the Dynamic Time Warping algorithm coupled with motion capture to measure the spatial characteristics of the movements. We also assessed the regularity with which the learner reproduced the reference movement along its path, using a new metric that computes the dispersion of the distance around the mean distance over time. Our results show that when the movements of the expert were superimposed on the movements of the learner during learning (i.e., self + expert), the reproduction of the reference movement improved significantly. On the other hand, providing feedback about the movements of the expert only did not give rise to any significant improvement in movement reproduction.
{"title":"Superimposing 3D Virtual Self + Expert Modeling for Motor Learning: Application to the Throw in American Football","authors":"Thibaut Le Naour, Ludovic Hamon, J. Bresciani","doi":"10.3389/fict.2019.00016","DOIUrl":"https://doi.org/10.3389/fict.2019.00016","url":null,"abstract":"We learn and/or relearn motor skills at all ages. Feedback plays a crucial role in this learning process, and Virtual Reality (VR) constitutes a unique tool to provide feedback and improve motor learning. In particular, VR grants the possibility to edit 3D movements and display augmented feedback in real time. Here we combined VR and motion capture to provide learners with a 3D feedback superimposing in real time the reference movements of an expert (expert feedback) to the movements of the learner (self-feedback). We assessed the effectiveness of this feedback for the learning of a throwing movement in American football. This feedback was used during (concurrent feedback) and/or after movement execution (delayed feedback), and it was compared with a feedback displaying only the reference movements of the expert. In contrast with more traditional studies relying on video feedback, we used the Dynamic Time Warping algorithm coupled to motion capture to measure the spatial characteristics of the movements. We also assessed the regularity with which the learner reproduced the reference movement along its path. For that, we used a new metric computing the dispersion of distance around the mean distance over time. Our results show that when the movements of the expert were superimposed on the movements of the learner during learning (i.e., self + expert), the reproduction of the reference movement improved significantly. On the hand, providing feedback about the movements of the expert only did not give rise to any significant improvement regarding movement reproduction.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"4 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2019-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89336028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The name-based forwarding plane is a critical but challenging component of Named Data Networking (NDN), where the hash table is an appealing candidate data structure for the Forwarding Information Base (FIB) because of its fast lookup speed. However, the hash table is flawed in that it does not naturally support the longest-prefix-matching (LPM) algorithm required for name-based forwarding. To support LPM in a hash table, besides linear lookup, random search (such as binary search) aims at increasing lookup speed by restructuring the FIB and optimizing the search path. We propose a composite data structure for random search based on the combination of a hash table and a trie; the latter is introduced to preserve the logical associations among names, so as to reclaim memory and prevent the so-called backtracking problem, thus enhancing lookup efficiency. Experiments indicate the superiority of our scheme in lookup speed; the impact on memory consumption has also been evaluated.
{"title":"A Composite Structure for Fast Name Prefix Lookup","authors":"Jiawei Hu, Hui Li","doi":"10.3389/fict.2019.00015","DOIUrl":"https://doi.org/10.3389/fict.2019.00015","url":null,"abstract":"Name-based forwarding plane is a critical but challenging component for Named Data Networking (NDN), where the hash table is an appealing candidate for data structure utilized in FIB on the benefit of its fast lookup speed. However, the hash table is flawed that it does not naturally support the longest-prefix-matching (LPM) algorithm for name-based forwarding. To support LPM in the hash table, besides the linear lookup, the random search (such as binary search) aims at increasing the lookup speed by reconstructing the FIB and optimizing the search path. We propose a composite data structure for random search based on the combination of hash table and trie; the latter is introduced preserve the logical associations among names, so as to recycle memory and prevent the so-called backtracking problem, thus enhancing the lookup efficiency. The experiment indicates the superiority of our scheme in lookup speed, the impact on memory consumption has also been evaluated.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"28 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2019-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88498065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditional mind maps are a means of visually organizing information. They can be created either with physical tools such as paper or post-it notes or through a computer-mediated process. Although their utility is established, mind maps and associated methods usually have several shortcomings with regard to effective and intuitive interaction as well as effective collaboration. The latest developments in virtual reality demonstrate new capabilities of visual and interactive augmentation, and in this paper we propose a multimodal virtual reality mind map that has the potential to transform the ways in which people interact, communicate, and share information. The shared virtual space allows users to be located virtually in the same meeting room and to participate in an immersive experience. Users of the system can create, modify, and group notes into categories and interact with them intuitively. They can create or modify inputs using voice recognition, interact using virtual reality controllers, and then make posts on the virtual mind map. When a brainstorming session is finished, users can vote on the content and export it for later use. A user evaluation with 32 participants assessed the effectiveness of the virtual mind map and its functionality. Results indicate that this technology has the potential to be adopted in practice in the future, but a comparative study needs to be performed to draw more general conclusions.
{"title":"An Interactive and Multimodal Virtual Mind Map for Future Workplace","authors":"David Kuťák, M. Dolezal, Bojan Kerous, Zdenek Eichler, J. Vasek, F. Liarokapis","doi":"10.3389/fict.2019.00014","DOIUrl":"https://doi.org/10.3389/fict.2019.00014","url":null,"abstract":"Traditional types of mind maps involve means of visually organizing information. They can be created either using physical tools like paper or post-it notes or through the computer-mediated process. Although their utility is established, mind maps and associated methods usually have several shortcomings with regards to effective and intuitive interaction as well as effective collaboration. Latest developments in virtual reality demonstrate new capabilities of visual and interactive augmentation, and in this paper, we propose a multimodal virtual reality mind map that has the potential to transform the ways in which people interact, communicate, and share information. The shared virtual space allows users to be located virtually in the same meeting room and participate in an immersive experience. Users of the system can create, modify, and group notes in categories and intuitively interact with them. They can create or modify inputs using voice recognition, interact using virtual reality controllers, and then make posts on the virtual mind map. When a brainstorming session is finished, users are able to vote about the content and export it for later usage. A user evaluation with 32 participants assessed the effectiveness of the virtual mind map and its functionality. Results indicate that this technology has the potential to be adopted in practice in the future, but a comparative study needs to be performed to have a more general conclusion.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"6 1","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2019-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77221959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While the expansion of technologies into the music education classroom has been studied in great depth, there is a lack of published literature on the use of digital technologies by students learning via one-to-one instrumental music instruction. Do musicians take their technology use into the practice room and the teacher's studio, or does the traditional nature of the master-apprentice teaching model promote different attitudes among musicians toward the use of technology in learning their instrument? The present study examined (1) musicians' skills with and attitudes toward technologies in their day-to-day lives, (2) how they engage with technology in learning musical instruments, (3) how their attitudes as music learners differ from those in their role as music teachers, and (4) musicians' attitudes toward potential new technologies and the factors that predict adoption of new tools. To investigate these issues, we developed the Technology Use and Attitudes in Music Learning Survey, which included adaptations of Davis' (1989) scales for Perceived Usefulness and Perceived Ease of Use of Technology. Data were collected from an international cohort of 338 amateur, student, and professional musicians ranging widely in age, instrument, and musical experience. Results showed a generally positive attitude toward current and future technology use among musicians and supported the Technology Acceptance Model (TAM), wherein technology use in music learning was predicted by perceived ease of use via perceived usefulness, although technology use itself, age, and musical experience did not predict hypothetical future use. Musicians' self-perceived skills with smartphones, laptops, and desktop computers were found to surpass those with traditional audio and video recording devices regardless of demographics, and the majority of musicians reported using the classic musical technologies of metronomes and tuners on smartphones and tablets rather than bespoke devices. Despite this comfort with and access to new technology, its reported availability within lesson spaces was half that within practice spaces, and while a large percentage of musicians actively record their playing, these recordings are reviewed far less frequently. These results highlight opportunities for technology to take a greater role in improving music learning through enhanced student-teacher interaction and self-regulated learning.
{"title":"Technology Use and Attitudes in Music Learning","authors":"G. Waddell, A. Williamon","doi":"10.3389/fict.2019.00011","DOIUrl":"https://doi.org/10.3389/fict.2019.00011","url":null,"abstract":"While the expansion of technologies into the music education classroom has been studied in great depth, there is a lack of published literature regarding the use of digital technologies by students learning via one-to-one instrumental music instruction frameworks. Do musicians take their technology use into the practice room and teacher’s studio, or does the traditional nature of the master-apprentice teaching model promote differing attitudes of musicians toward their use of technology in learning their instrument? The present study examined (1) musicians’ skills with and attitudes toward technologies in their day-to-day lives, (2) how they engage with technology in the learning of musical instruments, (3) how their attitudes as music learners differ from their role as music teachers, and (4) musicians’ attitudes toward potential new technologies and what factors predict adoption of new tools. To investigate these issues, we developed the Technology Use and Attitudes in Music Learning Survey, which included adaptations of Davis’ 1989 scales for Perceived Usefulness and Perceived Ease of Use of Technology. Data were collected from an international cohort of 338 amateur, student, and professional musicians ranging widely in age, instrument, and musical experience. Results showed a generally positive attitude towards current and future technology use among musicians and supported the Technology Acceptance Model (TAM), wherein technology use in music learning was predicted by perceived ease of use via perceived usefulness, although technology use itself, age, and musical experience did not predict hypothetical future use. Musicians’ self-perceived skills with smartphones, laptops, and desktop computers was found to surpass traditional audio and video recording devices regardless of demographics, and the majority of musicians reported using the classic musical technologies of metronomes and tuners on smartphones and tablets rather than bespoke devices. Despite this comfort with and access to new technology, its reported availability within lesson spaces was half of that within practice spaces, and while a large percentage of musicians actively record their playing, these recordings are reviewed with significantly less frequency. These results highlight opportunities for technology to take a greater role in improving music learning through enhanced student-teacher interaction and self-regulated learning.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"18 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2019-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82265829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Health Information Technologies as Complex Interventions With the Need for Thorough Implementation and Monitoring to Sustain Patient Safety","authors":"J. Wienert","doi":"10.3389/fict.2019.00009","DOIUrl":"https://doi.org/10.3389/fict.2019.00009","url":null,"abstract":"","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"75 1","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2019-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85512902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}