An Integrated Environment for Learning Design
Davinia Hernández Leo, Juan I. Asensio-Pérez, M. Derntl, F. Pozzi, Jonathan Chacón, L. Prieto, D. Persico
Frontiers in ICT, article 9, 15 May 2018. DOI: https://doi.org/10.3389/fict.2018.00009
This work has been partially funded by the EC, EACEA, METIS Project 531262-LLP-2012-ES-KA3-KA3MP, with additional support from TIN2014-53199-C3-3-R, TIN2017-85179-C3-3-R, MDM-2015-0502, RecerCaixa CoT, TIN2014-53199-C3-2-R, TIN2017-85179-C3-2-R, VA082U16, and a Marie Curie Fellowship (MIOCTI, FP7-PEOPLE-2012-IEF project no. 327384).
Using Spatial Reinforcement Learning to Build Forest Wildfire Dynamics Models From Satellite Images
Sriram Ganapathi Subramanian, Mark Crowley
Frontiers in ICT, article 6, 19 April 2018. DOI: https://doi.org/10.3389/fict.2018.00006
Machine learning algorithms have increased tremendously in power in recent years but have yet to be fully utilized in many ecology and sustainable resource management domains such as wildlife reserve design, forest fire management, and invasive species spread. One thing these domains have in common is that they contain dynamics that can be characterized as a Spatially Spreading Process (SSP), which requires many parameters to be set precisely to model the dynamics, spread rates, and directional biases of the spreading elements. We present related work in Artificial Intelligence and Machine Learning for SSP sustainability domains, including forest wildfire prediction. We then introduce a novel approach for learning in SSP domains using Reinforcement Learning (RL), where fire is the agent at any cell in the landscape and the set of actions the fire can take from a location at any point in time includes spreading North, South, East, West, or not spreading. This approach inverts the usual RL setup, since the dynamics of the corresponding Markov Decision Process (MDP) are a known function for immediate wildfire spread, while we learn an agent policy that serves as a predictive model of the dynamics of a complex spatially spreading process. Rewards are provided for correctly classifying which cells are on fire when compared to satellite and other related data. We examine the behaviour of five RL algorithms on this problem: Value Iteration, Policy Iteration, Q-Learning, Monte Carlo Tree Search, and Asynchronous Advantage Actor-Critic (A3C). We compare against a Gaussian-process-based supervised learning approach and discuss the relation of our approach to manually constructed, state-of-the-art methods from forest wildfire modelling. We validate our approach with satellite image data of two massive wildfire events in Northern Alberta, Canada: the Fort McMurray fire of 2016 and the Richardson fire of 2011. The results show that we can learn predictive, agent-based policies as models of spatial dynamics using RL on readily available satellite images, with additional advantages over other methods in terms of generalizability and interpretability.
Ebola Outbreak Containment: Real-Time Task and Resource Coordination With SORMAS
Cindy Perscheid, J. Benzler, Claus Hermann, Michael Janke, D. Moyer, Todd Laedtke, O. Adeoye, K. Denecke, Göran Kirchner, S. Beermann, N. Schwarz, D. Tom-Aba, G. Krause
Frontiers in ICT, article 7, 10 April 2018. DOI: https://doi.org/10.3389/fict.2018.00007
Background: Since the beginning of the Ebola outbreak in West Africa in 2014, more than 11,000 people have died. For outbreaks of infectious diseases like this, the rapid implementation of control measures is a crucial factor for containment. In West African countries, outbreak surveillance is a paper-based process with significant delays in forwarding outbreak information, which affects the ability to react adequately to situational changes. Our objective therefore was to develop a tool that improves data collection, situation assessment, and coordination of response measures in outbreak surveillance processes for better containment. Methods: We developed the Surveillance and Outbreak Response Management System (SORMAS) based on findings from Nigeria's 2014 Ebola outbreak. We conducted a thorough requirements engineering process and defined personas and processes. We also defined a data schema with specific variables to measure in outbreak situations. We designed our system as a cloud application with interfaces for both mobile devices and desktop computers to support all stakeholders in the process. In the field, health workers collect data on the outbreak situation via mobile applications and transmit it directly to control centers. At the control centers, health workers access SORMAS via desktop computers, receive instant updates on critical situations, react immediately to emergencies, and coordinate the implementation of control measures with SORMAS. Results: We tested SORMAS in multiple workshops and a field study in July 2015. Results from the workshops confirmed the derived requirements and implemented features, but also led to further iterations on the system regarding usability. Results from the field study are currently under assessment. General feedback showed high enthusiasm about the system and stressed its benefits for effective outbreak containment of infectious diseases. Conclusions: SORMAS is a software tool that supports health workers in efficiently handling outbreak situations of infectious diseases such as Ebola. Our tool enables a bi-directional exchange of situational data between individual stakeholders in outbreak containment. This allows instant and seamless collection of data from the field and its instantaneous analysis in operational centers. Thereby, SORMAS accelerates the implementation of control measures, which is crucial for successful outbreak containment.
The Use of Haptic and Tactile Information in the Car to Improve Driving Safety: A Review of Current Technologies
Yoren Gaffary, A. Lécuyer
Frontiers in ICT, article 5, 26 March 2018. DOI: https://doi.org/10.3389/fict.2018.00005
This paper surveys the haptic technologies deployed in cars and their uses for enhancing driver safety during manual driving. These technologies make it possible to deliver haptic (tactile or kinesthetic) feedback at various locations in the car, such as the steering wheel, the seat, or the pedals. The paper explores two main uses of the haptic modality to fulfill the safety objective: driving assistance and warning. Driving assistance concerns the transmission of information usually conveyed through other modalities for controlling the car's functions, maneuvering support, and guidance. Warning concerns the prevention of accidents through emergency warnings, increased awareness of the surroundings, and the prevention of collisions, lane departures, and speeding. This paper discusses how haptic feedback has been introduced so far for these purposes and provides perspectives on the present and future of haptic cars meant to increase driver safety.
Mobile-Based Eye-Blink Detection Performance Analysis on Android Platform
Md. Talal Bin Noman, Md Atiqur Rahman Ahad
Frontiers in ICT, article 4, 22 March 2018. DOI: https://doi.org/10.3389/fict.2018.00004
In this paper, we develop a real-time mobile-phone-based gaze tracking and eye-blink detection system on the Android platform. Our eye-blink detection scheme is based on the time difference between two open-eye states. The system locates the largest circle in the eye region, the pupil, by combining a Haar classifier with Normalized Summation of Squared Differences template matching. We define the eyeball area extracted from the eye region as the region of interest (ROI). The ROI helps to differentiate between the open and closed states of the eyes. The output waveform of the scheme follows a binary trend, which makes blinks clearly distinguishable. We categorize short, medium, and long blinks depending on the degree of closure and the blink duration. Our analysis operates on medium blinks at 15 frames/s. This combined solution for gaze tracking and eye-blink detection achieves high detection accuracy with low processing time. We obtain 98% accuracy at a zero-degree angle for blink detection from both eyes. The system is also extensively tested under various environments and setups, including variations in illumination, subjects, gender, angles, processing speed, RAM capacity, and distance. We found that the system performs satisfactorily under varied conditions in real time for both single-eye and two-eye detection. These concepts can be exploited in different applications, e.g., to detect driver drowsiness or to operate the computer cursor in an eye-operated mouse for disabled people.
Challenge of Helping Introductory Physics Students Transfer Their Learning by Engaging with a Self-Paced Learning Tutorial
E. Marshman, Seth DeVore, C. Singh
Frontiers in ICT, article 3, 7 March 2018. DOI: https://doi.org/10.3389/fict.2018.00003
With advances in digital technology, research-validated self-paced learning tools can play an increasingly important role in helping students with diverse backgrounds become good problem solvers and independent learners. Thus, it is important to ensure that all students engage with self-paced learning tools effectively in order to learn the content deeply, develop good problem-solving skills, and transfer their learning from one context to another. Here we first provide an overview of a holistic framework for engaging students with self-paced learning tools so that they can transfer their learning to solve novel problems. The framework not only takes into account the features of the self-paced learning tools but also how those tools are implemented, the extent to which the tools take into account student characteristics, and whether factors related to students’ social environments are accounted for appropriately in the implementation of those tools. We then describe an investigation in which we interpret the findings using the framework. In this study, a research-validated self-paced physics tutorial was implemented both in controlled one-on-one interviews and in large-enrollment introductory calculus-based physics courses as a self-paced learning tool. We find that students who used the tutorial in a controlled one-on-one interview situation performed significantly better on transfer problems than those who used it as a self-paced learning tool in the large-scale implementation. The findings suggest that critically examining and taking into account how the self-paced tools are implemented and incentivized, student characteristics including their self-regulation and time-management skills, and social and environmental factors can greatly impact the extent and manner in which students engage with these learning tools. Motivating students, securing their buy-in on the value of these tools, and providing appropriate support during implementation are critical for ensuring that students, who may otherwise be constrained by motivational, social, and environmental factors, engage effectively with the tools in order to learn deeply and transfer their learning.
Message Encryption in Robot Operating System: Collateral Effects of Hardening Mobile Robots
F. J. Lera, Vicente Matellán Olivera, Jesús Balsa-Comerón, Ángel Manuel Guerrero Higueras, C. F. Llamas
Frontiers in ICT, article 2, 2 March 2018. DOI: https://doi.org/10.3389/fict.2018.00002
In human-robot interaction situations, robot sensors collect huge amounts of data from the environment in order to characterize the situation. Some of the gathered data ought to be treated as private, such as medical data (e.g., medication guidelines) and personal and safety information (e.g., images of children, home habits, alarm codes, etc.). However, most robotic software development frameworks are not designed for securely managing this information. This paper analyzes the scenario of hardening one of the most widely used robotic middlewares, the Robot Operating System (ROS). The study investigates a robot's performance when ciphering the messages interchanged between ROS nodes under the publish/subscribe paradigm. In particular, this research focuses on the nodes that manage cameras and LIDAR sensors, two of the most widespread sensing solutions in mobile robotics, and analyzes the collateral effects of different computing capabilities and encryption algorithms (3DES, AES, and Blowfish) on robot performance. The findings present empirical evidence that simple encryption algorithms are lightweight enough to provide cyber-security even in low-powered robots when carefully designed and implemented. Nevertheless, these techniques come with a number of serious drawbacks regarding robot autonomy and performance if they are applied indiscriminately. To avoid these issues, we define a taxonomy that links the type of ROS message, the computational units, and the encryption methods. As a result, we present a model to select the optimal options for hardening a mobile robot using ROS.
{"title":"Wearable Sensors for Assisted Living in Elderly People","authors":"H. Saner","doi":"10.3389/fict.2018.00001","DOIUrl":"https://doi.org/10.3389/fict.2018.00001","url":null,"abstract":"","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"7 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2018-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73879557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peripersonal Space: An Index of Multisensory Body–Environment Interactions in Real, Virtual, and Mixed Realities
A. Serino, Jean-Paul Noel, Robin Mange, Elisa Canzoneri, Elisa Pellencin, J. B. Ruiz, F. Bernasconi, O. Blanke, B. Herbelin
Frontiers in ICT, article 31, 22 January 2018. DOI: https://doi.org/10.3389/fict.2017.00031
Human-environment interactions normally occur in the physical milieu, and thus through the medium of the body and within the space immediately adjacent to and surrounding the body, the peripersonal space (PPS). However, human interactions increasingly occur with or within virtual environments, and hence novel approaches and metrics must be developed to index human-environment interactions in virtual reality (VR). Here we present a multisensory task that measures the spatial extent of human PPS in real, virtual, and augmented realities. We validated it in a mixed-reality ecosystem in which the real environment and virtual objects are blended together in order to administer and control visual, auditory, and tactile stimuli in ecologically valid conditions. Within this mixed-reality environment, participants are asked to respond as fast as possible to tactile stimuli on their body while task-irrelevant visual or audio-visual stimuli approach their body. Results demonstrate that, in analogy with observations derived from monkey electrophysiology and from real environmental surroundings, tactile detection is enhanced when visual or auditory stimuli are close to the body, and not when they are far from it. We then calculate the location where this multisensory facilitation occurs as a proxy for the boundary of PPS. We observe that mapping PPS via audio-visual, as opposed to visual-only, looming stimuli results in sigmoidal fits, allowing for the bifurcation between near and far space, with greater goodness of fit. In sum, our approach is able to capture the boundaries of PPS on a spatial continuum, at the individual-subject level, and within a fully controlled and previously laboratory-validated setup, while maintaining the richness and ecological validity of real-life events. The task can therefore be applied to study the properties of peripersonal space in humans and to index the features governing human-environment interactions in virtual or mixed reality. We propose PPS as an ecologically valid and neurophysiologically established metric for studying the impact of VR and related technologies on society and individuals.
Flipped Classroom in Organic Chemistry Has Significant Effect on Students’ Grades
Caroline Cormier, Bruno Voisard
Frontiers in ICT, article 30, 10 January 2018. DOI: https://doi.org/10.3389/fict.2017.00030
The flipped classroom, as a form of active pedagogy in post-secondary chemistry, has been developed over the last 10 years and has been gaining popularity with instructors and students ever since. In the current paradigm in STEM (science, technology, engineering, and mathematics) education, it is widely recognized that active learning has significant positive effects on students’ grades. Post-secondary organic chemistry is a difficult course for students, and the traditional way of teaching does not foster students’ active involvement. Implementation of active pedagogy could increase students’ achievement in this course. However, few quantitative data are available on the impact of active pedagogy in general, or flipped classrooms in particular, on learning in organic chemistry at the post-secondary level. Thus, in the present study, we evaluated the gain in final grade scores in organic chemistry after implementing a flipped classroom approach to promote active learning in this course. We encouraged students to be active by having them watch educational videos prior to each class and then work during class time on problems that focused on applying the concepts presented in the videos. Exams were the same as those completed by students in the traditional classrooms of our college. In an a posteriori analysis of our students’ grades, we compared final grades in traditional classrooms (control group, N = 66) and in flipped classrooms (experimental group, N = 151). The sample was stratified into three categories depending on students’ academic ability in college, from low-achieving to high-achieving students. Our results show that students in the experimental group have significantly higher final grades in organic chemistry than those in the control group: 77% for students in the active classroom vs. 73% in the traditional classroom (p < 0.05). The effect was greatest for low-achieving students, with final scores of 70% in the active classroom compared to 60% in the traditional one (p < 0.001). This difference in performance is likely due to students spending more time solving problems in a flipped classroom rather than having the questions assigned to them as homework.