R. Jordan, Steven Gray, A. Sorensen, Samantha Pasewark, Suparna Sinha, C. Hmelo‐Silver
In response to recent educational imperatives in the United States, modeling and systems thinking have been identified as critical for science learning. In this paper, we investigate models in the classroom from two important perspectives: (1) the teacher perspective, to understand how teachers perceive and use models in the classroom, and (2) the student perspective, to understand how students use model-based reasoning to represent their understanding in a classroom setting. Qualitative data collected from 19 teachers who attended a professional development workshop in the northeastern United States indicate that while teachers see the value in teaching students to think with models (i.e., during inquiry practices), they tend to use models mostly as communication tools in the classroom. Quantitative data collected on the modeling practices of 42 middle school students who worked collaboratively in small groups (4-5 students) using a computer modeling program indicate that students tended to engage in more mechanistic and function-related thinking over time as they reasoned about a complex system. Further, students followed a typical trajectory of first adding and then paring down ideas in their models. Implications for science education are discussed.
{"title":"Modeling with a Conceptual Representation: Is It Necessary? Does It Work?","authors":"R. Jordan, Steven Gray, A. Sorensen, Samantha Pasewark, Suparna Sinha, C. Hmelo‐Silver","doi":"10.3389/fict.2017.00007","DOIUrl":"https://doi.org/10.3389/fict.2017.00007","url":null,"abstract":"In response to recent educational imperatives in the United States, modeling and systems thinking have been identified as being critical for science learning. In this paper, we investigate models in the classroom from two important perspectives: (1) from the teacher perspective to understand how teachers perceive models and use models in the classroom and (2) from the student perspective to understand how student use model-based reasoning to represent their understanding in a classroom setting. Qualitative data collected from 19 teachers who attended a professional development workshop in the northeastern United States indicate that while teachers see the value in teaching to think with models (i.e., during inquiry practices), they tend to use models mostly as communication tools in the classroom. Quantitative data collected about the modeling practices of 42 middle school students who worked collaboratively in small groups (4-5 students) using a computer modeling program indicated that students tended to engage in more mechanistic and function-related thinking with time as they reasoned about a complex system. Further, students had a typified trajectory of first adding and then next paring down ideas in their models. Implications for science education are discussed.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"39 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2017-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79198130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For the purposes of this research, gamified lectures are defined as lectures formatted as games. This paper presents an example of a traditional instruction-based lecture that was redesigned using a game-like design. First, confrontations specific to gaming situations were considered in order to derive goals for students in a classroom. In these experiments, students competed using a game system. The teacher acted as the game administrator, controlled all the game materials, and served as the interface between the game system and the students. The redesigned lectures were compared with traditional instruction-based lectures for their effects on relieving student dissatisfaction with the classroom. Student achievement levels showed no improvement in the gamified design compared to the traditional instruction-based format.
{"title":"Gamified Lecture Courses Improve Student Evaluations but Not Exam Scores","authors":"Toyohisa Nakada","doi":"10.3389/fict.2017.00005","DOIUrl":"https://doi.org/10.3389/fict.2017.00005","url":null,"abstract":"Gamified lectures are defined as lectures formatted as games, for the purposes of this research. This paper presents an example of a traditional instruction-based lecture that was redesigned using a game-like design. First, confrontations specific to gaming situations were considered, to derive goals for students in a classroom. Students fought using a game system in these experiments. The teacher acted as a game administrator, and controlled all the game materials. He also became an interface between the game system and students. Redesigned lectures were compared with traditional instruction-based lectures for their effects on relieving student dissatisfaction with the classroom. The achievement levels of students showed no improvement in the gamified design compared to the traditional instruction-based format.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"20 1","pages":"5"},"PeriodicalIF":0.0,"publicationDate":"2017-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81853097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Simas, M. Ficek, A. Díaz-Guilera, Pere Obrador, Pablo R. Rodriguez
In this manuscript, we propose, analyse, and discuss a possible new principle behind traditional cuisine, the food-bridging hypothesis, and compare it with the food-pairing hypothesis using the same dataset and graphical models employed in the food-pairing study by Ahn et al. [Scientific Reports, 1:196, 2011]. The food-bridging hypothesis assumes that if two ingredients do not share a strong molecular or empirical affinity, they may become affine through a chain of pairwise affinities. That is, in a graphical model as employed by Ahn et al., a chain is a path that joins the two ingredients, and the shortest path represents the strongest chain of pairwise affinities between them. Food-pairing and food-bridging are different hypotheses that may describe possible mechanisms behind the recipes of traditional cuisines: food-pairing intensifies flavour by mixing ingredients that share similar chemical compounds, whereas food-bridging smoothes the contrast between ingredients. Both food-pairing and food-bridging are observed in traditional cuisines, as shown in this work. We observed four classes of cuisines according to food-pairing and food-bridging: (1) East Asian cuisines, at one extreme, tend to avoid both food-pairing and food-bridging, while (4) Latin American cuisines, at the other extreme, follow both principles; of the two middle classes, (2) Southeast Asian cuisines avoid food-pairing and follow food-bridging, and (3) Western cuisines follow food-pairing and avoid food-bridging.
{"title":"Food-Bridging: A New Network Construction to Unveil the Principles of Cooking","authors":"T. Simas, M. Ficek, A. Díaz-Guilera, Pere Obrador, Pablo R. Rodriguez","doi":"10.3389/fict.2017.00014","DOIUrl":"https://doi.org/10.3389/fict.2017.00014","url":null,"abstract":"In this manuscript we propose, analyse, and discuss a possible new principle behind traditional cuisine: the Food-bridging hypothesis and its comparison with the food-pairing hypothesis using the same dataset and graphical models employed in the food-pairing study by Ahn et al. [Scientific Reports,1:196,2011]. The Food-bridging hypothesis assumes that if two ingredients do not share a strong molecular or empirical affinity, they may become affine through a chain of pairwise affinities. That is, in a graphical model as employed by Ahn et al., a chain represents a path that joints the two ingredients, the shortest path represents the strongest pairwise chain of affinities between the two ingredients. Food-pairing and Food-bridging are different hypotheses that may describe possible mechanisms behind the recipes of traditional cuisines. Food-pairing intensifies flavour by mixing ingredients in a recipe with similar chemical compounds, and food-bridging smoothes contrast between ingredients. Both food-pairing and food-bridging are observed in traditional cuisines, as shown in this work. We observed four classes of cuisines according to food-pairing and food-bridging: (1) East Asian cuisines, at one extreme, tend to avoid food-pairing as well as food-bridging; and (4) Latin American cuisines, at the other extreme, follow both principles. For the two middle classes: (2) Southeastern Asian cuisines, avoid food-pairing and follow food-bridging; and (3) Western cuisines, follow food-pairing and avoid food-bridging.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"24 1","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2017-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73654837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social robots should be able to automatically understand and respond to human touch. The meaning of touch depends not only on the form of touch but also on the context in which it takes place. To gain more insight into the factors that are relevant for interpreting the meaning of touch within a social context, we elicited touch behaviors by letting participants interact with a robot pet companion in the context of different affective scenarios. In a contextualized lab setting, participants (n = 31) acted as if they were coming home in different emotional states (i.e., stressed, depressed, relaxed, and excited) without being given specific instructions on the kinds of behaviors they should display. Based on video footage of the interactions and on interviews, we explored the use of touch behaviors, the social messages expressed, and the expected robot pet responses. Results show that emotional state influenced the social messages that were communicated to the robot pet as well as the expected responses. Furthermore, we found that multimodal cues were used to communicate with the robot pet: participants often talked to the robot pet while touching it and making eye contact. Additionally, the findings of this study indicate that categorizing touch behaviors into discrete touch gesture categories based on dictionary definitions is not a suitable approach for capturing the complex nature of touch behaviors in less controlled settings. These findings can inform the design of a behavioral model for robot pet companions, and future directions for interpreting touch behaviors in less controlled settings are discussed.
{"title":"A First Step toward the Automatic Understanding of Social Touch for Naturalistic Human–Robot Interaction","authors":"Merel M. Jung, M. Poel, D. Reidsma, D. Heylen","doi":"10.3389/fict.2017.00003","DOIUrl":"https://doi.org/10.3389/fict.2017.00003","url":null,"abstract":"Social robots should be able to automatically understand and respond to human touch. The meaning of touch does not only depend on the form of touch but also on the context in which the touch takes place. To gain more insight into the factors that are relevant to interpret the meaning of touch within a social context we elicited touch behaviors by letting participants interact with a robot pet companion in the context of different affective scenarios. In a contextualized lab setting participants (n= 31) acted as if they were coming home in different emotional states (i.e. stressed, depressed, relaxed and excited) without being given specific instructions on the kinds of behaviors that they should display. Based on video footage of the interactions and interviews we explored the use of touch behaviors, the expressed social messages and the expected robot pet responses. Results show that emotional state influenced the social messages that were communicated to the robot pet as well as the expected responses. Furthermore, it was found that multimodal cues were used to communicate with the robot pet, that is, participants often talked to the robot pet while touching it and making eye contact. Additionally, the findings of this study indicate that the categorization of touch behaviors into discrete touch gesture categories based on dictionary definitions is not a suitable approach to capture the complex nature of touch behaviors in less controlled settings. These findings can inform the design of a behavioral model for robot pet companions and future directions to interpret touch behaviors in less controlled settings are discussed.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"14 1","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2017-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77000170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Soleymani, Frank Villaro-Dixon, T. Pun, G. Chanel
Physiological response is an important component of an emotional episode. In this paper, we introduce a Toolbox for Emotional feAture extraction from Physiological signals (TEAP). This open-source toolbox can preprocess and compute emotionally relevant features from multiple physiological signals, namely electroencephalogram (EEG), galvanic skin response (GSR), electromyogram (EMG), skin temperature, respiration pattern, and blood volume pulse. The features from this toolbox were tested on two publicly available databases, i.e., MAHNOB-HCI and DEAP, and we demonstrate that they achieve performance similar to that of the original work. The toolbox is implemented in MATLAB and is also compatible with Octave. We hope this toolbox will be further developed and will accelerate research in affective physiological signal analysis.
{"title":"Toolbox for Emotional feAture extraction from Physiological signals (TEAP)","authors":"M. Soleymani, Frank Villaro-Dixon, T. Pun, G. Chanel","doi":"10.3389/fict.2017.00001","DOIUrl":"https://doi.org/10.3389/fict.2017.00001","url":null,"abstract":"Physiological response is an important component of an emotional episode. In this paper, we introduce a Toolbox for Emotional feAture Extraction from Physiological signals (TEAP). This open source toolbox can preprocess and calculate emotionally relevant features from multiple physiological signals, namely, electroencephalogram (EEG), galvanic skin response (GSR), electromyogram (EMG), skin temperature, respiration pattern and blood volume pulse. The features from this toolbox are tested on two publicly available databases, i.e., MAHNOB-HCI and DEAP. We demonstrated that we achieve similar performance to the original work with the features from this toolbox. The toolbox is implemented in MATLAB and is also compatible with Octave. We hope this toolbox to be further developed and accelerate research in affective physiological signal analysis.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"10 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2017-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82888715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mahdi Nabiyouni, S. Scerbo, D. Bowman, Tobias Höllerer
In Augmented Reality (AR), virtual objects and information are overlaid onto the user's view of the physical world and can appear to become part of the real world. Accurate registration of virtual objects is a key requirement for an effective and natural AR system, but misregistration can break the illusion of virtual objects being part of the real world and disrupt immersion. End-to-end system latency severely impacts the quality of AR registration. In this research, we present a controlled study aimed at a deeper understanding of the effects of latency on virtual and real-world imagery and of its influence on task performance in an AR training task. We use an AR Simulation approach, in which an outdoor AR training task is simulated in a high-fidelity VR system. The real and augmented portions of the AR training scenarios are simulated in VR, affording us detailed control over a variety of immersion parameters and the ability to explore the effects of different types of simulated latency. We used a representative task inspired by outdoor AR military training systems to compare various AR system configurations, including optical see-through and video see-through setups with both matched and unmatched levels of latency for real and virtual objects. Our findings indicate that users perform significantly better when virtual and real-world latencies are matched (as in the case of simulated video see-through AR with perfect augmentation-to-real-world registration). Unequal levels of latency led to a reduction in performance, even when overall latency was lower than in the matched case. These relative results hold up as overall latency increases.
{"title":"Relative Effects of Real-world and Virtual-World Latency on an Augmented Reality Training Task: An AR Simulation Experiment","authors":"Mahdi Nabiyouni, S. Scerbo, D. Bowman, Tobias Höllerer","doi":"10.3389/fict.2016.00034","DOIUrl":"https://doi.org/10.3389/fict.2016.00034","url":null,"abstract":"In Augmented Reality (AR), virtual objects and information are overlaid onto the user’s view of the physical world and can appear to become part of the real world. Accurate registration of virtual objects is a key requirement for an effective and natural AR system, but misregistration can break the illusion of virtual objects being part of the real world and disrupt immersion. End-to-end system latency severely impacts the quality of AR registration. In this research, we present a controlled study that aims at a deeper understanding of the effects of latency on virtual and real world imagery and its influences on task performance in an AR training task. We utilize an AR Simulation approach, in which an outdoor AR training task is simulated in a high-fidelity VR system. The real and augmented portions of the AR training scenarios are simulated in VR, affording us detailed control over a variety of immersion parameters and the ability to explore the effects of different types of simulated latency. We utilized a representative task inspired by outdoor AR military training systems to compare various AR system configurations, including optical see-through and video see-through setups with both matched and unmatched levels of real and virtual objects latency. Our findings indicate that users are able to perform significantly better when virtual and real-world latencies are matched (as in the case of simulated video-see-through AR with perfect augmentation-to-real-world registration). Unequal levels of latency led to reduction in performance, even when overall latency levels were lower compared to the matched case. The relative results hold up with increased overall latency.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"5 1","pages":"34"},"PeriodicalIF":0.0,"publicationDate":"2017-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90824281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marianne Allanic, P. Hervé, C. Pham, Myriam Lekkal, A. Durupt, Thierry Brial, Arthur Grioche, N. Matta, P. Boutinaud, B. Eynard, M. Joliot
The data management needs of the neuroimaging community are currently addressed by several specialized software platforms, which automate repetitive data import, archiving, and processing tasks. The BIOMIST (BIOMedical Imaging SemanTic data management) project aims at creating such a framework, yet with a radically different approach: the key insight behind it is the realisation that the data management needs of the neuroimaging community – organizing the secure and convenient storage of large numbers of large files, bringing together data from different scientific domains, managing workflows and access policies, ensuring traceability, and sharing data across different labs – are strikingly similar to those already expressed by the manufacturing industry. The BIOMIST neuroimaging data management framework is therefore built around the same systems as those designed to meet the requirements of that industry. Product Lifecycle Management (PLM) systems rely on an object-oriented data model and allow the traceability of data and workflows throughout the life of a product, from its design to its manufacturing, maintenance, and end of life, while guaranteeing data consistency and security. First, the BioMedical Imaging – Lifecycle Management (BMI-LM) data model was designed to handle the specificities of neuroimaging data in PLM systems throughout the lifecycle of a scientific study. This data model is both flexible and scalable, thanks to the combination of generic objects and domain-specific classes sourced from publicly available ontologies. Second, the Data Integrated Management and Processing (DIMP) method was designed to handle workflows of processing chains in PLM. Following these principles, workflows are parameterised and launched from the PLM platform onto a computer cluster, and the results automatically return to the PLM, where they are archived along with their provenance information. Third, to transform the PLM into a full-fledged neuroimaging framework, we developed a series of external modules: DICOM import, XML form data import web services, a flexible graphical querying interface, and SQL export to spreadsheets. Overall, the BIOMIST platform is well suited for the management of neuroimaging cohorts, and it is currently used for the management of the BIL&GIN dataset (300 participants) and the ongoing MRI-Share cohort acquisition of 2,000 participants.
{"title":"BIOMIST: A Platform for Biomedical Data Lifecycle Management of Neuroimaging Cohorts","authors":"Marianne Allanic, P. Hervé, C. Pham, Myriam Lekkal, A. Durupt, Thierry Brial, Arthur Grioche, N. Matta, P. Boutinaud, B. Eynard, M. Joliot","doi":"10.3389/fict.2016.00035","DOIUrl":"https://doi.org/10.3389/fict.2016.00035","url":null,"abstract":"The data management needs of the neuroimaging community are currently addressed by several specialized software platforms, which automate repetitive data import, archiving and processing tasks. The BIOMIST (BIOMedical Imaging SemanTic data management) project aims at creating such a framework, yet with a radically different approach: the key insight behind it is the realisation that the data management needs of the neuroimaging community – organizing the secure and convenient storage of large amounts of large files, bringing together data from different scientific domains, managing workflows and access policies, ensuring traceability and sharing data across different labs – are actually strikingly similar to those already expressed by the manufacturing industry. The BIOMIST neuroimaging data management framework is built around the same systems as those that were designed in order to meet the requirements of the industry. Product Lifecycle Management (PLM) systems rely on an object-oriented data model and allow the traceability of data and workflows throughout the life of a product, from its design to its manufacturing, maintenance and end of life, while guaranteeing data consistency and security. The BioMedical Imaging – Lifecycle Management (BMI-LM) data model was designed to handle the specificities of neuroimaging data in PLM systems, throughout the lifecycle of a scientific study. This data model is both flexible and scalable, thanks to the combination of generic objects and domain-specific classes sourced from publicly available ontologies. The Data Integrated Management and Processing (DIMP) method was then designed to handle workflows of processing chains in PLM. Following these principles, workflows are parameterised and launched from the PLM platform onto a computer cluster, and the results automatically return to the PLM where they are archived along with their provenance information. Third, to transform the PLM into a full-fledged neuroimaging framework, we developed a series of external modules: DICOM import, XML form data import web-services, flexible graphical querying interface, and SQL export to spreadsheets. Overall, the BIOMIST platform is well suited for the management of neuroimaging cohorts, and it is currently used for the management of the BIL&GIN dataset (300 participants) and the ongoing MRI-Share cohort acquisition of 2000 participants.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"9 1","pages":"35"},"PeriodicalIF":0.0,"publicationDate":"2017-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87421956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social touch forms an important aspect of the human non-verbal communication repertoire but is often overlooked in human-robot interaction. In this study, we investigated whether robot-initiated touches can induce physiological, emotional, and behavioral responses similar to those reported for human touches. Thirty-nine participants were invited to watch a scary movie together with a robot that spoke soothing words. In the Touch condition, these words were accompanied by a touch on the shoulder. We hypothesized that this touch, as compared with no touch, could (H1) attenuate the physiological stress responses (heart rate and heart rate variability, skin conductance, cortisol, and respiration rate) and subjective stress responses caused by the movie. Moreover, we expected that a touch could (H2) decrease aversion towards the movie, (H3) increase positive perceptions of the robot (e.g., its appearance and one's attitude towards it), and (H4) increase compliance with the robot's request to make a monetary donation. Although the movie did increase arousal as intended, none of the hypotheses could be confirmed. Our findings suggest that merely simulating a human touching action with the robot's limbs is insufficient to elicit physiological, emotional, and behavioral responses in this specific context and with this number of participants. To inform future research on the opportunities and limitations of robot-initiated touch, we reflect on our methodology and identify dimensions that may play a role in physical human-robot interactions, e.g., the robot's touching behavior, its appearance and behavior, the user's personality, the body location where the touch is applied, and the (social) context of the interaction. Social touch can only become an integral and effective part of a robot's non-verbal communication repertoire when we better understand whether, and under which boundary conditions, such touches can elicit responses in humans.
{"title":"Affective and Behavioral Responses to Robot-Initiated Social Touch: Toward Understanding the Opportunities and Limitations of Physical Contact in Human–Robot Interaction","authors":"Christian J. A. M. Willemse, A. Toet, J. V. Erp","doi":"10.3389/fict.2017.00012","DOIUrl":"https://doi.org/10.3389/fict.2017.00012","url":null,"abstract":"Social touch forms an important aspect of the human non-verbal communication repertoire, but is often overlooked in human-robot interaction. In this study, we investigated whether robot-initiated touches can induce physiological, emotional, and behavioral responses similar to those reported for human touches. 39 Participants were invited to watch a scary movie together with a robot that spoke soothing words. In the Touch condition, these words were accompanied by a touch on the shoulder. We hypothesized that this touch – as compared with no touch – could (H1) attenuate physiological (heart rate (variability), skin conductance, cortisol, and respiration rate) and subjective stress responses that were caused by the movie. Moreover, we expected that a touch could (H2) decrease aversion towards the movie, (H3) increase positive perceptions of the robot (e.g., its appearance and one’s attitude towards it), and (H4) increase compliance to the robot’s request to make a monetary donation. Although the movie did increase arousal as intended, none of the hypotheses could be confirmed. Our findings suggest that merely simulating a human touching action with the robot’s limbs is insufficient to elicit physiological, emotional, and behavioral responses in this specific context and with this amount of participants. To inform future research on the opportunities and limitations of robot-initiated touch, we reflect on our methodology and identify dimensions that may play a role in physical human-robot interactions: e.g., the robot’s touching behavior, its appearance and behavior, the user’s personality, the body location where the touch is applied, and the (social) context of the interaction. Social touch can only become an integral and effective part of a robot’s non-verbal communication repertoire, when we better understand if, and under which boundary conditions such touches can elicit responses in humans.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"75 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77073590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A foodshed is the geographic area from which a population derives its food supply, but a method for determining the boundaries of foodsheds has not been formalized. Drawing on the food-water-energy nexus, we propose a formal network science definition of foodsheds using data from virtual water flows, i.e., water that is virtually embedded in food. In particular, we use spectral graph partitioning for directed graphs. If foodsheds turn out to be geographically compact, this suggests the food system is local, which reduces the energy and externality costs of food transport. Using our proposed method, we compute foodshed boundaries at the global scale and at the national scale for two of the largest agricultural countries: India and the United States. Based on our determination of foodshed boundaries, we are able to better understand commodity flows, whether foodsheds are contiguous and compact, and other factors that impact environmental sustainability. The formal method we propose may be used more broadly to study commodity flows and their impact on environmental sustainability.
{"title":"Foodsheds in Virtual Water Flow Networks: A Spectral Graph Theory Approach","authors":"Nina Kshetry, L. Varshney","doi":"10.3389/fict.2017.00017","DOIUrl":"https://doi.org/10.3389/fict.2017.00017","url":null,"abstract":"A foodshed is a geographic area from which a population derives its food supply, but a method to determine boundaries of foodsheds has not been formalized. Drawing on the food-water energy nexus, we propose a formal network science definition of foodsheds by using data from virtual water flows, i.e. water that is virtually embedded in food. In particular we use spectral graph partitioning for directed graphs. If foodsheds turn out to be geographically compact, it suggests the food system is local and therefore reduces energy and externality costs of food transport. Using our proposed method we compute foodshed boundaries at the global-scale, and at the national-scale in the case of two of the largest agricultural countries: India and the United States. Based on our determination of foodshed boundaries, we are able to better understand commodity flows and whether foodsheds are contiguous and compact, and other factors that impact environmental sustainability. The formal method we propose may be used more broadly to study commodity flows and their impact on environmental sustainability.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"19 1","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77990485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thessa T. J. P. Kockelkorn, Rui Ramos, José Ramos, P. A. Jong, C. Schaefer-Prokop, R. Wittenberg, A. Tiehuis, J. Grutters, M. Viergever, B. Ginneken
For computerized analysis of textures in interstitial lung disease, manual annotations of lung tissue are necessary. Since making these annotations is labor-intensive, we previously proposed an interactive annotation framework in which observers iteratively trained a classifier to distinguish the different texture types by correcting its classification errors. In this work, we investigated three ways to extend this approach in order to decrease the amount of user interaction required to annotate all lung tissue in a CT scan. First, we conducted automatic classification experiments to test how data from previously annotated scans can be used to classify the scan under consideration. We compared the performance of a classifier trained on data from one observer, a classifier trained on data from multiple observers, a classifier trained on consensus training data, and an ensemble of classifiers, each trained on data from different sources. Experiments were conducted without and with texture selection. In the former case, training data from all eight texture types were used; in the latter, only training data from the texture types present in the scan were used, and the observer had to indicate which textures the scan to be analyzed contained. Second, we simulated interactive annotation to test the effects of (1) asking observers to perform texture selection before the start of annotation, (2) using a classifier trained on data from previously annotated scans at the start of annotation, when the interactive classifier is still untrained, and (3) allowing observers to choose which interactive or automatic classification results they wanted to correct. Finally, various strategies for selecting the classification results presented to the observer were considered. Classification accuracies for all possible interactive annotation scenarios were compared. Using the best performing protocol, in which observers select the textures to be distinguished in the scan and can choose which classification results to use for correction, a median accuracy of 88% was reached. The results obtained using this protocol were significantly better than those obtained with other interactive or automatic classification protocols.
{"title":"Optimization Strategies for Interactive Classification of Interstitial Lung Disease Textures","authors":"Thessa T. J. P. Kockelkorn, Rui Ramos, José Ramos, P. A. Jong, C. Schaefer-Prokop, R. Wittenberg, A. Tiehuis, J. Grutters, M. Viergever, B. Ginneken","doi":"10.3389/fict.2016.00033","DOIUrl":"https://doi.org/10.3389/fict.2016.00033","url":null,"abstract":"For computerized analysis of textures in interstitial lung disease, manual annotations of lung tissue are necessary. Since making these annotations is labor-intensive, we previously proposed an interactive annotation framework. In this framework, observers iteratively trained a classifier to distinguish the different texture types by correcting its classification errors. In this work, we investigated three ways to extend this approach, in order to decrease the amount of user interaction required to annotate all lung tissue in a CT scan. First, we conducted automatic classification experiments to test how data from previously annotated scans can be used for classification of the scan under consideration. We compared the performance of a classifier trained on data from one observer, a classifier trained on data from multiple observers, a classifier trained on consensus training data, and an ensemble of classifiers, each trained on data from different sources. Experiments were conducted without and with texture selection. In the former case, training data from all 8 textures was used. In the latter, only training data from the texture types present in the scan were used, and the observer would have to indicate textures contained in the scan to be analyzed. Second, we simulated interactive annotation to test the effects of (1) asking observers to perform texture selection before the start of annotation, (2) the use of a classifier trained on data from previously annotated scans at the start of annotation, when the interactive classifier is untrained, and (3) allowing observers to choose which interactive or automatic classification results they wanted to correct. Finally, various strategies for selecting the classification results that were presented to the observer were considered. Classification accuracies for all possible interactive annotation scenarios were compared. Using the best performing protocol, in which observers select the textures that should be distinguished in the scan and in which they can choose which classification results to use for correction, a median accuracy of 88% was reached. The results obtained using this protocol were significantly better than results obtained with other interactive or automatic classification protocols.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"11 1","pages":"33"},"PeriodicalIF":0.0,"publicationDate":"2016-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91145587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}