
Frontiers in ICT: Latest Publications

ArchiMed: A Data Management System for Clinical Research in Imaging
Q1 Computer Science Pub Date : 2016-12-20 DOI: 10.3389/fict.2016.00031
E. Micard, Damien Husson, J. Felblinger
Context: Clinical research with imaging needs to collect, store, organize, and process large amounts of varied data in accordance with legal requirements and research obligations. In practice, many laboratories and clinical research centers working in the imaging domain have to manage enormous numbers of images and their associated data without sufficient IT (Information Technology) skills and resources to develop and maintain a robust software solution. Since conventional infrastructure and data storage systems for medical images, such as the "Picture Archiving and Communication System" (PACS), may not be compatible with research needs, we propose a solution: ArchiMed, a complete storage and visualization solution developed for clinical research. Material and methods: ArchiMed is a service-oriented server application written in Java EE that is integrated into local clinical environments (imaging devices, post-processing workstations, other devices...) and allows data to be collected safely from other collaborating centers. It stores all kinds of imaging data with a "study-centered" approach, provides quality control, and interfaces with mainstream image analysis research tools. Results: With more than 10 million archived files totaling about 4 TB across 116 studies, ArchiMed, in operation for 5 years at the CIC-IT of Nancy, France, is used every day by about 60 people, including engineers, researchers, clinicians, and clinical trial project managers.
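The "study-centered" organization contrasts with the patient-centered hierarchy of a clinical PACS. The sketch below is a hypothetical, minimal data model, not ArchiMed's actual schema: the names Study, Examination, and DatasetFile are illustrative only, and the checksum field merely stands in for a basic quality-control step.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DatasetFile:
    path: str
    modality: str      # e.g. "MR", "CT"
    checksum: str      # stands in for a simple quality-control check

@dataclass
class Examination:
    subject_id: str
    exam_date: date
    files: List[DatasetFile] = field(default_factory=list)

@dataclass
class Study:
    """A clinical research study that owns its examinations and files."""
    name: str
    examinations: List[Examination] = field(default_factory=list)

    def add_examination(self, exam: Examination) -> None:
        self.examinations.append(exam)

# Example: registering one examination under a study
study = Study(name="DEMO-STUDY")
exam = Examination(subject_id="SUBJ-001", exam_date=date(2016, 12, 20))
exam.files.append(DatasetFile(path="/data/subj001/t1.dcm", modality="MR", checksum="d41d8cd9"))
study.add_examination(exam)
print(len(study.examinations))  # 1
```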
Citations: 7
The Influence of Annotation, Corpus Design, and Evaluation on the Outcome of Automatic Classification of Human Emotions
Q1 Computer Science Pub Date : 2016-11-30 DOI: 10.3389/fict.2016.00027
Markus Kächele, Martin Schels, F. Schwenker
The integration of emotions into human-computer interaction applications promises a more natural dialog between users and the technical systems they operate. In order to construct such machinery, continuous measurement of the affective state of the user becomes essential. While basic research aimed at capturing and classifying affective signals has progressed, many issues still prevail that hinder the easy integration of affective signals into human-computer interaction. In this paper, we identify and investigate pitfalls in three steps of the workflow of affective classification studies. The first is the process of collecting affective data for the purpose of training suitable classifiers. Emotional data has to be created in which the target emotions are present, so human participants have to be stimulated suitably. We discuss the nature of these stimuli, their relevance to human-computer interaction, and the repeatability of the data recording setting. Second, aspects of annotation procedures are investigated, including the variance of individual raters, annotation delay, the impact of the annotation tool used, and how individual ratings are combined into a unified label. Finally, the evaluation protocol is examined, including, among other aspects, the impact of the performance measure on the accuracy of a classification model. We focus especially on the evaluation of classifier outputs against continuously annotated dimensions. Alongside the problems and pitfalls discussed and the ways they affect the outcome, we provide solutions and alternatives to overcome these issues. As a final part of the paper, we sketch a recording scenario and a set of supporting technologies that can contribute to solving many of the issues mentioned above.
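The abstract leaves the performance measure and the label fusion rule open; one measure commonly used for continuously annotated affective dimensions is the concordance correlation coefficient (CCC), and one simple fusion rule is the mean over raters. The sketch below, with made-up rating traces, only illustrates that evaluation step and is not the authors' protocol.

```python
import numpy as np

def concordance_cc(x: np.ndarray, y: np.ndarray) -> float:
    """Concordance correlation coefficient between two continuous traces."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical continuous valence ratings from three raters (one value per frame)
ratings = np.array([
    [0.1, 0.3, 0.5, 0.4, 0.2],
    [0.0, 0.2, 0.6, 0.5, 0.3],
    [0.2, 0.4, 0.4, 0.3, 0.1],
])
gold = ratings.mean(axis=0)  # one simple way to fuse individual ratings into a unified label

prediction = np.array([0.05, 0.25, 0.55, 0.45, 0.15])  # a classifier/regressor output
print(round(concordance_cc(prediction, gold), 3))
```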
Citations: 7
Making Tangible the Intangible: Hybridization of the Real and the Virtual to Enhance Learning of Abstract Phenomena
Q1 Computer Science Pub Date : 2016-11-28 DOI: 10.3389/fict.2016.00030
Stéphanie Fleck, M. Hachet
Interactive systems based on Augmented Reality (AR) and Tangible User Interfaces (TUI) hold great promise for enhancing the learning and understanding of abstract phenomena. In particular, they make it possible to take advantage of numerical simulation and pedagogical support while keeping the learner involved in true physical experimentation. In this paper, we present three examples based on AR and TUI in which the concepts to be learnt are difficult to perceive. The first one, Helios, targets K-12 learners in the field of astronomy. The second one, Hobit, is dedicated to experiments in wave optics. Finally, the third one, Teegi, allows one to learn more about brain activity. These three hybrid interfaces have emerged from a common basis that combines research and development work in the fields of Instructional Design and Human-Computer Interaction, from theoretical to practical aspects. On the basis of investigations carried out in real contexts of use, and on grounding work in education and HCI that corroborates the design choices made, we formalize how and why the hybridization of the real and the virtual can leverage the way learners understand intangible phenomena in science education.
Citations: 22
OpenVX-Based Python Framework for Real-time Cross-Platform Acceleration of Embedded Computer Vision Applications
Q1 Computer Science Pub Date : 2016-11-21 DOI: 10.3389/fict.2016.00028
Ori Heimlich, Elishai Ezra Tsur
Embedded real-time vision applications are being rapidly deployed across a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, that aims to provide both system- and kernel-level optimization for vision applications. With OpenVX, vision processing is modeled with coarse-grained data-flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented and functional programming, nor does it provide run-time type checking. Here we present a full Python-based implementation of OpenVX, which eliminates many of the discrepancies between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. Demonstrations include static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.
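The coarse-grained data-flow idea is that an application first declares a graph of vision kernels and then hands the whole graph to the runtime for optimized execution. The sketch below is not the OpenVX or PythonOpenVX API; it is a hypothetical, minimal illustration of that declare-then-execute pattern, with NumPy stand-ins for the kernels.

```python
import numpy as np

class Node:
    """One coarse-grained processing step in the graph."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

class Graph:
    """Declare-then-execute data-flow graph, in the spirit of OpenVX pipelines."""
    def __init__(self):
        self.nodes = []

    def add(self, fn, *inputs):
        node = Node(fn, *inputs)
        self.nodes.append(node)
        return node

    def execute(self, feed):
        # 'feed' maps external input names to arrays; nodes were added in topological order.
        results = dict(feed)
        for node in self.nodes:
            args = [results[src] for src in node.inputs]
            results[node] = node.fn(*args)
        return results[self.nodes[-1]]

def to_gray(img):
    """Stand-in kernel: average the color channels."""
    return img.mean(axis=2)

def box_blur(gray):
    """Stand-in kernel: 3x3 box filter via shifted copies."""
    shifts = [np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return sum(shifts) / 9.0

g = Graph()
gray_node = g.add(to_gray, "input")      # consumes the external "input" image
blur_node = g.add(box_blur, gray_node)   # consumes the output of the previous node

frame = np.random.rand(240, 320, 3)      # stands in for a camera frame
out = g.execute({"input": frame})
print(out.shape)                         # (240, 320)
```

Declaring the whole pipeline before running it is what lets a platform implementer fuse, tile, or offload the kernels as a unit rather than optimizing each call in isolation.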
Citations: 3
AFFECT: Altered-Fidelity Framework for Enhancing Cognition and Training
Q1 Computer Science Pub Date : 2016-11-17 DOI: 10.3389/fict.2016.00029
Ryan P. McMahan, Nicolas S. Herrera
In this paper, we present a new framework for analyzing and designing virtual reality (VR) techniques. This framework is based on two concepts: system fidelity (i.e., the degree to which real-world experiences are reproduced by a system) and memory (i.e., the formation and activation of perceptual, cognitive, and motor networks of neurons). The premise of the framework is to manipulate an aspect of system fidelity in order to assist a stage of memory. We call it the Altered-Fidelity Framework for Enhancing Cognition and Training (AFFECT). AFFECT provides nine categories of approaches to altering system fidelity to positively affect learning or training. These categories are based on the intersections of three aspects of system fidelity (interaction fidelity, scenario fidelity, and display fidelity) and three stages of memory (encoding, implicit retrieval, and explicit retrieval). In addition to discussing the details of our new framework, we show how AFFECT can be used as a tool for analyzing and categorizing VR techniques designed to facilitate learning or training. We also demonstrate how AFFECT can be used as a design space for creating new VR techniques intended for educational and training systems.
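Since the nine categories are simply the cross product of the three fidelity aspects and the three memory stages named in the abstract, the design space can be enumerated mechanically. The labels below are taken from the abstract; the wording of each generated line is only an illustration, not the framework's own terminology.

```python
from itertools import product

fidelity_aspects = ["interaction fidelity", "scenario fidelity", "display fidelity"]
memory_stages = ["encoding", "implicit retrieval", "explicit retrieval"]

# The nine AFFECT categories correspond to the cells of this 3x3 cross product.
for i, (aspect, stage) in enumerate(product(fidelity_aspects, memory_stages), start=1):
    print(f"{i}. alter {aspect} to support {stage}")
```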
Citations: 19
The Effect of Environmental Features, Self-Avatar, and Immersion on Object Location Memory in Virtual Environments
Q1 Computer Science Pub Date : 2016-11-03 DOI: 10.3389/fict.2016.00024
María Murcia-López, A. Steed
One potential application of virtual environments (VEs) is the training of spatial knowledge. A critical question is what features the VE should have in order to facilitate this training. Previous research has shown that people rely on environmental features, such as sockets and wall decorations, when learning object locations. The aim of this study is to explore the effect of varied environmental feature fidelity of VEs, the use of self-avatars, and the level of immersion on object location learning and recall. Following a between-subjects experimental design, participants were asked to learn the location of three identical objects by navigating one of three environments: a physical laboratory, or low- and high-detail VE replicas of this laboratory. Participants who experienced the VEs could use either a head-mounted display (HMD) or a desktop computer. Half of the participants learning in the HMD and desktop systems were assigned a virtual body. Participants were then asked to place physical versions of the three objects in the physical laboratory in the same configuration. We tracked participant movement, measured object placement, and administered a questionnaire related to aspects of the experience. HMD learning resulted in statistically significantly higher performance than desktop learning. Results indicate that, when learning in low-detail VEs, there is no difference in performance between participants using HMD and desktop systems. Overall, providing the participant with a virtual body had a negative impact on performance. Preliminary inspection of navigation data indicates that spatial learning strategies differ between systems with varying levels of immersion.
Citations: 33
Breaking Bad Behaviors: A New Tool for Learning Classroom Management Using Virtual Reality
Q1 Computer Science Pub Date : 2016-11-01 DOI: 10.3389/fict.2016.00026
Jean-Luc Lugrin, Marc Erich Latoschik, Michael Habel, D. Roth, Christian Seufert, Silke Grafe
This article presents an immersive Virtual Reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behaviour in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers, allowing lecturers to link theory with practice using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of the classroom by means of a head-mounted display and headphones. The instructor operates a graphical desktop console which renders a view of the class and the teacher, whose avatar movements are captured by a marker-less tracking system. This console includes a 2D graphics menu with convenient behaviour and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability, and mobility). Our initial results are promising and constitute a necessary first step toward investigating the efficiency and effectiveness of such a system in terms of learning outcomes and experience.
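The abstract describes the virtual students as semi-autonomous and steerable from the instructor's console, but gives no implementation details. The following is a purely hypothetical sketch of how instructor-controlled disruption levels might drive per-student behaviour; the state names and classes are illustrative and not the system's actual design.

```python
import random
from dataclasses import dataclass

# Hypothetical behaviour states; the paper does not publish its actual behaviour model.
BEHAVIORS = ["attentive", "whispering", "using phone", "shouting"]

@dataclass
class VirtualStudent:
    name: str
    disruption_level: float = 0.1   # 0 = always attentive, 1 = constantly disruptive
    behavior: str = "attentive"

    def update(self) -> None:
        """Semi-autonomous step: mostly attentive, occasionally disruptive."""
        if random.random() < self.disruption_level:
            self.behavior = random.choice(BEHAVIORS[1:])
        else:
            self.behavior = "attentive"

class InstructorConsole:
    """Stand-in for the instructor console that steers the class's behaviour."""
    def __init__(self, students):
        self.students = students

    def set_disruption(self, level: float) -> None:
        for s in self.students:
            s.disruption_level = level

    def tick(self) -> None:
        for s in self.students:
            s.update()

classroom = [VirtualStudent(f"student_{i:02d}") for i in range(24)]
console = InstructorConsole(classroom)
console.set_disruption(0.4)   # the instructor raises the difficulty of the session
console.tick()
print(sum(s.behavior != "attentive" for s in classroom), "students are currently disruptive")
```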
Citations: 76
The Effects of the Use of Serious Game in Eco-Driving Training
Q1 Computer Science Pub Date : 2016-10-27 DOI: 10.3389/fict.2016.00022
H. Hrimech, Sabrina Beloufa, F. Mérienne, J. Boucheix, Fabrice Cauchard, Joël Vedrenne, A. Kemeny
Serious games present a promising approach to training and learning: the player is engaged in a virtual environment for a purpose beyond pure entertainment, all while having fun. In this paper, we investigate the effects of the use of a serious game in eco-driving training. An approach has been developed to improve players' practical skills in terms of eco-driving. This approach is based on the development of a driving simulation built as a serious game, integrating a multisensory guidance system with metaphors including visual messages (information on fuel consumption, ideal speed area, gearbox management, etc.) and sounds (spatialized sounds, voice messages, etc.). The results demonstrate that the serious game positively influences the behavior of inexperienced drivers in ecological driving, leading to a significant reduction (up to 10%) in their CO2 emissions. This work also provides some guidelines for the design process. The experiments lead to a determination of the best eco-driving rules, allowing a significant reduction of CO2 emissions.
Citations: 2
Shanoir: Applying the Software as a Service Distribution Model to Manage Brain Imaging Research Repositories
Q1 Computer Science Pub Date : 2016-10-20 DOI: 10.3389/fict.2016.00025
C. Barillot, E. Bannier, O. Commowick, I. Corouge, Anthony Baire, I. Fakhfakh, Justine Guillaumont, Yao Yao, Michael Kain
Two of the major concerns of researchers and clinicians performing neuroimaging experiments are managing the huge quantity and diversity of data and the ability to compare their experiments and the programs they develop with those of their peers. In this context, we introduce Shanoir, which uses a type of cloud computing known as software as a service (SaaS) to manage neuroimaging data used in the clinical neurosciences. Thanks to a formal model of medical imaging data (an ontology), Shanoir provides an open source neuroinformatics environment designed to structure, manage, archive, visualize and share neuroimaging data with an emphasis on managing multi-institutional, collaborative research projects. This article covers how images are accessed through the Shanoir Data Management System and describes the data repositories that are hosted and managed by the Shanoir environment in different contexts.
Citations: 15
Monocular, Boundary-Preserving Joint Recovery of Scene Flow and Depth
Q1 Computer Science Pub Date : 2016-09-30 DOI: 10.3389/fict.2016.00021
Y. Mathlouthi, A. Mitiche, Ismail Ben Ayed
Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as others have required, was investigated in Mitiche et al. (2015) using an integral functional with a term expressing the conformity of scene flow and depth to the spatiotemporal variations of the image sequence, and L2 regularization terms for a smooth depth field and scene flow. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method, except that the unknowns were depth and scene flow rather than optical flow. Several examples were given to show the basic potency of the method: it was able to recover good depth and motion, except at their boundaries, because L2 regularization is blind to discontinuities and smooths them indiscriminately. The method we study in this paper generalizes the formulation of Mitiche et al. (2015) to L1 regularization so that it computes boundary-preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are also computed from the recorded image sequence by a variational method that uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler-Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are demonstrated in experiments with real and synthetic images, which show the results of L1 versus L2 regularization of depth and motion, as well as the results of using L1 rather than L2 regularization of the image derivatives.
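For readers unfamiliar with the contrast the abstract draws, a generic energy (not the authors' exact functional, which couples depth and scene flow) for a single field u over a domain Omega shows where the two schemes differ; here rho(u) stands for the data/conformity residual and lambda > 0 weighs the regularizer.

```latex
E_{L_2}(u) \;=\; \int_{\Omega} \rho(u)^2 \, dx \;+\; \lambda \int_{\Omega} \lVert \nabla u \rVert^2 \, dx,
\qquad
E_{L_1}(u) \;=\; \int_{\Omega} \rho(u)^2 \, dx \;+\; \lambda \int_{\Omega} \lVert \nabla u \rVert \, dx.
```

The quadratic regularizer penalizes large gradients quadratically and therefore smooths across depth and motion boundaries, whereas the L1 (total-variation-like) regularizer penalizes them only linearly and lets discontinuities persist, which is the boundary-preserving behavior the paper targets.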
Citations: 0