Context: Clinical research in imaging requires collecting, storing, organizing, and processing large amounts of varied data in accordance with legal requirements and research obligations. In practice, many laboratories and clinical research centers working in the imaging domain have to manage innumerable images and their associated data without sufficient IT (Information Technology) skills and resources to develop and maintain a robust software solution. Since conventional infrastructures and data storage systems for medical images, such as the “Picture Archiving and Communication System” (PACS), may not be compatible with research needs, we propose a solution: ArchiMed, a complete storage and visualization solution developed for clinical research. Material and methods: ArchiMed is a service-oriented server application written in Java EE that is integrated into local clinical environments (imaging devices, post-processing workstations, and other devices) and allows data to be safely collected from collaborating centers. It ensures storage of all kinds of imaging data with a “study-centered” approach, quality control, and interfacing with mainstream image analysis research tools. Results: With more than 10 million archived files, about 4 TB of storage, and 116 studies, ArchiMed has been in operation for 5 years at the CIC-IT of Nancy, France, and is used every day by about 60 people, including engineers, researchers, clinicians, and clinical trial project managers.
{"title":"ArchiMed: A Data Management System for Clinical Research in Imaging","authors":"E. Micard, Damien Husson, J. Felblinger","doi":"10.3389/fict.2016.00031","DOIUrl":"https://doi.org/10.3389/fict.2016.00031","url":null,"abstract":"Context: There is a great need in clinical research with imaging to collect, to store, to organize and to process large amount of varied data according to legal requirements and research obligations. In practice, many laboratories or clinical research centers working in imaging domain have to manage innumerous images and their associated data without having sufficient IT (Information Technology) skills and resources to develop and to maintain a robust software solution. Since conventional infrastructure and data storage systems for medical image such as “Picture Archiving and Communication System” (PACS) may not be compatible with research needs, we propose a solution: ArchiMed, a complete storage and visualization solution developed for clinical research. Material and methods: ArchiMed is a service oriented server application written in Java EETM which is integrated into local clinical environments (imaging devices, post-processing workstations, others devices...) and allows to safely collect data from other collaborative centers. It ensures all kinds of imaging data storage with a “study centered” approach, quality control and interfacing with mainstream image analysis research tools. Results: With more than 10 millions of archived files for about 4TB stored with 116 studies, ArchiMed, in function for 5 years at CIC-IT of Nancy-France, is used every day by about 60 persons, among whom are engineers, researchers, clinicians and clinical trial project managers.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"7 1","pages":"31"},"PeriodicalIF":0.0,"publicationDate":"2016-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75459978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The integration of emotions into human-computer interaction applications promises a more natural dialog between users and the technical systems they operate. In order to construct such machinery, continuous measurement of the affective state of the user becomes essential. While basic research aimed at capturing and classifying affective signals has progressed, many issues still prevail that hinder the easy integration of affective signals into human-computer interaction. In this paper, we identify and investigate pitfalls in three steps of the workflow of affective classification studies. The first is the process of collecting affective data for the purpose of training suitable classifiers: emotional data has to be created in which the target emotions are present, and human participants therefore have to be stimulated suitably. We discuss the nature of these stimuli, their relevance to human-computer interaction, and the repeatability of the data recording setting. Second, aspects of annotation procedures are investigated, including the variance of individual raters, annotation delay, the impact of the annotation tool used, and how individual ratings are combined into a unified label. Finally, the evaluation protocol is examined, which includes, among other things, the impact of the performance measure on the accuracy of a classification model. We focus especially on the evaluation of classifier outputs against continuously annotated dimensions. Alongside the discussed problems and pitfalls and the ways in which they affect the outcome, we provide solutions and alternatives to overcome these issues. As a final part of the paper, we sketch a recording scenario and a set of supporting technologies that can help solve many of the issues mentioned above.
{"title":"The Influence of Annotation, Corpus Design, and Evaluation on the Outcome of Automatic Classification of Human Emotions","authors":"Markus Kächele, Martin Schels, F. Schwenker","doi":"10.3389/fict.2016.00027","DOIUrl":"https://doi.org/10.3389/fict.2016.00027","url":null,"abstract":"The integration of emotions into human computer interaction applications promises a more natural dialog between the user and the technical system he operates. In order to construct such machinery, continuous measuring of the affective state of the user becomes essential. While basic research that is aimed to capture and classify affective signals has progressed, many issues are still prevailing that hinder easy integration of affective signals into human-computer interaction. In this paper, we identify and investigate pitfalls in three steps of the work-flow of affective classification studies. It starts with the process of collecting affective data for the purpose of training suitable classifiers. Emotional data has to be created in which the target emotions are present. Therefore, human participants have to be stimulated suitably. We discuss the nature of these stimuli, their relevance to human-computer interaction and the repeatability of the data recording setting. Second, aspects of annotation procedures are investigated, which include the variances of individual raters, annotation delay, the impact of the used annotation tool and how individual ratings are combined to a unified label. Finally, the evaluation protocol is examined which includes, amongst others, the impact of the performance measure on the accuracy of a classification model. We hereby focus especially on the evaluation of classifier outputs against continuously annotated dimensions. Alongside the discussed problems and pitfalls and the ways how they affect the outcome, we provide solutions and alternatives to overcome these issues. As a final part of the paper we sketch a recording scenario and a set of supporting technologies that can contribute to solve many of the issues mentioned above.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"45 1","pages":"27"},"PeriodicalIF":0.0,"publicationDate":"2016-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86541806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive systems based on Augmented Reality (AR) and Tangible User Interfaces (TUI) hold great promise for enhancing the learning and understanding of abstract phenomena. In particular, they make it possible to take advantage of numerical simulation and pedagogical supports while keeping the learner involved in true physical experimentation. In this paper, we present three examples based on AR and TUI in which the concepts to be learned are difficult to perceive. The first, Helios, targets K-12 learners in the field of astronomy. The second, Hobit, is dedicated to experiments in wave optics. Finally, the third, Teegi, allows one to learn more about brain activity. These three hybrid interfaces have emerged from a common basis that jointly combines research and development work in the fields of Instructional Design and Human-Computer Interaction, from theoretical to practical aspects. On the basis of investigations carried out in real contexts of use, and of the grounding work in education and HCI that corroborates the design choices made, we formalize how and why the hybridization of the real and the virtual improves the way learners understand intangible phenomena in science education.
{"title":"Making Tangible the Intangible: Hybridization of the Real and the Virtual to Enhance Learning of Abstract Phenomena","authors":"Stéphanie Fleck, M. Hachet","doi":"10.3389/fict.2016.00030","DOIUrl":"https://doi.org/10.3389/fict.2016.00030","url":null,"abstract":"Interactive systems based on Augmented Reality (AR) and Tangible User Interfaces (TUI) hold great promise for enhancing the learning and understanding of abstract phenomena. In particular, they enable to take advantage of numerical simulation and pedagogical supports, while keeping the learner involved in true physical experimentations. In this paper, we present three examples based on AR and TUI, where the concepts to be learnt are difficult to perceive. The first one, Helios, targets K-12 learners in the field of astronomy. The second one, Hobit is dedicated to experiments in wave optics. Finally, the third one, Teegi, allows one to get to know more about brain activity. These three hybrid interfaces have emerged from a common basis that jointly combines research and development work in the fields of Instructional Design and Human-Computer Interaction, from theoretical to practical aspects. On the basis of investigations carried out in real context of use and on the grounding works in education and HCI which corroborate the design choices that were made, we formalize how and why the hybridization of the real and the virtual enables to leverage the way learners understand intangible phenomena in Sciences education.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"142 1","pages":"30"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80449059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, that aims to provide both system-level and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained dataflow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative, and functional programming, nor does it have run-time or type checking. Here we present a full Python-based implementation of OpenVX, which eliminates much of the discrepancy between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. Demonstrations include static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.
{"title":"OpenVX-Based Python Framework for Real-time Cross-Platform Acceleration of Embedded Computer Vision Applications","authors":"Ori Heimlich, Elishai Ezra Tsur","doi":"10.3389/fict.2016.00028","DOIUrl":"https://doi.org/10.3389/fict.2016.00028","url":null,"abstract":"Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered as a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, in an attempt to provide both system and kernel level optimization to vision applications. With OpenVX, Vision processing are modeled with coarse-grained data flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative and functional programming, nor does it have runtime or type-checking. Here we present a python-based full Implementation of OpenVX, which eliminates much of the discrepancies between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications in embedded platforms. Demonstration includes static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. Code project and linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"1 1","pages":"28"},"PeriodicalIF":0.0,"publicationDate":"2016-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90857406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a new framework for analyzing and designing virtual reality (VR) techniques. This framework is based on two concepts—system fidelity (i.e., the degree to which real-world experiences are reproduced by a system) and memory (i.e., the formation and activation of perceptual, cognitive, and motor networks of neurons). The premise of the framework is to manipulate an aspect of system fidelity in order to assist a stage of memory. We call it the Altered-Fidelity Framework for Enhancing Cognition and Training (AFFECT). AFFECT provides nine categories of approaches to altering system fidelity to positively affect learning or training. These categories are based on the intersections of three aspects of system fidelity (interaction fidelity, scenario fidelity, and display fidelity) and three stages of memory (encoding, implicit retrieval, and explicit retrieval). In addition to discussing the details of our new framework, we show how AFFECT can be used as a tool for analyzing and categorizing VR techniques designed to facilitate learning or training. We also demonstrate how AFFECT can be used as a design space for creating new VR techniques intended for educational and training systems.
{"title":"AFFECT: Altered-Fidelity Framework for Enhancing Cognition and Training","authors":"Ryan P. McMahan, Nicolas S. Herrera","doi":"10.3389/fict.2016.00029","DOIUrl":"https://doi.org/10.3389/fict.2016.00029","url":null,"abstract":"In this paper, we present a new framework for analyzing and designing virtual reality (VR) techniques. This framework is based on two concepts—system fidelity (i.e., the degree with which real-world experiences are reproduced by a system) and memory (i.e., the formation and activation of perceptual, cognitive, and motor networks of neurons). The premise of the framework is to manipulate an aspect of system fidelity in order to assist a stage of memory. We call it the Altered-Fidelity Framework for Enhancing Cognition and Training (AFFECT). AFFECT provides nine categories of approaches to altering system fidelity to positively affect learning or training. These categories are based on the intersections of three aspects of system fidelity (interaction fidelity, scenario fidelity, and display fidelity) and three stages of memory (encoding, implicit retrieval, and explicit retrieval). In addition to discussing the details of our new framework, we show how AFFECT can be used as a tool for analyzing and categorizing VR techniques designed to facilitate learning or training. We also demonstrate how AFFECT can be used as a design space for creating new VR techniques intended for educational and training systems.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"9 1","pages":"29"},"PeriodicalIF":0.0,"publicationDate":"2016-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82126029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One potential application of virtual environments (VEs) is the training of spatial knowledge. A critical question is what features the VE should have in order to facilitate this training. Previous research has shown that people rely on environmental features, such as sockets and wall decorations, when learning object locations. The aim of this study is to explore the effects of the environmental feature fidelity of VEs, the use of self-avatars, and the level of immersion on object location learning and recall. Following a between-subjects experimental design, participants were asked to learn the location of three identical objects by navigating one of three environments: a physical laboratory, or low- and high-detail VE replicas of this laboratory. Participants who experienced the VEs used either a head-mounted display (HMD) or a desktop computer. Half of the participants learning in the HMD and desktop systems were assigned a virtual body. Participants were then asked to place physical versions of the three objects in the physical laboratory in the same configuration. We tracked participant movement, measured object placement, and administered a questionnaire related to aspects of the experience. HMD learning resulted in statistically significantly higher performance than desktop learning. Results indicate that, when learning in low-detail VEs, there is no difference in performance between participants using HMD and desktop systems. Overall, providing the participant with a virtual body had a negative impact on performance. Preliminary inspection of navigation data indicates that spatial learning strategies differ between systems with varying levels of immersion.
{"title":"The Effect of Environmental Features, Self-Avatar, and Immersion on Object Location Memory in Virtual Environments","authors":"María Murcia-López, A. Steed","doi":"10.3389/fict.2016.00024","DOIUrl":"https://doi.org/10.3389/fict.2016.00024","url":null,"abstract":"One potential application for virtual environments (VEs) is the training of spatial knowledge. A critical question is what features the VE should have in order to facilitate this training. Previous research has shown that people rely on environmental features, such as sockets and wall decorations, when learning object locations. The aim of this study is to explore the effect of varied environmental feature fidelity of VEs, the use of self-avatars and the level of immersion on object location learning and recall. Following a between-subjects experimental design, participants were asked to learn the location of three identical objects by navigating one of three environments: a physical laboratory, or low and high detail VE replicas of this laboratory. Participants who experienced the VEs could use either a head-mounted display (HMD) or a desktop computer. Half of the participants learning in the HMD and desktop systems were assigned a virtual body. Participants were then asked to place physical versions of the three objects in the physical laboratory in the same configuration. We tracked participant movement, measured object placement, and administered a questionnaire related to aspects of the experience. HMD learning resulted in statistically significant higher performance than desktop learning. Results indicate that, when learning in low detail VEs, there is no difference in performance between participants using HMD and desktop systems. Overall, providing the participant with a virtual body had a negative impact on performance. Preliminary inspection of navigation data indicates that spatial learning strategies are different in systems with varying levels of immersion.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"3 1","pages":"24"},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86094848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article presents an immersive Virtual Reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behaviour in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers, whereby it allows lecturers to link theory with practice using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of a classroom by means of a head-mounted display and headphones. The instructor operates a graphical desktop console that renders a view of the class and of the teacher, whose avatar movements are captured by a marker-less tracking system. This console includes a 2D graphics menu with convenient behaviour and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability, and mobility). Our initial results are promising and constitute the necessary first step toward a possible investigation of the efficiency and effectiveness of such a system in terms of learning outcomes and experience.
{"title":"Breaking Bad Behaviors: A New Tool for Learning Classroom Management Using Virtual Reality","authors":"Jean-Luc Lugrin, Marc Erich Latoschik, Michael Habel, D. Roth, Christian Seufert, Silke Grafe","doi":"10.3389/fict.2016.00026","DOIUrl":"https://doi.org/10.3389/fict.2016.00026","url":null,"abstract":"This article presents an immersive Virtual Reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behaviour in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom, populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers. Whereby, it will allow lecturers to link theory with practice, using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of a classroom by means of a head-mounted display and headphone. The instructor operates a graphical desktop console which renders a view of the class and the teacher, whose avatar movements are captured by a marker-less tracking system. This console includes a 2D graphics menu with convenient behaviour and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability and mobility). Our initial results are promising, and constitute the necessary first step toward a possible investigation of the efficiency and effectiveness of such a system in terms of learning outcomes and experience.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"140 1","pages":"26"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86254896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serious games present a promising approach to training and learning: the player is engaged in a virtual environment for a purpose beyond pure entertainment, all while having fun. In this paper, we investigate the effects of using a serious game in eco-driving training. An approach has been developed to improve players’ practical eco-driving skills. This approach is based on the development of a serious-game driving simulation that integrates a multisensory guidance system with metaphors, including visual messages (information on fuel consumption, ideal speed range, gearbox management, etc.) and sounds (spatialized sounds, voice messages, etc.). The results demonstrate that the serious game positively influences the behavior of inexperienced drivers with respect to ecological driving, leading to a significant reduction (up to 10%) of their CO2 emissions. This work also provides some guidelines for the design process. The experiments led to a determination of the best eco-driving rules allowing a significant reduction of CO2 emissions.
{"title":"The Effects of the Use of Serious Game in Eco-Driving Training","authors":"H. Hrimech, Sabrina Beloufa, F. Mérienne, J. Boucheix, Fabrice Cauchard, Joël Vedrenne, A. Kemeny","doi":"10.3389/fict.2016.00022","DOIUrl":"https://doi.org/10.3389/fict.2016.00022","url":null,"abstract":"Serious games present a promising approach to training and learning. The player is engaged in a virtual environment for a purpose beyond pure entertainment, all while having fun. In this paper, we investigate the effects of the use of serious game in eco-driving training. An approach has been developed in order to improve players’ practical skills in term of eco driving. This approach is based on the development of driving simulation based on serious game, integrating a multisensorial guidance system with metaphors including visual messages (information on fuel consumption, ideal speed area, gearbox management…) and sounds (spatialized sounds, voice messages…). The results demonstrate that the serious game influence positively the behavior of inexperienced drivers in ecological driving, leading to a significant reduction (up to 10%) of their CO2 emission. This work brings also some guidelines for the design process. The experiences lead to a determination of the best eco-driving rules allowing a significant reduction of CO2 emission.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"45 1","pages":"22"},"PeriodicalIF":0.0,"publicationDate":"2016-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75701076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two of the major concerns of researchers and clinicians performing neuroimaging experiments are managing the huge quantity and diversity of data and being able to compare their experiments, and the programs they develop, with those of their peers. In this context, we introduce Shanoir, which uses a type of cloud computing known as software as a service (SaaS) to manage neuroimaging data used in the clinical neurosciences. Thanks to a formal model of medical imaging data (an ontology), Shanoir provides an open-source neuroinformatics environment designed to structure, manage, archive, visualize, and share neuroimaging data, with an emphasis on managing multi-institutional, collaborative research projects. This article covers how images are accessed through the Shanoir Data Management System and describes the data repositories that are hosted and managed by the Shanoir environment in different contexts.
{"title":"Shanoir: Applying the Software as a Service Distribution Model to Manage Brain Imaging Research Repositories","authors":"C. Barillot, E. Bannier, O. Commowick, I. Corouge, Anthony Baire, I. Fakhfakh, Justine Guillaumont, Yao Yao, Michael Kain","doi":"10.3389/fict.2016.00025","DOIUrl":"https://doi.org/10.3389/fict.2016.00025","url":null,"abstract":"Two of the major concerns of researchers and clinicians performing neuroimaging experiments are managing the huge quantity and diversity of data and the ability to compare their experiments and the programs they develop with those of their peers. In this context, we introduce Shanoir, which uses a type of cloud computing known as software as a service (SaaS) to manage neuroimaging data used in the clinical neurosciences. Thanks to a formal model of medical imaging data (an ontology), Shanoir provides an open source neuroinformatics environment designed to structure, manage, archive, visualize and share neuroimaging data with an emphasis on managing multi-institutional, collaborative research projects. This article covers how images are accessed through the Shanoir Data Management System and describes the data repositories that are hosted and managed by the Shanoir environment in different contexts.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"11 1","pages":"25"},"PeriodicalIF":0.0,"publicationDate":"2016-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78703424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as required by other methods, was investigated in Mitiche et al. (2015) using an integral functional with a term of conformity of scene flow and depth to the spatiotemporal variations of the image sequence, and L2 regularization terms for a smooth depth field and scene flow. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method, except that the unknowns were depth and scene flow rather than optical flow. Several examples were given to show the basic potency of the method: it was able to recover good depth and motion, except at their boundaries, because L2 regularization is blind to discontinuities, which it smooths indiscriminately. The method we study in this paper generalizes the formulation of Mitiche et al. (2015) to L1 regularization, so that it computes boundary-preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are also computed from the recorded image sequence by a variational method that uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler–Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are demonstrated in experiments with real and synthetic images, which show the results of L1 versus L2 regularization of depth and motion, as well as the results of using L1 rather than L2 regularization of the image derivatives.
{"title":"Monocular, Boundary-Preserving Joint Recovery of Scene Flow and Depth","authors":"Y. Mathlouthi, A. Mitiche, Ismail Ben Ayed","doi":"10.3389/fict.2016.00021","DOIUrl":"https://doi.org/10.3389/fict.2016.00021","url":null,"abstract":"Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as others required, was investigated in Mitiche et al. (2015) using an integral functional with a term of conformity of scene flow and depth to the image sequence spatiotemporal variations, and L2 regularization terms for smooth depth field and scene flow. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method except that the unknowns were depth and scene flow rather than optical flow. Several examples were given to show the basic potency of the method: It was able to recover good depth and motion, except at their boundaries because L2 regularization is blind to discontinuities which it smooths indiscriminately. The method we study in this paper generalizes to L1 regularization the formulation of Mitiche et al. (2015) so that it computes boundary preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are computed from the recorded image sequence also by a variational method which uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler-Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are put in evidence in experimentation with real and synthetic images which shows the results of L1 versus L2 regularization of depth and motion, as well as the results using L1 rather than L2 regularization of image derivatives.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"68 1","pages":"21"},"PeriodicalIF":0.0,"publicationDate":"2016-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85386813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}