There are expectations that stem cell therapy (SCT) will treat many currently untreatable diseases. The Internet is widely used by patients seeking information about new treatments; hence, the websites they encounter constitute a representative sample of the information available to the public. Our aim was to understand what information the public would find when searching Google for SCT, as this would inform us about how lay people form their knowledge of SCT. We analyzed the content and information quality of the first 200 websites returned by a Google.com search on SCT. Most websites returned were from treatment centers (TCs, 44%), followed by news and medical professional websites. The specialty most often mentioned in non-TC websites was “neurological” (67%), followed by “cardiovascular” (42%), whereas the most frequent indication for which TCs offered SCT was musculoskeletal (89%), followed by neurological (47%). Of the centers, 45% specialized in a single specialty, 10% in two, and 45% offered between 3 and 18 different specialties. Of the 78 treatment centers, 65% were in the USA, 23% in Asia, and 8% in Latin America. None of the centers offered SCT based on embryonic cells. Health information quality (JAMA score, a measure of trustworthiness) was lowest for TCs and commercial websites and highest for scientific journals and health portals. This study shows a disconnect between the information published about SCT and what TCs actually offer. It also shows that TCs, potentially operating in a regulatory grey area, have high visibility on the Internet.
Douglas Meehan, I. Bizzi, P. Ghezzi. "Stem Cell Therapy on the Internet: Information Quality and Content Analysis of English Language Web Pages Returned by Google." Frontiers in ICT, 2017-12-21, p. 28. doi:10.3389/fict.2017.00028
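The JAMA benchmark mentioned in the abstract is commonly operationalized as four binary criteria (authorship, attribution, disclosure, currency), giving a 0–4 trustworthiness score. As an illustration of how such a content analysis might be tallied, here is a minimal Python sketch; the page records and category labels are hypothetical examples, not data from the study:

```python
# The four JAMA benchmarks, each scored as present (True) or absent (False).
JAMA_CRITERIA = ("authorship", "attribution", "disclosure", "currency")

def jama_score(page):
    """JAMA score of one website: number of benchmarks it satisfies (0-4)."""
    return sum(bool(page.get(c)) for c in JAMA_CRITERIA)

def mean_score_by_category(pages):
    """Average JAMA score per website category (e.g. TC, journal, portal)."""
    totals = {}
    for p in pages:
        s, n = totals.get(p["category"], (0, 0))
        totals[p["category"]] = (s + jama_score(p), n + 1)
    return {cat: s / n for cat, (s, n) in totals.items()}

# Hypothetical coded pages, in the spirit of the study's rating scheme.
pages = [
    {"category": "TC", "authorship": True, "attribution": False,
     "disclosure": False, "currency": False},
    {"category": "journal", "authorship": True, "attribution": True,
     "disclosure": True, "currency": True},
]
```

With these toy records, `mean_score_by_category(pages)` would rank the journal above the treatment center, mirroring the pattern the study reports.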
This study discusses the concept of virtual selves created in virtual spaces (e.g., Social Network Services or Virtual Reality). It analyzes activities in different virtual spaces and claims that experience gained there can be transferred to real life. In this respect, the effects of Virtual Reality treatment on the self and the concept of creating a life story are analyzed as interconnected. The research question arising from these considerations is how to look at psychological trauma in order to explain the effectiveness of Virtual Reality in treating traumatic disorders. The study proposes to see trauma as a shift in the normal storyline of the narrative people create. With this concept in mind, it might be possible to support the claim that reliving traumatic events, regaining control over one's life narrative, and creating new stories in Virtual Reality aid the treatment process in the search for meaning and resolution in life events. Considering the findings of researchers in the fields of self-narrative and trauma treatment, as well as research on virtual selves, virtual spaces, and Virtual Reality, this study discusses the virtual as a possible medium for experiencing narratives and using them as better explanatory stories to facilitate the therapeutic process of recovery and self-recreation. The study supports the idea that Virtual Reality can be used to visualize patients' narratives and help them perceive themselves as active authors of their life story by retelling traumatic episodes with additional explanation. This experience in Virtual Reality is used to form healthier narratives and coping techniques for robust therapeutic results that transfer to real life.
Iva Georgieva. "Trauma and Self-Narrative in Virtual Reality: Toward Recreating a Healthier Mind." Frontiers in ICT, 2017-12-07, p. 27. doi:10.3389/fict.2017.00027
Carlos M. Duarte, Simon Desart, David Costa, Bruno Dumas
While mobile devices have seen important accessibility advances in recent years, people with visual impairments still face significant barriers, especially in contexts where their hands are not free to hold the mobile device, such as when walking outside. By resorting to a multimodal combination of body-based gestures and voice, we aim to achieve fully hands- and vision-free interaction with mobile devices. In this paper, we describe this vision and present the design of a prototype text messaging application inspired by it. The paper also presents a user study in which the suitability of the proposed approach was assessed and the performance of our prototype was compared with existing SMS applications. Study participants received the prototype positively, and it also supported better performance in tasks that involved text editing.
Carlos M. Duarte, Simon Desart, David Costa, Bruno Dumas. "Designing Multimodal Mobile Interaction for a Text Messaging Application for Visually Impaired Users." Frontiers in ICT, 2017-12-06, p. 26. doi:10.3389/fict.2017.00026
Students' transition to tertiary education plays a critical role in their overall post-secondary experience. Even though educational institutions have designed and implemented various transition support programs, most still struggle to collect detailed information and provide tailored, timely support to students. With the high adoption rate of smartphones among university students, mobile applications can be used as a platform to provide personalized support throughout the transition, which has the potential to address the shortcomings of existing programs. Moreover, the use of mobile applications to support the transition to tertiary education can benefit from emerging techniques for designing applications that support individuals through transition processes. In this paper, we present the design and development process of myUniMate, a mobile application that allows students to track and reflect on information from multiple aspects of their university lives. The paper describes the user-centred design (UCD) approach used, the implementation process, and how the initial version evolved based on our previous study. We conducted a four-week field trial with first-year university students to validate our design.
Yu Zhao, A. Pardo. "Evolving the Design of a Mobile Application to Support Transition to Tertiary Education." Frontiers in ICT, 2017-11-22, p. 25. doi:10.3389/fict.2017.00025
David Quigley, Conor McNamara, Jonathan L. Ostwald, T. Sumner
Scientific models represent ideas, processes, and phenomena by describing important components, characteristics, and interactions. Models are constructed across a variety of scientific disciplines, such as the food web in biology, the water cycle in Earth science, or the structure of the solar system in astronomy. Models are central for scientists to understand phenomena, construct explanations, and communicate theories. Constructing and using models to explain scientific phenomena is also an essential practice in contemporary science classrooms. Our research explores new techniques for understanding scientific modeling and engagement with modeling practices. We work with students in secondary biology classrooms as they use a web-based software tool, EcoSurvey, to characterize organisms and their interrelationships found in their local ecosystem. We use learning analytics and machine learning techniques to answer the following questions: 1) How can we automatically measure the extent to which students' scientific models support complete explanations of phenomena? 2) How does the design of student modeling tools influence the complexity and completeness of students' models? 3) How do clickstreams reflect and differentiate student engagement with modeling practices? We analyzed EcoSurvey usage data collected from two different deployments with over 1,000 secondary students across a large urban school district. We observe large variations in the completeness and complexity of student models, and large variations in their iterative refinement processes. These differences reveal that certain key model features are highly predictive of other aspects of the model. We also observe large differences in student modeling practices across different classrooms and teachers. We can predict a student's teacher from the observed modeling practices with a high degree of accuracy, without significant tuning of the predictive model.
These results highlight the value of this approach for extending our understanding of student engagement with scientific modeling, an important contemporary science practice, as well as the potential value of analytics for identifying critical differences in classroom implementation.
David Quigley, Conor McNamara, Jonathan L. Ostwald, T. Sumner. "Using Learning Analytics to Understand Scientific Modeling in the Classroom." Frontiers in ICT, 2017-11-01, p. 24. doi:10.3389/fict.2017.00024
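The abstract does not say which predictive model was used to identify a student's teacher from modeling practices. As a sketch of how such a prediction could work on simple per-student activity counts, here is a nearest-centroid classifier in Python; the feature names and numbers are hypothetical, not taken from the EcoSurvey data:

```python
import numpy as np

def fit_centroids(X, y):
    """Mean feature vector (centroid) of the students in each teacher's class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict_teacher(centroids, X):
    """Assign each student to the teacher whose class centroid is nearest."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return [labels[i] for i in dists.argmin(axis=0)]

# Hypothetical per-student features: [edits per session, organisms added, links drawn]
X_train = np.array([[2.0, 5.0, 1.0], [3.0, 6.0, 2.0],
                    [9.0, 1.0, 7.0], [8.0, 2.0, 6.0]])
y_train = np.array(["teacher_A", "teacher_A", "teacher_B", "teacher_B"])

centroids = fit_centroids(X_train, y_train)
```

The point of the sketch is the shape of the finding, not the method: if classrooms differ systematically in modeling practice, even a model this simple separates them with little or no tuning.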
Electronic travel aids (ETAs) can potentially increase the safety and comfort of blind users by detecting and displaying obstacles outside the range of the white cane. In a series of experiments, we aim to balance the amount of information displayed against the comprehensibility of the information, taking into account the risk of information overload. In Experiment 1, we investigate perception of compound signals displayed on a tactile vest while walking. The results confirm that the threat of information overload is clear and present. Tactile coding parameters that are sufficiently discriminable in isolation may not be so in compound signals while walking and using the white cane. Horizontal tactor location is a strong coding parameter, and temporal pattern is the preferred secondary coding parameter. Vertical location is also possible as a coding parameter, but it requires additional tactors and makes the display hardware more complex, more expensive, and less user-friendly. In Experiment 2, we investigate how we can off-load the tactile modality by shifting part of the information to an auditory display. Off-loading the tactile modality through auditory presentation is possible, but this off-loading is limited and may result in a new threat of auditory overload. In addition, taxing the auditory channel may in turn interfere with other auditory cues from the environment. In Experiment 3, we off-load the tactile sense by reducing the amount of displayed information using several filter rules. The resulting design was evaluated in Experiment 4 with visually impaired users. Although they acknowledge the potential of the display, the added value of the ETA as a whole also depends on its sensor and object recognition capabilities. We recommend using no more than two coding parameters in a tactile compound message and applying filter rules to reduce the number of obstacles displayed in an obstacle avoidance ETA.
J. V. Erp, L. Kroon, T. Mioch, K. I. Paul. "Obstacle Detection Display for Visually Impaired: Coding of Direction, Distance, and Height on a Vibrotactile Waist Band." Frontiers in ICT, 2017-10-13, p. 23. doi:10.3389/fict.2017.00023
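The recommendation above (at most two coding parameters, plus filter rules to limit how many obstacles are displayed) can be made concrete with a small sketch. The sector width, distance thresholds, and rule set below are hypothetical stand-ins, not the parameters used in the experiments:

```python
def filter_obstacles(obstacles, max_distance=4.0, min_height=0.0):
    """Reduce the obstacle set before display, per two hypothetical filter rules.

    obstacles: dicts with 'direction' (degrees, 0 = straight ahead),
    'distance' (m), and 'height' (m above the ground).
    """
    # Rule 1: drop obstacles beyond display range or below the chosen height cutoff.
    in_range = [o for o in obstacles
                if o["distance"] <= max_distance and o["height"] >= min_height]
    # Rule 2: keep only the nearest obstacle per horizontal sector,
    # so at most one tactor fires per direction.
    nearest = {}
    for o in in_range:
        sector = int(o["direction"] // 30)  # hypothetical 30-degree sectors
        if sector not in nearest or o["distance"] < nearest[sector]["distance"]:
            nearest[sector] = o
    return list(nearest.values())

def encode(obstacle):
    """Two coding parameters only: horizontal tactor location and temporal pattern."""
    sector = int(obstacle["direction"] // 30)          # which tactor vibrates
    tempo = "fast" if obstacle["distance"] < 2.0 else "slow"  # urgency by distance
    return (sector, tempo)
```

Note that height is used only to filter, not to code, which keeps the display to the two parameters the authors found reliable while avoiding the extra tactor rows that vertical coding would require.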
Big Data are expected to exert profound impacts on medicine. High-throughput technologies, electronic medical records, high-resolution imaging, and multiplexed omics are progressing at a fast pace. Because they all yield complex, heterogeneous data types, the main difficulty lies in interpreting the results. In light of the emerging Precision Medicine paradigm, oncology is influenced by digital phenotypes characterizing disease expression. In particular, digital biomarkers could become critical for the evaluation of clinical endpoints. Currently, integrative approaches are conceived for the analysis of multi-evidenced data, i.e., data generated from multiple sources such as cells, organs, individual lifestyle and social habits, environment, and population dynamics. Granularity, scales of measurement, and model prediction accuracy are factors justifying an ongoing, progressive differentiation from evidence-based medicine, which is typically based on a relatively small and unique experimental scale and thus well assimilated by a mathematical or statistical model. A premise of precision medicine is the N-of-1 paradigm, inspired by a focus on individualization. However, the diversity, amount, and complexity of the input data needed for individual assessments suggest the centrality of systems inference principles. In turn, a revised paradigm is acquiring relevance, say (N-of-1)^c, where the exponent c indicates connectivity. What makes connectivity such a key factor? For instance, the synergy embedded, but often latent, in the data layers (signatures, profiles, etc.), which can open many stratified directions. The discussion then turns to the biological and medical insights afforded by data integration, viewed in light of current oncologic trends.
E. Capobianco. "Precision Oncology: The Promise of Big Data and the Legacy of Small Data." Frontiers in ICT, 2017-08-29, p. 22. doi:10.3389/fict.2017.00022
As journalists experiment with developing immersive journalism—first-person, interactive experiences of news events—guidelines are needed to help bridge a disconnect between the requirements of journalism and the capabilities of emerging technologies. Many journalists need to better understand the fundamental concepts of immersion and the capabilities and limitations of common immersive technologies. Similarly, developers of immersive journalism works need to know the fundamentals that define journalistic professionalism and excellence and the key requirements of various types of journalistic stories. To address these gaps, we have developed FIJI—a Framework for the Immersion-Journalism Intersection. In FIJI, we have identified four domains of knowledge that intersect to define the key requirements of immersive journalism: the fundamentals of immersion, common immersive technologies, the fundamentals of journalism, and the major types of journalistic stories. Based on these key requirements, we have formally defined four types of immersive journalism that are appropriate for public dissemination. In this paper, we discuss the history of immersive journalism, present the four domains and key intersection of FIJI, and provide a number of guidelines for journalists new to creating immersive experiences.
Gary M. Hardee, Ryan P. McMahan. "FIJI: A Framework for the Immersion-Journalism Intersection." Frontiers in ICT, 2017-07-31, p. 21. doi:10.3389/fict.2017.00021
Osama Mazhar, Ahmad Zawawi Jamaluddin, Cansen Jiang, D. Fofi, R. Seulin, O. Morel
In nature, creatures have evolved highly specialized sensory organs suited to their habitat and to the availability of the resources they need to survive. In this project, a novel omnidirectional camera rig, inspired by natural vision sensors, is proposed. It is designed for specific tasks in mobile robotics: navigation on uneven terrain and detection of moving objects while the robot itself is in motion, the core problems that omnidirectional systems address. The proposed system is a compact, rigid vision rig of dioptric cameras providing a 360-degree horizontal and vertical field of view with no blind spot in their sight, combined with a high-resolution stereo camera that monitors the anterior field of view for more accurate perception with depth information of the scene. A structure-from-motion algorithm is adapted and implemented to validate the design of the proposed camera rig, and a toolbox is developed to calibrate similar systems.
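A core step in any structure-from-motion pipeline like the one the abstract mentions is triangulating a 3D point from its projections in two calibrated views. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of standard linear (DLT) triangulation, with hypothetical camera intrinsics and poses chosen for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D pixel coordinates."""
    # Each view contributes two rows of the homogeneous system A @ X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = last row of V^T
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two hypothetical cameras: identity pose and a one-unit baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])            # point in front of both cameras
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free projections, `X_est` recovers `X_true` up to numerical precision; a real pipeline would add feature matching, robust pose estimation, and bundle adjustment on top of this step.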
{"title":"Design and Calibration of a Specialized Polydioptric Camera Rig","authors":"Osama Mazhar, Ahmad Zawawi Jamaluddin, Cansen Jiang, D. Fofi, R. Seulin, O. Morel","doi":"10.3389/fict.2017.00019","DOIUrl":"https://doi.org/10.3389/fict.2017.00019","url":null,"abstract":"It has been observed in the nature that all creatures have evolved highly exclusive sensory organs depending on their habitat and the form of resources availability for their survival. In this project, a novel omnidirectional camera rig, inspired from natural vision sensors, is proposed. It is exclusively designed to operate for highly specified tasks in the field of mobile robotics. Navigation problems on uneven terrains and detection of the moving objects while the robot is itself in motion are the core problems that omnidirectional systems tackle. The proposed omnidirectional system is a compact and a rigid vision system with dioptric cameras that provide a 360 degrees field-of-view in horizontal and vertical, with no blind spot in their site combined with a high-resolution stereo camera to monitor anterior field-of-view for a more accurate perception with depth information of the scene. Structure-from-motion algorithm is adapted and implemented to prove the design validity of the proposed camera rig and a toolbox is developed to calibrate similar systems.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"29 1","pages":"19"},"PeriodicalIF":0.0,"publicationDate":"2017-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88906643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ivan Deriu, F. D’Amico, K. Tsiamis, E. Gervasini, A. Cardoso
Building and managing large datasets of alien species is crucial to the research, management and control of biological invasions. To this end, the European Alien Species Information Network (EASIN) platform aggregates, integrates and harmonizes spatio-temporal data on alien species in Europe, both invasive and non-invasive. These data are stored in the EASIN Geodatabase after being harvested from relevant sources within a partnership of global and European databases and from the scientific literature. Ownership of the data remains with its source, which is properly cited and linked. Harvesting is performed through the EASIN Data Broker system, which retrieves information on alien species in Europe and stores it in a normalized database structure. Data are subsequently refined through validation, cleansing and standardization and finally stored in the EASIN Geodatabase. All data are visualized in occurrence maps at different levels of spatial detail. Analysis of the data in the EASIN Geodatabase, through the flexible web services the system offers, has already provided useful input to scientific work and policy on biological invasions. Data from European Union (EU) member state official surveillance systems, within the framework of EU Regulation 1143/2014 on Invasive Alien Species, are expected to contribute to updates of the EASIN Geodatabase. In addition, data from citizen science initiatives will further enrich the Geodatabase after appropriate validation. In this article we describe and discuss the technical aspects, data flow and capabilities of the EASIN Geodatabase.
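The abstract describes a harvest-then-refine flow: records are pulled from heterogeneous sources, then validated, cleansed and standardized before storage. The sketch below is a hypothetical, minimal illustration of that refinement step (the record fields and rules are assumptions for the example, not the EASIN Data Broker's actual schema or API).

```python
from dataclasses import dataclass

@dataclass
class Occurrence:
    species: str
    lat: float
    lon: float
    source: str  # ownership stays with the original source

def normalize(raw_records):
    """Validate, cleanse and standardize harvested records
    before storing them in a geodatabase-like table."""
    clean = []
    for rec in raw_records:
        name = rec.get("species", "").strip()
        try:
            lat, lon = float(rec["lat"]), float(rec["lon"])
        except (KeyError, TypeError, ValueError):
            continue  # cleanse: drop records with unusable coordinates
        if not name or not (-90 <= lat <= 90 and -180 <= lon <= 180):
            continue  # validate: require a name and plausible coordinates
        # standardize: binomial names with a capitalized genus
        clean.append(Occurrence(name.capitalize(), lat, lon,
                                rec.get("source", "unknown")))
    return clean

records = [
    {"species": "procambarus clarkii", "lat": "45.1", "lon": 7.7,
     "source": "partner database"},
    {"species": "", "lat": 1.0, "lon": 2.0},        # rejected: no name
    {"species": "x", "lat": 200.0, "lon": 0.0},     # rejected: bad latitude
]
cleaned = normalize(records)
```

A production broker would additionally reconcile names against a taxonomic backbone and record provenance per field, but the validate/cleanse/standardize stages follow this same shape.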
{"title":"Handling Big Data of Alien Species in Europe: The European Alien Species Information Network Geodatabase","authors":"Ivan Deriu, F. D’Amico, K. Tsiamis, E. Gervasini, A. Cardoso","doi":"10.3389/fict.2017.00020","DOIUrl":"https://doi.org/10.3389/fict.2017.00020","url":null,"abstract":"Building and managing large datasets of alien species is crucial to research, management and control of biological invasions. To this end, the European Alien Species Information Network (EASIN) platform aggregates, integrates and harmonizes spatio-temporal data regarding alien species in Europe, including both invasive and non-invasive alien species. These data are stored in the EASIN Geodatabase after their harvesting from relevant sources in the frame of a global and European databases partnership and scientific literature. The ownership of the data remains with its source, which is properly cited and linked. The process of data harvesting is performed through the EASIN Data Broker system, which retrieves the information related to alien species data in Europe and stores them in a normalized database structure. Data are subsequently refined through validation, cleansing and standardization processes and finally stored in the EASIN Geodatabase. All data are finally visualized and shown in occurrence maps at different levels of spatial visualization. Analysis of the data contained in the EASIN Geodatabase through flexible web services offered by the system has already provided useful input in scientific works and policies on biological invasions. Data from European Union (EU) member state official surveillance systems, within the framework of the EU Regulation 1143/2014 on Invasive Alien Species, are expected to contribute to the update of the EASIN Geodatabase. In addition, data from citizen science initiatives will further enrich the Geodatabase after appropriate validation. 
In this article we describe and discuss the technical aspects, data flow and capabilities of the EASIN Geodatabase.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"49 1","pages":"20"},"PeriodicalIF":0.0,"publicationDate":"2017-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78021633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}