A Haptic User Interface to Assess the Mobility of the Newborn's Neck
Said-Magomed Sadulaev, R. Lapeer, Zelimkhan Gerikhanov, Edward Morris
A virtual reality program has been developed to assess the strength and flexibility of a computer-based model of a term fetus or newborn baby's neck. The software has a haptic/force-feedback user interface which allows clinical experts to adjust the mechanical properties of a newborn neck model, including its range of motion and stiffness, at runtime. The software was assessed by ten clinical experts in obstetrics. The empirically obtained stiffness and range-of-motion values corresponded well with values reported in the literature.
{"title":"A Haptic User Interface to Assess the Mobility of the Newborn's Neck","authors":"Said-Magomed Sadulaev, R. Lapeer, Zelimkhan Gerikhanov, Edward Morris","doi":"10.1109/iV.2017.48","DOIUrl":"https://doi.org/10.1109/iV.2017.48","url":null,"abstract":"A virtual reality program has been developed to assess the strength and flexibility of a computer based model of a term fetus or newborn baby's neck. The software has a haptic/force feedback user interface which allows clinical experts to adjust the mechanical properties, including range of motion and mechanical stiffness of a newborn neck model, at runtime. The developed software was assessed by ten clinical experts in obstetrics. The empirically obtained stiffness and range of motion values corresponded well with values reported in the literature.","PeriodicalId":410876,"journal":{"name":"2017 21st International Conference Information Visualisation (IV)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115225423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Acceptance and Usability of Interactive Infographics in Online Newspapers
S. Zwinger, Julia Langer, M. Zeiller
Interactive infographics are a powerful tool for representing and communicating complex information. In data-driven journalism, journalists use interactive infographics to explain new insights and facts while telling complex stories based on retrieved data. However, readers of online news are still inexperienced in using interactive infographics. The results of a user survey among readers of online newspapers show how readers use and interact with interactive infographics in online newspapers. To improve acceptance among users and to identify success factors for their utilization, the results of a usability study of interactive infographics are presented.
2017 21st International Conference Information Visualisation (IV). DOI: 10.1109/iV.2017.65.
The Role of Perspective Cues in RSVP
Joshua Brown, M. Witkowski, James Mardell, K. Wittenburg, R. Spence
Riffling the pages of a book, perhaps in search of a specific image, is an example of Rapid Serial Visual Presentation (RSVP). Even at a pace of 10 images per second, successful search is often possible. Interest in RSVP arises because a digital embodiment of RSVP has many applications.

There are many possible 'modes' of RSVP. A mode can be especially helpful if, after the appearance of an image, and without delaying the arrival of subsequent images, that image can remain in view for a second or two, allowing the user to confirm that a desired image has been found. Moreover, if a collection of images is presented so as to be perceived as moving in 3D space, it is thought that the search for an individual image may thereby be enhanced compared with a 2D presentation.

To test this conjecture we devised the "Deep-Flat" visual illusion, whereby a column of moving images growing in size is perceived as approaching the viewer through 3D space. When the images are presented in an equivalent way horizontally, as a row, the viewer tends to see them as images growing in size on a flat (2D) plane. We tested comparable RSVP designs under these two illusions to ascertain the relative effects of 2D- and 3D-style presentation under precisely controlled conditions. Elicited data included both performance measures (e.g., recognition success) and user preferences and opinions.

We established the effectiveness of RSVP using the illusion. When tested under directly comparable conditions, performance was not significantly affected by the illusion of depth, but the inclusion of certain background cues had a significantly detrimental effect on performance.
{"title":"The Role of Perspective Cues in RSVP","authors":"Joshua Brown, M. Witkowski, James Mardell, K. Wittenburg, R. Spence","doi":"10.1109/iV.2017.52","DOIUrl":"https://doi.org/10.1109/iV.2017.52","url":null,"abstract":"Riffling the pages of a book, perhaps in the search for a specific image, is an example of Rapid Serial Visual Presentation (RSVP). Even at a pace of 10 images per second, successful search is often possible. Interest in RSVP arises because a digital embodiment of RSVP has many applications.There are many possible 'modes' of RSVP. However, a mode can be especially helpful if, after the appearance of an image, and without delaying the arrival of other images, it can remain in view for a second or two to allow a user to confirm that a desired image has been found. Moreover, if a collection of images is presented in such a way as to be perceived as moving in 3D space, it is thought that the search for an individual image can thereby be enhanced by comparison with a 2D presentation.To test this conjecture we devise and use the \"Deep-Flat\" visual illusion whereby a column of moving images magnifying in size is perceived as approaching the viewer as in a 3D space. When the images are presented in an equivalent way horizontally as a row, the viewer tends to see this as images growing in size, but now on a flat (2D) plane. We tested comparable RSVP designs in these two illusions to ascertain the relative effects of 2D and 3D style presentation under precisely controlled conditions. Elicited data included both performance measures (e.g., recognition success), and user preferences and opinions.We established the effectiveness of RSVP using the illusion. When tested under directly comparable conditions, we concluded that performance is not significantly affected by the illusion of depth, but that the inclusion of certain background cues can have a significantly detrimental effect on performance.","PeriodicalId":410876,"journal":{"name":"2017 21st International Conference Information Visualisation (IV)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124103735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Music Plagiarism at a Glance: Metrics of Similarity and Visualizations
R. Prisco, A. Esposito, N. Lettieri, Delfina Malandrino, Donato Pirozzi, Gianluca Zaccagnino, R. Zaccagnino
Plagiarism is a debated topic in many fields, and particularly in music, given the huge amount of money that music generates. It is also a controversial legal question, given the subjectivity of the judges who must rule on suspicious cases. Automatic detection of music plagiarism is fundamental to overcoming these limits: it offers useful support to judges during their deliberations, and it helps musicians spend more time composing and playing music than in court. In this paper we address this issue by defining a new metric for pop music similarity, and we study whether visualization can assist domain experts in judging suspicious cases. We describe a user study in which subjects performed different tasks on a song collection using different visual representations, to investigate which representation is best in terms of intuitiveness and accuracy. The results provided positive feedback about our choices and useful suggestions for future directions.
{"title":"Music Plagiarism at a Glance: Metrics of Similarity and Visualizations","authors":"R. Prisco, A. Esposito, N. Lettieri, Delfina Malandrino, Donato Pirozzi, Gianluca Zaccagnino, R. Zaccagnino","doi":"10.1109/iV.2017.49","DOIUrl":"https://doi.org/10.1109/iV.2017.49","url":null,"abstract":"The plagiarism is a debated topic in different fields and in particular in music, given the huge amount of money that music is able to generate. Moreover, it is controversial aspect in the law's field given the subjectivity of the judges that have to pronounce on a suspicious case. Automatic detection of music plagiarism is fundamental to overcome these limits by representing an useful support for judges during their pronouncements and an important result to avoid musicians to spend more time in court than on composing and playing music.In this paper we address this issue by defining a new metric to discover pop music similarity and we study whether visualization can assist domain experts in judging suspicious cases. We describe a user study in which subjects performed different tasks on a song collection using different visual representations to investigate which one is best in terms of intuitiveness and accuracy. Results provided us with positive feedback about our choices and some useful suggestions for future directions.","PeriodicalId":410876,"journal":{"name":"2017 21st International Conference Information Visualisation (IV)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125995787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Converting Night-Time Images to Day-Time Images through a Deep Learning Approach
N. Capece, U. Erra, Raffaele Scolamiero
This paper examines a deep learning approach to converting night-time images into day-time images. In particular, we show that a convolutional neural network can simulate artificial and ambient light in images. We describe the design of the deep neural network and present preliminary results on a real indoor environment and on two virtual environments rendered with a 3D graphics engine. The experimental results are encouraging and confirm that a convolutional neural network is a promising approach in the fields of photo editing and digital image post-processing.
2017 21st International Conference Information Visualisation (IV). DOI: 10.1109/iV.2017.16.
Urban Fusion: Visualizing Urban Data Fused with Social Feeds via a Game Engine
J. Perháč, Wei Zeng, Shiho Asada, S. Arisona, S. Schubiger-Banz, R. Burkhard, Bernhard Klein
This paper presents a framework that allows urban planners to navigate and interact, in real time, with large datasets fused with social feeds. It is enhanced by a virtual reality (VR) capability that further promotes knowledge discovery and lets users interact with urban data in a natural yet immersive way. A challenge in urban planning is making decisions based on datasets that are often ambiguous, while making effective use of newly available yet unstructured sources of information such as social media. Providing expert users with novel ways of representing knowledge can benefit decision making. Game engines have evolved into capable testbeds for novel visualization and interaction techniques, so we explore the possibility of using a modern game engine as a platform for knowledge representation in urban planning and examine how it can be used to model ambiguity. We also investigate how urban planners can benefit from immersion in data exploration and knowledge discovery. We apply the concept of using primitives to publicly available transportation datasets and social feeds of New York City, discuss a gesture-based VR extension of our framework, and conclude with feedback from expert users in urban planning and an outlook on future challenges.
{"title":"Urban Fusion: Visualizing Urban Data Fused with Social Feeds via a Game Engine","authors":"J. Perháč, Wei Zeng, Shiho Asada, S. Arisona, S. Schubiger-Banz, R. Burkhard, Bernhard Klein","doi":"10.1109/iV.2017.33","DOIUrl":"https://doi.org/10.1109/iV.2017.33","url":null,"abstract":"This paper presents a framework which allows urban planners to navigate and interact with large datasets fused with social feeds in real-time, enhanced by a virtual reality (VR) capability, which further promotes the knowledge discovery process and allows to interact with urban data in natural yet immersive way. A challenge in urban planning is making decisions based on datasets which are many times ambiguous, together with effective use of newly available yet unstructured sources of information like social media. Providing expert users with novel ways of representing knowledge can be beneficial for decision making. Game engines have evolved into capable testbeds for novel visualization and interaction techniques. We therefore explore the possibility of using a modern game engine as a platform for knowledge representation in urban planning and how it can be used to model ambiguity. We also investigate how urban planners can benefit from immersion when it comes to data exploration and knowledge discovery. We apply the concept of using primitives to publicly available transportation datasets and social feeds of New York city, we discuss a gesture-based VR extension of our framework and lastly, we conclude the paper with feedback from expert users in urban planning and with an outlook of future challenges.","PeriodicalId":410876,"journal":{"name":"2017 21st International Conference Information Visualisation (IV)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126603520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Game Development Activity Using Team-Based Learning
Akiko Teranishi, M. Nakayama, T. Wyeld, M. Eid
Game development activity using Team-Based Learning (TBL) was investigated to identify factors contributing to the usability of the product. In this study, three teams from two different countries are compared. The following related factors were examined for their relationship with usability scores: (1) learning reflection, (2) social media communication within teams, and (3) participants' characteristics and information literacy. Usability scores were obtained with the System Usability Scale (SUS), each team's product being evaluated by the other teams. Participants' characteristics and information literacy were measured before the project started, as a pre-test. Each group's discussions and communications via social media were categorized, using protocol analysis, as Proposal, Permission, Encouragement, or Acknowledgment, to examine their contribution to the usability scores. After completing the project, all participants filled in a learning reflection questionnaire to evaluate efficacy, satisfaction, achievement of learning, and difficulties.
2017 21st International Conference Information Visualisation (IV). DOI: 10.1109/iV.2017.73.
Visualization Practices in Scandinavian Newsrooms: A Qualitative Study
Martin Engebretsen, H. Kennedy, Wibke Weber
The visualization of numeric data is becoming an important element in journalism, and new tools and platforms are accelerating the development of data visualization in news discourse. In this paper we present an interview study investigating this development in Scandinavian newsrooms. Editorial leaders, data journalists, graphic designers, and developers in 10 major news organizations in Norway, Sweden, and Denmark inform the study on a range of issues concerning visual practices and experiences in the newsrooms. Tensions are revealed concerning the role and effect of complex, exploratory data visualizations, and concerning the role of ordinary journalists in the production of simpler charts and graphs. The results presented are the first outcome of a larger ongoing study investigating visual practices in six European countries.
2017 21st International Conference Information Visualisation (IV). DOI: 10.1109/iV.2017.54.
Identifying the Relationships Between the Visualization Context and Representation Components to Enable Recommendations for Designing New Visualizations
Alma Cantu, O. Grisvard, Thierry Duval, G. Coppin
In this paper we address the relationships between visualization challenges and the representation components that provide solutions to those challenges. Our approach extracts such relationships by identifying the context and the components of a significant number of representations and comparing the result with existing theoretical studies. To make this identification possible, we rely on a characterization of the representation context based on a careful aggregation of existing characterizations of the data type, the tasks, and the context of use of the representations. We illustrate our approach on a use case with examples of relationship extraction and of comparison of those relationships with the theory. We believe that establishing such relationships makes it possible to understand the mechanisms behind the representations, in order to build a representation design recommendation tool. Such a tool would recommend the components to use in a representation, given a visualization challenge to address.
2017 21st International Conference Information Visualisation (IV). DOI: 10.1109/iV.2017.55.
Visually Supporting Image Annotation Based on Visual Features and Ontologies
Jalila Filali, Hajer Baazaoui Zghal, J. Martinet
Automatic Image Annotation (AIA) is a challenging problem in the field of image retrieval, and several methods have been proposed. However, visually supporting this important task and reducing the semantic gap between low-level image features and high-level semantic concepts remain key issues. In this paper, we propose a visually supported image annotation framework based on visual features and ontologies. Our framework relies on three main components: (i) a feature extraction and classification component, (ii) an ontology building component, and (iii) an image annotation component. Our goal is to improve visual image annotation by (1) extracting invariant and complex visual features; (2) integrating feature classification results and semantic concepts to build the ontology; and (3) combining both visual and semantic similarities during the image annotation process.
2017 21st International Conference Information Visualisation (IV). DOI: 10.1109/iV.2017.27.