Pub Date: 2017-10-01. DOI: 10.1016/j.jvlc.2017.08.004
Giuseppe Desolda, Carmelo Ardito, Maria Francesca Costabile, Maristella Matera
Developing interactive systems to access and manipulate data is a very tough task. In particular, the development of user interfaces (UIs) is one of the most time-consuming activities in the software lifecycle. This is even more demanding when data have to be retrieved by flexibly accessing different online resources. Indeed, software development is moving more and more toward composite applications that aggregate specific Web services and APIs on the fly. In this article, we present a mashup model that describes the integration, at the presentation layer, of UI components. The goal is to allow non-technical end users to visualize and manipulate (i.e., to perform actions on) the data displayed by the components, which thus become actionable UI components. This article shows how the model has guided the development of a mashup platform through which non-technical end users can create component-based interactive workspaces via the aggregation and manipulation of data fetched from distributed online resources. Given the abundance of online data sources, facilitating the creation of such interactive workspaces is a very relevant need that emerges in different contexts. A utilization study was performed to assess the benefits of the proposed model and of the actionable UI components; participants were required to perform real tasks using the mashup platform. The study results are reported and discussed.
"End-user composition of interactive applications through actionable UI components". Journal of Visual Languages and Computing, vol. 42, pp. 46-59.
Pub Date: 2017-10-01. DOI: 10.1016/j.jvlc.2017.08.001
Gail M. Rodney
"Inaugural JVLC S. K. Chang Best Paper Award Winner Announcement". Journal of Visual Languages and Computing, vol. 42, pp. iii-iv.
Association rules have been widely used for detecting relations between attribute-value pairs of categorical datasets. Existing solutions for mining interesting association rules are based on the support-confidence theory. However, it is non-trivial for the user to understand and modify the rules or the results of intermediate steps in the mining process, because the interestingness of rules may differ largely across tasks and users. In this paper we reinforce the conventional association rule mining process by mapping the entire process into a visualization-assisted loop, which reduces the user workload for modulating parameters and mining rules and greatly improves mining efficiency. A hierarchical matrix-based visualization technique is proposed for the user to explore the measure values and the intermediate results of association rules. We also design a set of visual exploration tools to support interactive inspection and manipulation of the mining process. The effectiveness and usability of our approach are demonstrated with two scenarios.
"Visual analysis of user-driven association rule mining", by Wei Chen, Cong Xie, Pingping Shang, Qunsheng Peng. Pub Date: 2017-10-01. DOI: 10.1016/j.jvlc.2017.08.007. Journal of Visual Languages and Computing, vol. 42, pp. 76-85.
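The paper builds on the support-confidence framework for rating rule interestingness. As a minimal illustration of those two measures (toy transactions and a toy rule, not the paper's datasets or platform), in Python:

```python
# Toy transaction database; items are illustrative only.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset, db):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """Estimated P(consequent | antecedent): support of the union
    divided by support of the antecedent."""
    return support(antecedent | consequent, db) / support(antecedent, db)

# Rule {bread} -> {milk}: support = 2/4 = 0.5, confidence = (2/4)/(3/4) = 2/3.
s = support({"bread", "milk"}, transactions)
c = confidence({"bread"}, {"milk"}, transactions)
```

A mining loop would enumerate candidate itemsets and keep only rules whose support and confidence exceed user-chosen thresholds; the paper's contribution is letting the user steer those thresholds and inspect intermediate results visually.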
Pub Date: 2017-10-01. DOI: 10.1016/j.jvlc.2017.08.005
Pierfrancesco Bellini, Daniele Cenni, Paolo Nesi, Irene Paoli
Monitoring, understanding and predicting city user behaviour (hottest places, trajectories, flows, etc.) is one of the major topics in the context of Smart City management. People flow surveillance provides valuable information about city conditions, useful not only for monitoring and controlling environmental conditions, but also for optimizing the delivery of city services (security, cleaning, transport, etc.). In this context, it is mandatory to develop methods and tools for assessing people's behaviour in the city. This paper presents a methodology to instrument the city via the placement of Wi-Fi Access Points (APs) and to use them as sensors to capture and understand city user behaviour with a significant precision rate; the understanding of city user behaviour is concretized by computing heat maps and origin-destination matrices and by predicting user density. The first issue is the positioning of Wi-Fi APs in the city, so comparative analyses were conducted with respect to real data (i.e., cab traces) from the city of San Francisco. Several different AP positioning methodologies have been proposed and compared, to minimize the cost of AP installation while producing the best origin-destination matrices. In a second phase, the methodology was adopted to select suitable APs in the city of Florence (Italy), with the aim of observing city users' behaviour. The resulting instrumented Firenze Wi-Fi network collected data for 6 months. The data have been analysed with data mining techniques to infer similarity patterns among AP areas and related time series. The resulting model has been validated and used for predicting the number of AP accesses, which is also related to the number of city users. The research work described in this paper has been conducted in the scope of the EC-funded Horizon 2020 project Resolute (http://www.resolute-eu.org), for early warning and city resilience.
"Wi-Fi based city users’ behaviour analysis for smart city". Journal of Visual Languages and Computing, vol. 42, pp. 31-45.
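As a rough sketch of how AP sightings can be aggregated into an origin-destination matrix (hypothetical log format and AP names; the paper's actual pipeline, placement optimization and prediction model are far more involved):

```python
from collections import defaultdict

# Hypothetical AP sighting log: (device_id, unix_time, ap_id).
sightings = [
    ("d1", 100, "AP_station"), ("d1", 460, "AP_duomo"),
    ("d2", 120, "AP_duomo"),   ("d2", 700, "AP_station"),
    ("d1", 900, "AP_station"),
]

def origin_destination(log):
    """Count device transitions between consecutive AP sightings."""
    by_device = defaultdict(list)
    for dev, t, ap in sorted(log, key=lambda r: r[1]):  # time order
        by_device[dev].append(ap)
    od = defaultdict(int)
    for seq in by_device.values():
        for o, d in zip(seq, seq[1:]):
            if o != d:                # ignore re-sightings at the same AP
                od[(o, d)] += 1
    return dict(od)

od = origin_destination(sightings)
# -> {("AP_station", "AP_duomo"): 1, ("AP_duomo", "AP_station"): 2}
```

Real deployments additionally need MAC-address anonymization, dwell-time filtering and a time bucketing of transitions before the matrix is useful for flow analysis.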
Pub Date: 2017-10-01. DOI: 10.1016/j.jvlc.2017.07.001
Huang Zheng, Limke Jed, Kong Jun
Using a mobile device together with a large shared screen supports collaborative tasks and potentially prevents interference among users. In order to evaluate the usability of inter-device interaction, this paper compared two fundamental inter-device interaction styles, i.e., one-handed and two-handed interaction. The one-handed interaction style uses only one hand to select an object from a large display device, while the two-handed interaction style needs the cooperation of two hands to realize a selection. A framework was developed to implement these two interaction styles. Based on the framework, a pretest-posttest, repeated-measures study was conducted to compare their differences. All participants went through eight tasks, differentiated by both the selection order (sequential or random) and the density level (sparse or dense layout), using both interaction styles. During the study, both the completion time and the error rate in each task with each interaction style were recorded. In addition, the IBM Post-Study System Usability Questionnaire (PSSUQ) was used to evaluate subjective satisfaction with each interaction style. The overall PSSUQ scores indicate that both interaction styles received positive feedback with high user satisfaction. The study also revealed that the one-handed interaction took less time to complete tasks (i.e., was more efficient) than the two-handed interaction, while the two-handed interaction style had a lower error rate than the one-handed interaction, especially in a dense layout.
"Investigating one-handed and two-handed inter-device interaction". Journal of Visual Languages and Computing, vol. 42, pp. 1-12.
Pub Date: 2017-10-01. DOI: 10.1016/j.jvlc.2016.04.001
Alan Dix
This paper explores the roots of human–computer interaction as a discipline, the various trends which have marked its development, and some of the current and future challenges for research. Human–computer interaction, like any vocational discipline, sits upon three broad foundations: theoretical principles, professional practice and a community of people. As an interdisciplinary field, HCI draws its theoretical roots from a number of other disciplines, including psychology, computing, ergonomics, and the social sciences; however, it also has theoretical and practical challenges of its own. HCI's internal and external context keeps evolving: computers have become smaller and less costly, which has changed the nature of the users and uses of computers, with a corresponding impact on society. The paper explores the current challenges of computing, from the cloud to digital fabrication and the need to design for solitude. It suggests that HCI should not just react to the changes around it, but also shape those changes.
"Human–computer interaction, foundations and new paradigms". Journal of Visual Languages and Computing, vol. 42, pp. 122-134.
Pub Date: 2017-10-01. DOI: 10.1016/j.jvlc.2016.03.001
Erhan Leblebici, Anthony Anjorin, Andy Schürr, Gabriele Taentzer
Visual languages (VLs) facilitate software development not only by supporting communication and abstraction, but also by generating various artifacts such as code and reports from the same high-level specification. VLs are thus often translated to other formalisms, in most cases with bidirectionality as a crucial requirement to, e.g., support re-engineering of software systems. Triple Graph Grammars (TGGs) are a rule-based language for specifying consistency relations between two (visual) languages, from which bidirectional translators are automatically derived. TGGs are formally founded but are also limited in expressiveness, i.e., not all types of consistency can be specified with TGGs. In particular, 1-to-n correspondences between elements that depend on concrete input models cannot be described. In other words, a universal quantifier over certain parts of a TGG rule is missing to generalize consistency to arbitrary size. To overcome this, we transfer the well-known multi-amalgamation concept from algebraic graph transformation to TGGs, allowing us to mark certain parts of rules as repeated depending on the translation context. Our main contribution is to derive TGG-based translators that comply with this extension. Furthermore, we identify bad smells in the usage of multi-amalgamation in TGGs, prove that multi-amalgamation increases the expressiveness of TGGs, and evaluate our tool support.
"Multi-amalgamated triple graph grammars: Formal foundation and application to visual language translation". Journal of Visual Languages and Computing, vol. 42, pp. 99-121.
This paper describes the design of an experimental multi-level slow intelligence system for visualizing personal health care, called the TDR system, consisting of interacting super-components each with different computation cycles specified by an abstract machine model. The TDR system has three major super-components: Tian (Heaven), Di (Earth) and Ren (Human), which are the essential ingredients of a human-centric psycho-physical system following the Chinese philosophy. Each super-component further consists of interacting components supported by an SIS server. This experimental TDR system provides a platform for exploring, visualizing and integrating different applications in personal health care, emergency management and social networking.
"A multi-level slow intelligence system for visualizing personal health care", by Chang Shi-Kuo, Chen JunHui, Gao Wei, Lou ManSi, Yin XiYao, Zhang Qui, Zhao ZiHao. Pub Date: 2017-08-01. DOI: 10.1016/j.jvlc.2017.08.006. Journal of Visual Languages and Computing, vol. 42, pp. 135-148.
Pub Date: 2017-08-01. DOI: 10.1016/j.jvlc.2017.03.005
Yi Zhang, Teng Liu, Kefei Li, Jiawan Zhang
With the era of data explosion coming, multidimensional visualization, as one of the most helpful data analysis technologies, is increasingly applied to the tasks of multidimensional data analysis. Correlation analysis is an efficient technique to reveal the complex relationships existing among the dimensions of multidimensional data. However, for multidimensional data with complex dimension features, traditional correlation analysis methods are inaccurate and limited. In this paper, we introduce an improved Pearson correlation coefficient and mutual information correlation analysis to detect the dimensions' linear and non-linear correlations, respectively. For the linear case, all dimensions are classified into three groups according to their distributions. We then select appropriate parameters for each group of dimensions to calculate their correlations. For the non-linear case, we cluster the data within each dimension. Their probability distributions are then calculated to analyze the dimensions' correlations and dependencies based on mutual information correlation analysis. Finally, we use the relationships between dimensions as the criteria for interactive ordering of axes in parallel coordinate displays.
"Improved visual correlation analysis for multidimensional data". Journal of Visual Languages and Computing, vol. 41, pp. 121-132.
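A minimal sketch of the two baseline measures the paper starts from: the standard Pearson coefficient for the linear case and a histogram-based mutual information (in bits) for the non-linear case. Toy data only; the paper's per-group parameter selection and within-dimension clustering are not reproduced here.

```python
import math
from collections import Counter

def pearson(x, y):
    """Standard Pearson correlation coefficient between two dimensions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mutual_information(x, y):
    """MI in bits between two discrete (e.g. cluster-labelled) dimensions."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

r = pearson([1, 2, 3, 4], [2, 4, 6, 8])                 # perfectly linear -> 1.0
mi = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])     # identical labels -> 1.0 bit
```

Pearson only sees linear dependence, while MI on the clustered labels also captures non-linear, non-monotonic relationships, which is why the paper applies them to the two cases separately.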
Pub Date: 2017-08-01. DOI: 10.1016/j.jvlc.2017.04.001
Jie Li, Zhao Xiao, Jun Kong
We present a new viewpoint-based approach to improving the exploration effectiveness and efficiency of trajectory datasets. Our approach integrates novel trajectory visualization techniques with algorithms for selecting optimal viewpoints from which to explore the generated visualization. Both the visualization and the viewpoints are represented in the form of KML, which can be directly rendered in most off-the-shelf GIS platforms. By playing the viewpoint sequence and directly utilizing the components of GIS platforms to explore the visualization, the overview status, detailed information, and time-variation characteristics of the trajectories can be quickly captured. A case study and a usability experiment have been conducted on an actual public transportation dataset, justifying the effectiveness of our approach. Compared with a basic exploration approach without viewpoints, our approach increases the speed of information retrieval when analyzing trajectory datasets and enhances user experience in 3D trajectory exploration.
"A viewpoint based approach to the visual exploration of trajectory". Journal of Visual Languages and Computing, vol. 41, pp. 41-53.
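Since both the visualization and the viewpoints are expressed as KML, a viewpoint can be emitted as a standard KML LookAt element inside a Placemark, which GIS platforms such as Google Earth can fly to directly. A minimal generator follows; the function name, field defaults and `<range>` value are illustrative assumptions, not the authors' schema.

```python
def viewpoint_kml(name, lon, lat, altitude_m, heading_deg=0, tilt_deg=45):
    """Render one exploration viewpoint as a KML <LookAt> in a <Placemark>.

    Hypothetical helper: parameter names and the fixed <range> of 500 m
    are illustrative defaults, not taken from the paper.
    """
    return f"""<Placemark>
  <name>{name}</name>
  <LookAt>
    <longitude>{lon}</longitude>
    <latitude>{lat}</latitude>
    <altitude>{altitude_m}</altitude>
    <heading>{heading_deg}</heading>
    <tilt>{tilt_deg}</tilt>
    <range>500</range>
  </LookAt>
</Placemark>"""

# Concatenating such placemarks in viewpoint order yields a KML document
# whose "tour" can be played back in the GIS platform's own viewer.
snippet = viewpoint_kml("vp1", 117.2, 39.1, 100)
```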