Pub Date: 2018-12-01 · DOI: 10.1016/j.jvlc.2018.10.001
Matúš Sulír, Michaela Bačíková, Sergej Chodarev, Jaroslav Porubän
Source code written in textual programming languages is typically edited in integrated development environments (IDEs) or specialized code editors. These tools often display various visual items, such as icons, color highlights or more advanced graphical overlays directly in the main editable source code view. We call such visualizations source code editor augmentation.
In this paper, we present the first systematic mapping study of source code editor augmentation tools and approaches. We manually reviewed the metadata of 5553 articles published during the last twenty years in two phases: keyword search and reference search. The result is a list of 103 relevant articles and a taxonomy of source code editor augmentation tools with seven dimensions, which we used to categorize the surveyed articles.
We also provide a definition of the term source code editor augmentation, along with a brief overview of its historical development and of the augmentations available in current industrial IDEs.
Title: "Visual augmentation of source code editors: A systematic mapping study". Journal of Visual Languages and Computing, vol. 49, pp. 46–59.
Pub Date: 2018-12-01 · DOI: 10.1016/j.jvlc.2018.10.007
Andrea Rosà, Walter Binder
Reflective supertype information (RSI) is useful for many instrumentation-based type-specific analyses on the Java Virtual Machine (JVM). On the one hand, while such information can be obtained when performing the instrumentation within the same JVM process that executes the instrumented program, in-process instrumentation severely limits the bytecode coverage of the analysis. On the other hand, performing the instrumentation in a separate process can achieve full bytecode coverage, but complete RSI is generally not available, often requiring the insertion of expensive runtime type checks in the instrumented program. In this article, we present a novel technique to accurately reify complete RSI in a separate instrumentation process. This is challenging because the observed application may use custom classloaders, and the classes loaded in one application execution are generally known only upon termination of the application. We implement our technique in an extension of the dynamic analysis framework DiSL. The resulting framework guarantees full bytecode coverage while providing RSI. Evaluation results on a task profiler demonstrate that our technique achieves speedups of up to 6.24× with respect to resorting to runtime type checks in the instrumentation code for an analysis with full bytecode coverage.
Title: "Optimizing type-specific instrumentation on the JVM with reflective supertype information". Journal of Visual Languages and Computing, vol. 49, pp. 29–45.
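The trade-off the abstract describes can be illustrated with a small sketch. This is a Python analogy, not the authors' DiSL/JVM implementation, and the class names are made up: supertype information is reified once into a table ahead of time, so instrumented code performs a table lookup instead of a repeated runtime type check.

```python
# Illustrative analogy of reified supertype information (RSI): instead of an
# isinstance-style check at every instrumented site, build a supertype table
# once and consult it at analysis time. Class names here are hypothetical.

class Event: pass
class IOEvent(Event): pass
class NetworkEvent(IOEvent): pass

def reify_supertypes(cls):
    """Collect every supertype of a class once (the 'RSI' of this sketch)."""
    return {c.__name__ for c in cls.__mro__ if c is not object}

# Precomputed during "instrumentation": one pass over the known classes.
RSI = {cls.__name__: reify_supertypes(cls)
       for cls in (Event, IOEvent, NetworkEvent)}

def is_subtype(cls_name, super_name):
    # A cheap table lookup replaces an expensive runtime type check.
    return super_name in RSI.get(cls_name, set())
```

The hard part of the real technique, per the abstract, is that the full set of loaded classes (and thus the table) is generally known only when the observed application terminates; the sketch assumes it is known up front.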
Pub Date: 2018-12-01 · DOI: 10.1016/j.jvlc.2018.10.002
Giuseppe Della Penna, Sergio Orefice
In this paper we present PCT (Position-Connection-Time), a formalism capable of representing spatio-temporal knowledge in a qualitative fashion. This framework achieves an expressive power comparable to other classic spatial relation formalisms describing common topological and directional spatial relations. In addition, PCT introduces new classes of relations based both on the position of the objects and on their interconnections, and incorporates the notion of time within spatial relations in order to describe dynamic contexts. In this way, PCT is also able to model spatial arrangements that change over time, e.g., moving objects.
Title: "Qualitative representation of spatio-temporal knowledge". Journal of Visual Languages and Computing, vol. 49, pp. 1–16.
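A minimal sketch can show what "incorporating time within spatial relations" means in practice. The relation names and the point-based model below are illustrative assumptions, not the paper's formal PCT definitions: objects carry time-stamped positions, and a qualitative directional relation is derived at each shared time point, so the relation itself becomes a function of time.

```python
# Sketch of qualitative spatio-temporal relations (assumed names, not the
# PCT formalism): evaluate a directional relation per timestamp, yielding a
# dynamic context for moving objects.

def directional_relation(a, b):
    """Qualitative direction of point b relative to point a."""
    (ax, ay), (bx, by) = a, b
    horiz = "east" if bx > ax else "west" if bx < ax else ""
    vert = "north" if by > ay else "south" if by < ay else ""
    return (vert + horiz) or "same_position"

def relation_over_time(track_a, track_b):
    """Evaluate the relation at each shared timestamp."""
    return {t: directional_relation(track_a[t], track_b[t])
            for t in sorted(track_a.keys() & track_b.keys())}

# A moving object passes a stationary one: the relation changes over time.
car = {0: (0, 0), 1: (2, 0)}
bike = {0: (1, 1), 1: (1, 1)}
rels = relation_over_time(car, bike)
```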
Pub Date: 2018-12-01 · DOI: 10.1016/j.jvlc.2018.10.003
Sérgio Queiroz de Medeiros, Fabio Mascarenhas
Parsing Expression Grammars (PEGs) are a formalism used to describe top-down parsers with backtracking. As PEGs do not provide a good error recovery mechanism, PEG-based parsers usually do not recover from syntax errors in the input, or recover from syntax errors using ad-hoc, implementation-specific features. The lack of proper error recovery makes PEG parsers unsuitable for use with Integrated Development Environments (IDEs), which need to build syntactic trees even for incomplete, syntactically invalid programs.
We discuss a conservative extension, based on PEGs with labeled failures, that adds a syntax error recovery mechanism to PEGs. This extension associates recovery expressions with labels, so a label now not only reports a syntax error but also uses its recovery expression to reach a synchronization point in the input and resume parsing. We give an operational semantics of PEGs with this recovery mechanism, as well as an operational semantics for a parsing machine that we can translate labeled PEGs with error recovery to, and we prove the correctness of this translation. We use an implementation of labeled PEGs with error recovery via a parsing machine to build robust parsers, which use different recovery strategies, for the Lua language. We evaluate the effectiveness of these parsers, alone and in comparison with a Lua parser with automatic error recovery generated by ANTLR, a popular parser generator.
Title: "Error recovery in parsing expression grammars through labeled failures and its implementation based on a parsing machine". Journal of Visual Languages and Computing, vol. 49, pp. 17–28.
Pub Date: 2018-10-01 · DOI: 10.1016/j.jvlc.2018.08.002
Bingkun Chen, Hong Zhou, Xiaojun Chen
Time series analysis is an important topic in machine learning, and a suitable visualization method can facilitate data mining work. In this paper, we propose E-Embed, a novel framework that visualizes time series data by projecting them into a low-dimensional space while capturing the underlying data structure. In the E-Embed framework, we model time series as discrete distributions and measure the distances between them using the earth mover's distance (EMD). Once the distances between time series are calculated, we can visualize the data with dimensionality reduction algorithms. To effectively support dimensionality reduction methods (such as Isomap) that depend on a K-nearest-neighbor (KNN) graph, we propose an algorithm for constructing a KNN graph based on the earth mover's distance. We evaluate our visualization framework on both univariate and multivariate time series data. Experimental results demonstrate that E-Embed provides high-quality visualization with low computational cost.
Title: "E-Embed: A time series visualization framework based on earth mover's distance". Journal of Visual Languages and Computing, vol. 48, pp. 110–122.
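The first stages of such a pipeline can be sketched compactly. Assumptions in this sketch: equal-length series and the closed form of 1-D EMD between equal-size samples (mean absolute difference of sorted values); the paper works with general discrete distributions, and this is not the authors' algorithm.

```python
# Sketch of an EMD-based distance and KNN graph for time series (stdlib only).
# For two equal-size 1-D samples, EMD reduces to the mean absolute difference
# of their sorted values.

def emd_1d(xs, ys):
    """Earth mover's distance between two equal-size 1-D samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def knn_graph(series, k):
    """Connect each series to its k nearest neighbors under EMD."""
    graph = {}
    for i, s in enumerate(series):
        dists = [(emd_1d(s, t), j) for j, t in enumerate(series) if j != i]
        graph[i] = [j for _, j in sorted(dists)[:k]]
    return graph

series = [[0, 1, 2], [0, 1, 3], [10, 11, 12]]
g = knn_graph(series, k=1)
```

A graph-based method such as Isomap would then consume `g` (with edge weights) in place of its usual Euclidean KNN graph.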
Pub Date: 2018-10-01 · DOI: 10.1016/j.jvlc.2018.06.004
Hong Sungin, Lee Chulhee, Chin Seongah
To render objects in computer graphics and video games that closely resemble real objects, it is necessary to emulate the physical characteristics of the material and determine its optical parameters, consisting of an absorption coefficient and a scattering coefficient measured from real objects. In this study, we propose a physically based rendering technique that enables real-time rendering by extracting the optical parameters required for rendering opaque and translucent materials and collecting the obtained information in a database (DB). For this purpose, optical parameters were extracted from a high-dynamic-range image (HDRI) of each object, obtained with custom-built optical imaging equipment that photographs the object's upper and lower parts. Furthermore, by binding the optical parameters with the texture of the corresponding material, 122 material-rendering DB sets were established. The validity of the proposed method was verified through an evaluation of the results by 118 users.
Title: "Physically based optical parameter database obtained from real materials for real-time material rendering". Journal of Visual Languages and Computing, vol. 48, pp. 29–39.
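The "bind optical parameters to the texture" step amounts to a keyed record per material. A minimal sketch, with entirely illustrative field names and parameter values (not the paper's measured 122-entry database):

```python
# Sketch of a material DB entry binding measured optical parameters to a
# texture, so a renderer fetches both together at draw time. Values and
# file names below are placeholders, not measured data from the paper.

MATERIAL_DB = {
    "marble": {"absorption": 0.002, "scattering": 2.19,
               "texture": "marble_diffuse.png"},
    "skin":   {"absorption": 0.032, "scattering": 0.74,
               "texture": "skin_diffuse.png"},
}

def lookup_material(name):
    """Return (absorption, scattering, texture) for one material."""
    entry = MATERIAL_DB[name]
    return entry["absorption"], entry["scattering"], entry["texture"]
```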
Pub Date: 2018-10-01 · DOI: 10.1016/j.jvlc.2018.08.010
Xiangyang Wu, Zixi Chen, Yuhui Gu, Weiru Chen, Mei-e Fang
Identifying and analyzing time-varying features is important for understanding spatio-temporal datasets. While there are numerous studies on illustrative visualization, existing solutions can hardly show subtle variations in a temporal dataset. This paper introduces a novel illustrative visualization scheme that employs temporal filtering techniques to disclose the desired tiny features, which are further enhanced by an adaptive temporal illustration technique. Unconcerned context can be suppressed in a similar fashion. We develop a visual exploration system that empowers users to interactively manipulate and analyze temporal features. Experimental results on mobile calling data demonstrate the effectiveness and usefulness of our method.
Title: "Illustrative visualization of time-varying features in spatio-temporal data". Journal of Visual Languages and Computing, vol. 48, pp. 157–168.
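One common way to "disclose tiny features by temporal filtering" is a high-pass step: subtract a moving average and amplify the residual. The filter choice and gain below are illustrative assumptions, not the paper's adaptive technique:

```python
# Sketch of temporal filtering for subtle-feature illustration: remove the
# slow baseline (moving average), then exaggerate what remains. Window and
# gain are arbitrary illustrative choices.

def moving_average(xs, w):
    """Trailing moving average with a window of up to w samples."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - w + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def enhance_tiny_features(xs, w=3, gain=5.0):
    """High-pass the series and amplify the residual variations."""
    base = moving_average(xs, w)
    return [gain * (x - b) for x, b in zip(xs, base)]

# A nearly flat signal with one tiny bump at index 2.
signal = [10.0, 10.0, 10.1, 10.0, 10.0]
enhanced = enhance_tiny_features(signal)
```

The bump, barely visible in the raw series, dominates the enhanced one; suppressing unconcerned context would apply the inverse (a gain below 1) to unselected spans.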
Pub Date: 2018-10-01 · DOI: 10.1016/j.jvlc.2018.03.001
Xiaoxiao Xiong, Min Fu, Min Zhu, Jing Liang
The success of Question and Answering (Q&A) communities mainly depends on the contributions of experts. However, there is a bottleneck: a machine cannot identify these experts as soon as they join a community, because users exhibit too little activity during early participation. To tackle this, we bring human business experience into potential-expert prediction by combining machine learning and visual analytics. In this work, we propose a visual analytics system to identify potential experts semi-automatically. After the machine learning algorithm outputs an expert probability for each user, analysts can locate a set of interested users whose expert probability is ambiguous, and can check the user information and behavior patterns of those users through multi-dimensional data visualizations. Finally, our system models the analysts' knowledge of community members' identities and abstracts that knowledge quantitatively for the machine learning algorithm, so analysts can refine the algorithm and the prediction process smoothly. A quantitative evaluation with real data demonstrates the effectiveness of our system.
Title: "Visual potential expert prediction in question and answering communities". Journal of Visual Languages and Computing, vol. 48, pp. 70–80.
Pub Date: 2018-10-01 · DOI: 10.1016/j.jvlc.2018.06.005
Jinhua Dou, Jingyan Qin, Zanxia Jin, Zhuang Li
Intangible cultural heritage (ICH) is a precious historical and cultural resource of a country, and its protection and inheritance are important to the sustainable development of national culture. There are many different intangible cultural heritage items in China. With the development of information technology, ICH database resources were built by government departments and public cultural service institutions, but most databases are widely dispersed, and traditional database systems are ill-suited to the storage, management and analysis of massive data. At the same time, the development of digital intangible cultural heritage has produced a large quantity of data, and the public cannot grasp key knowledge quickly because of its massive and fragmented nature. To solve these problems, we propose an intangible cultural heritage knowledge graph to assist knowledge management and provide a service to the public. An ICH domain ontology was defined with the help of intangible cultural heritage experts and knowledge engineers to regulate the concepts, attributes and relationships of ICH knowledge. In this study, massive ICH data were obtained, and domain knowledge was extracted from ICH text data using Natural Language Processing (NLP) techniques. A knowledge base built on the domain ontology and instances of Chinese intangible cultural heritage was constructed, and the knowledge graph was developed. The patterns and characteristics behind the intangible cultural heritage were presented based on the ICH knowledge graph. The knowledge graph supports the organization, management and protection of intangible cultural heritage knowledge; the public can also obtain ICH knowledge quickly and discover linked knowledge. It is thus helpful for the protection and inheritance of intangible cultural heritage.
Title: "Knowledge graph based on domain ontology and natural language processing technology for Chinese intangible cultural heritage". Journal of Visual Languages and Computing, vol. 48, pp. 19–28.
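At its core, "discover linked knowledge" means following shared graph edges between entities. A toy sketch with a handful of subject-relation-object triples (the specific entities and relations are illustrative, not drawn from the authors' ontology):

```python
# Toy knowledge-graph sketch: triples plus a one-hop "linked knowledge"
# query that finds entities sharing any object value with a given entity.
# Entities and relations are illustrative, not the paper's ontology.

TRIPLES = [
    ("Kunqu Opera", "category", "Traditional Drama"),
    ("Kunqu Opera", "region", "Jiangsu"),
    ("Suzhou Embroidery", "region", "Jiangsu"),
]

def linked(entity):
    """Entities connected to `entity` through any shared object value."""
    objects = {o for s, _, o in TRIPLES if s == entity}
    return sorted({s for s, _, o in TRIPLES if o in objects and s != entity})
```

Here the two ICH items are linked through the shared `region` value, the kind of connection a user browsing the graph would discover.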
Pub Date: 2018-10-01 · DOI: 10.1016/j.jvlc.2018.08.003
Jiazhi Xia, Le Gao, Kezhi Kong, Ying Zhao, Yi Chen, Xiaoyan Kui, Yixiong Liang
Identifying patterns in 2D linear projections is important in understanding multi-dimensional datasets. However, local patterns, which are formed by subsets of the data points, are usually obscured by noise and missed by traditional quality measures that assess the whole dataset. In this paper, we propose an interactive interface to explore 2D linear projections for visual patterns in subsets. First, we propose a voting-based algorithm to recommend the optimal projection, in which the identified pattern looks most salient. Specifically, we propose three kinds of point-wise quality metrics of 2D linear projections, for outliers, clusters, and trends respectively. For each sampled projection, we measure its importance by accumulating the metrics of the selected points; the projection with the highest importance is recommended. Second, we design an exploration interface with a scatterplot, a projection trail map, and a control panel, which allows users to explore projections by specifying data subsets of interest. Finally, we employ three datasets and demonstrate the effectiveness of our approach through three case studies exploring clusters, outliers, and trends.
Title: "Exploring linear projections for revealing clusters, outliers, and trends in subsets of multi-dimensional datasets". Journal of Visual Languages and Computing, vol. 48, pp. 52–60.