
Latest publications in the Journal of Visual Languages and Computing

Visual augmentation of source code editors: A systematic mapping study
Q3 Computer Science Pub Date : 2018-12-01 DOI: 10.1016/j.jvlc.2018.10.001
Matúš Sulír, Michaela Bačíková, Sergej Chodarev, Jaroslav Porubän

Source code written in textual programming languages is typically edited in integrated development environments (IDEs) or specialized code editors. These tools often display various visual items, such as icons, color highlights or more advanced graphical overlays directly in the main editable source code view. We call such visualizations source code editor augmentation.

In this paper, we present a first systematic mapping study of source code editor augmentation tools and approaches. We manually reviewed the metadata of 5553 articles published during the last twenty years in two phases – keyword search and references search. The result is a list of 103 relevant articles and a taxonomy of source code editor augmentation tools with seven dimensions, which we used to categorize the resulting list of the surveyed articles.

We also provide the definition of the term source code editor augmentation, along with a brief overview of historical development and augmentations available in current industrial IDEs.

{"title":"Visual augmentation of source code editors: A systematic mapping study","authors":"Matúš Sulír,&nbsp;Michaela Bačíková,&nbsp;Sergej Chodarev,&nbsp;Jaroslav Porubän","doi":"10.1016/j.jvlc.2018.10.001","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.10.001","url":null,"abstract":"<div><p>Source code written in textual programming languages is typically edited in integrated development environments (IDEs) or specialized code editors. These tools often display various visual items, such as icons, color highlights or more advanced graphical overlays directly in the main editable source code view. We call such visualizations source code editor augmentation.</p><p>In this paper, we present a first systematic mapping study of source code editor augmentation tools and approaches. We manually reviewed the metadata of 5553 articles published during the last twenty years in two phases – keyword search and references search. The result is a list of 103 relevant articles and a taxonomy of source code editor augmentation tools with seven dimensions, which we used to categorize the resulting list of the surveyed articles.</p><p>We also provide the definition of the term source code editor augmentation, along with a brief overview of historical development and augmentations available in current industrial IDEs.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"49 ","pages":"Pages 46-59"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.10.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72060150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Optimizing type-specific instrumentation on the JVM with reflective supertype information
Q3 Computer Science Pub Date : 2018-12-01 DOI: 10.1016/j.jvlc.2018.10.007
Andrea Rosà, Walter Binder

Reflective supertype information (RSI) is useful for many instrumentation-based type-specific analyses on the Java Virtual Machine (JVM). On the one hand, while such information can be obtained when performing the instrumentation within the same JVM process executing the instrumented program, in-process instrumentation severely limits the bytecode coverage of the analysis. On the other hand, performing the instrumentation in a separate process can achieve full bytecode coverage, but complete RSI is generally not available, often requiring the insertion of expensive runtime type checks in the instrumented program. In this article, we present a novel technique to accurately reify complete RSI in a separate instrumentation process. This is challenging, because the observed application may make use of custom classloaders and the loaded classes in one application execution are generally only known upon termination of the application. We implement our technique in an extension of the dynamic analysis framework DiSL. The resulting framework guarantees full bytecode coverage, while providing RSI. Evaluation results on a task profiler demonstrate that our technique can achieve speedups of up to 6.24× with respect to resorting to runtime type checks in the instrumentation code for an analysis with full bytecode coverage.

{"title":"Optimizing type-specific instrumentation on the JVM with reflective supertype information","authors":"Andrea Rosà,&nbsp;Walter Binder","doi":"10.1016/j.jvlc.2018.10.007","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.10.007","url":null,"abstract":"<div><p><em>Reflective supertype information (RSI)</em><span> is useful for many instrumentation-based type-specific analyses on the Java Virtual Machine (JVM). On the one hand, while such information can be obtained when performing the instrumentation within the same JVM process executing the instrumented program, in-process instrumentation severely limits the bytecode coverage of the analysis. On the other hand, performing the instrumentation in a separate process can achieve full bytecode coverage, but complete RSI is generally not available, often requiring the insertion of expensive runtime type checks in the instrumented program. In this article, we present a novel technique to accurately reify complete RSI in a separate instrumentation process. This is challenging, because the observed application may make use of custom classloaders and the loaded classes in one application execution are generally only known upon termination of the application. We implement our technique in an extension of the dynamic analysis framework DiSL. The resulting framework guarantees full bytecode coverage, while providing RSI. Evaluation results on a task profiler demonstrate that our technique can achieve speedups up to a factor of 6.24× wrt. resorting to runtime type checks in the instrumentation code for an analysis with full bytecode coverage.</span></p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"49 ","pages":"Pages 29-45"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.10.007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72060151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Qualitative representation of spatio-temporal knowledge
Q3 Computer Science Pub Date : 2018-12-01 DOI: 10.1016/j.jvlc.2018.10.002
Giuseppe Della Penna, Sergio Orefice

In this paper we present PCT (Position-Connection-Time), a formalism capable of representing spatio-temporal knowledge in a qualitative fashion. This framework achieves an expressive power comparable to other classic spatial relation formalisms describing common topological and directional spatial relations. In addition, PCT introduces new classes of relations based both on the position of the objects and on their interconnections, and incorporates the notion of time within spatial relations in order to describe dynamic contexts. In this way, PCT is also able to model spatial arrangements that change over time, e.g., moving objects.
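
To make the flavor of qualitative spatio-temporal relations concrete, the sketch below encodes a toy version of the idea in Python: coarse topological and directional relations between axis-aligned boxes, tracked over discrete time steps for a moving object. The relation names and box-based definitions are illustrative assumptions for this example only; PCT's actual relation classes (including the position- and connection-based ones) are defined in the paper itself.

```python
# Toy illustration of qualitative spatio-temporal relations (not PCT's actual
# definitions): relations are symbolic, and they change as an object moves.
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """Axis-aligned bounding box."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def topological(a: Box, b: Box) -> str:
    """Coarse topological relation of a with respect to b."""
    if a.xmax < b.xmin or b.xmax < a.xmin or a.ymax < b.ymin or b.ymax < a.ymin:
        return "disjoint"
    if b.xmin <= a.xmin and a.xmax <= b.xmax and b.ymin <= a.ymin and a.ymax <= b.ymax:
        return "inside"
    return "overlapping"

def directional(a: Box, b: Box) -> str:
    """Qualitative direction of a's centroid relative to b's centroid."""
    ax, ay = (a.xmin + a.xmax) / 2, (a.ymin + a.ymax) / 2
    bx, by = (b.xmin + b.xmax) / 2, (b.ymin + b.ymax) / 2
    ns = "north" if ay > by else ("south" if ay < by else "")
    ew = "east" if ax > bx else ("west" if ax < bx else "")
    return "-".join(p for p in (ns, ew) if p) or "same-position"

# A moving object (a car) observed at three time instants near/inside a park.
park = Box(3, -1, 10, 6)
car = {0: Box(0, 0, 1, 1), 1: Box(4, 0, 5, 1), 2: Box(8, 4, 9, 5)}
for t, pos in sorted(car.items()):
    print(f"t={t}: {topological(pos, park)}, {directional(pos, park)} of the park")
```

At t=0 the car is disjoint from and south-west of the park; afterwards it is inside it, with only the directional relation changing over time. This is the kind of symbolic, time-indexed description a qualitative formalism yields instead of raw coordinates.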

{"title":"Qualitative representation of spatio-temporal knowledge","authors":"Giuseppe Della Penna,&nbsp;Sergio Orefice","doi":"10.1016/j.jvlc.2018.10.002","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.10.002","url":null,"abstract":"<div><p>In this paper we present PCT (<em>Position-Connection-Time</em>), a formalism capable of representing <em>spatio-temporal knowledge</em><span> in a qualitative fashion. This framework achieves an expressive power<span> comparable to other classic spatial relation formalisms describing common topological and directional spatial relations. In addition, PCT introduces new classes of relations based both on the position of the objects and on their interconnections, and incorporates the notion of time within spatial relations in order to describe dynamic contexts. In this way, PCT is also able to model spatial arrangements that change over time, e.g., moving objects.</span></span></p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"49 ","pages":"Pages 1-16"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.10.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72060153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Error recovery in parsing expression grammars through labeled failures and its implementation based on a parsing machine
Q3 Computer Science Pub Date : 2018-12-01 DOI: 10.1016/j.jvlc.2018.10.003
Sérgio Queiroz de Medeiros , Fabio Mascarenhas

Parsing Expression Grammars (PEGs) are a formalism used to describe top-down parsers with backtracking. As PEGs do not provide a good error recovery mechanism, PEG-based parsers usually do not recover from syntax errors in the input, or recover from syntax errors using ad-hoc, implementation-specific features. The lack of proper error recovery makes PEG parsers unsuitable for use with Integrated Development Environments (IDEs), which need to build syntactic trees even for incomplete, syntactically invalid programs.

We discuss a conservative extension, based on PEGs with labeled failures, that adds a syntax error recovery mechanism for PEGs. This extension associates recovery expressions to labels, where a label now not only reports a syntax error but also uses this recovery expression to reach a synchronization point in the input and resume parsing. We give an operational semantics of PEGs with this recovery mechanism, as well as an operational semantics for a parsing machine that we can translate labeled PEGs with error recovery to, and prove the correctness of this translation. We use an implementation of labeled PEGs with error recovery via a parsing machine to build robust parsers, which use different recovery strategies, for the Lua language. We evaluate the effectiveness of these parsers, alone and in comparison with a Lua parser with automatic error recovery generated by ANTLR, a popular parser generator.
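
As a concrete and deliberately tiny illustration of labeled failures with recovery expressions — not the paper's operational semantics, parsing machine, or Lua grammar — the Python sketch below parses `NAME = NUMBER ;` statements. Each error label is bound to a recovery expression that skips to a synchronization token (the next `;`), so one malformed statement yields a single report and parsing resumes with the next statement. The grammar, labels, and recovery policy are hypothetical choices for this example.

```python
# Toy sketch of labeled failures with recovery expressions (hypothetical grammar
# and labels; the paper gives the formal PEG semantics and a full Lua parser).
# Grammar: program <- statement+    statement <- NAME '=' NUMBER ';'
import re

tokenize = re.compile(r"[A-Za-z_]\w*|\d+|[=;]|\S").findall

def sync_after_semicolon(toks, i):
    """Recovery expression: discard input up to and including the next ';'."""
    while i < len(toks) and toks[i] != ";":
        i += 1
    return min(i + 1, len(toks))

# Every label carries its own recovery expression (here they all share one).
RECOVERY = {label: sync_after_semicolon
            for label in ("ErrName", "ErrEq", "ErrNum", "ErrSemi")}

def statement(toks, i, errors):
    """On a labeled failure: report it, run the label's recovery expression to a
    synchronization point, and let the caller resume with the next statement."""
    parts = [(str.isidentifier, "ErrName"), (lambda t: t == "=", "ErrEq"),
             (str.isdigit, "ErrNum"), (lambda t: t == ";", "ErrSemi")]
    for accepts, label in parts:
        if i < len(toks) and accepts(toks[i]):
            i += 1
        else:
            errors.append((label, toks[i] if i < len(toks) else "<eof>"))
            return RECOVERY[label](toks, i)
    return i

def parse(src):
    toks, i, errors = tokenize(src), 0, []
    while i < len(toks):
        i = statement(toks, i, errors)
    return errors

print(parse("x = 1; y 2; z = ;"))   # [('ErrEq', '2'), ('ErrNum', ';')]
```

On the input `x = 1; y 2; z = ;` the parser reports one error per malformed statement and keeps going instead of stopping at the first failure, which is the property that makes labeled recovery attractive for IDE-oriented parsers.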

{"title":"Error recovery in parsing expression grammars through labeled failures and its implementation based on a parsing machine","authors":"Sérgio Queiroz de Medeiros ,&nbsp;Fabio Mascarenhas","doi":"10.1016/j.jvlc.2018.10.003","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.10.003","url":null,"abstract":"<div><p><span>Parsing Expression Grammars (PEGs) are a formalism used to describe top-down parsers with backtracking. As PEGs do not provide a good error recovery mechanism, PEG-based parsers usually do not recover from syntax errors in the input, or recover from syntax errors using ad-hoc, implementation-specific features. The lack of proper error recovery makes PEG parsers unsuitable for use with Integrated Development Environments (IDEs), which need to build </span>syntactic trees even for incomplete, syntactically invalid programs.</p><p>We discuss a conservative extension, based on PEGs with labeled failures, that adds a syntax error recovery mechanism for PEGs. This extension associates <em>recovery expressions</em><span>to labels, where a label now not only reports a syntax error but also uses this recovery expression to reach a synchronization point<span> in the input and resume parsing. We give an operational semantics of PEGs with this recovery mechanism, as well as an operational semantics for a </span></span><em>parsing machine</em><span>that we can translate labeled PEGs with error recovery to, and prove the correctness of this translation. We use an implementation of labeled PEGs with error recovery via a parsing machine to build robust parsers, which use different recovery strategies, for the Lua language. We evaluate the effectiveness of these parsers, alone and in comparison with a Lua parser with automatic error recovery generated by ANTLR, a popular parser generator .</span></p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"49 ","pages":"Pages 17-28"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.10.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72060149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
E-Embed: A time series visualization framework based on earth mover’s distance
Q3 Computer Science Pub Date : 2018-10-01 DOI: 10.1016/j.jvlc.2018.08.002
Bingkun Chen, Hong Zhou, Xiaojun Chen

Time series analysis is an important topic in machine learning, and a suitable visualization method can facilitate the work of data mining. In this paper, we propose E-Embed: a novel framework to visualize time series data by projecting them into a low-dimensional space while capturing the underlying data structure. In the E-Embed framework, we use discrete distributions to model time series and measure the distances between them by using the earth mover’s distance (EMD). After the distances between time series are calculated, we can visualize the data with dimensionality reduction algorithms. To effectively combine different dimensionality reduction methods (such as Isomap) that depend on a K-nearest neighbor (KNN) graph, we propose an algorithm for constructing a KNN graph based on the earth mover’s distance. We evaluate our visualization framework on both univariate and multivariate time series data. Experimental results demonstrate that E-Embed can provide high-quality visualization with low computational cost.
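
A minimal sketch of this kind of pipeline is shown below, under the assumption that each time series is summarized by the empirical distribution of its values: pairwise dissimilarities are computed with the 1-D earth mover's (Wasserstein) distance, a KNN graph is derived from the resulting distance matrix, and the matrix is embedded in 2D for plotting. It uses off-the-shelf SciPy/scikit-learn routines and is not the paper's own EMD-based KNN-graph construction or embedding algorithm.

```python
# Minimal sketch (not the paper's algorithm): time series -> value distributions
# -> pairwise earth mover's distances -> KNN graph / 2-D embedding.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Toy dataset: ten smooth sinusoid-like series and ten noisy, shifted ones.
series = [np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.1, 200) for _ in range(10)]
series += [rng.normal(1.0, 0.5, 200) for _ in range(10)]

n = len(series)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # 1-D EMD between the empirical value distributions of the two series.
        dist[i, j] = dist[j, i] = wasserstein_distance(series[i], series[j])

# KNN graph over the EMD distance matrix (what graph-based embeddings consume).
k = 3
knn_graph = {i: np.argsort(dist[i])[1:k + 1].tolist() for i in range(n)}

# 2-D embedding of the precomputed distances, ready for a scatter plot.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
print(coords.shape)          # (20, 2); the two regimes separate clearly
```

Swapping the metric-MDS step for a graph-based method such as Isomap is exactly where a carefully constructed EMD-based KNN graph, as proposed in the paper, becomes important.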

{"title":"E-Embed: A time series visualization framework based on earth mover’s distance","authors":"Bingkun Chen,&nbsp;Hong Zhou,&nbsp;Xiaojun Chen","doi":"10.1016/j.jvlc.2018.08.002","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.002","url":null,"abstract":"<div><p>Time series analysis is an important topic in machine learning and a suitable visualization method can be used to facilitate the work of data mining. In this paper, we propose E-Embed: a novel framework to visualize time series data<span> by projecting them into a low-dimensional space while capturing the underlying data structure. In the E-Embed framework, we use discrete distributions to model time series and measure the distances between them by using earth mover’s distance (EMD). After the distances between time series are calculated, we can visualize the data by dimensionality reduction algorithms. To combine different dimensionality reduction methods (such as Isomap) that depend on K-nearest neighbor (KNN) graph effectively, we propose an algorithm for constructing a KNN graph based on the earth mover’s distance. We evaluate our visualization framework on both univariate time series data and multivariate time series data. Experimental results demonstrate that E-Embed can provide high quality visualization with low computational cost.</span></p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 110-122"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72081868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Physically based optical parameter database obtained from real materials for real-time material rendering
Q3 Computer Science Pub Date : 2018-10-01 DOI: 10.1016/j.jvlc.2018.06.004
Hong Sungin, Lee Chulhee, Chin Seongah

In order to render objects in computer graphics and video games that closely resemble real objects, it is necessary to emulate the physical characteristics of the material and determine optical parameters consisting of an absorption coefficient and a scattering coefficient, which are measured from real objects. In this study, we propose a physically based rendering technique that enables real-time rendering by extracting the optical parameters required for rendering opaque and translucent materials and then collecting the obtained information in a database (DB). For this purpose, optical parameters were extracted from the high-dynamic-range image (HDRI) of an object, which was obtained using self-produced optical imaging equipment by taking images of its upper and lower parts. Furthermore, by binding the optical parameter with the texture of the corresponding material, 122 material-rendering DB sets were established. The validity of the proposed method was verified through the evaluation of the result by 118 users.
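
The sketch below illustrates only the data-management side of such a database and how absorption/scattering coefficients drive a simple attenuation computation via the Beer–Lambert law, T = exp(−(σa + σs)·d) per color channel. The material names and coefficient values are placeholders invented for the example, not the paper's measured data, and a real-time renderer would evaluate such terms in a shader rather than in Python.

```python
# Hypothetical optical-parameter database sketch; the coefficient values below
# are placeholders, not measured data. Unscattered transmittance through a slab
# follows the Beer-Lambert law: T = exp(-(sigma_a + sigma_s) * d) per channel.
import math

# Per-channel (R, G, B) absorption/scattering coefficients in 1/mm (made up).
MATERIAL_DB = {
    "wax":    {"sigma_a": (0.02, 0.03, 0.07), "sigma_s": (1.10, 1.20, 1.40)},
    "marble": {"sigma_a": (0.01, 0.01, 0.02), "sigma_s": (2.10, 2.30, 2.60)},
}

def transmittance(material: str, depth_mm: float) -> tuple:
    """Fraction of unscattered light surviving `depth_mm` of the material."""
    entry = MATERIAL_DB[material]
    return tuple(
        math.exp(-(a + s) * depth_mm)
        for a, s in zip(entry["sigma_a"], entry["sigma_s"])
    )

# Thicker slabs transmit less light; red is attenuated least for these numbers.
print(transmittance("wax", 0.5))
print(transmittance("wax", 2.0))
```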

{"title":"Physically based optical parameter database obtained from real materials for real-time material rendering","authors":"Hong Sungin,&nbsp;Lee Chulhee,&nbsp;Chin Seongah","doi":"10.1016/j.jvlc.2018.06.004","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.06.004","url":null,"abstract":"<div><p>In order to render objects in computer graphics and video games that closely resemble real objects, it is necessary to emulate the physical characteristics of the material and determine optical parameters consisting of an absorption coefficient and a scattering coefficient, which are measured from real objects. In this study, we propose a physically based rendering technique that enables real-time rendering by extracting the optical parameters required for rendering opaque and translucent materials and then collecting the obtained information in a database (DB). For this purpose, optical parameters were extracted from the high-dynamic-range image (HDRI) of an object, which was obtained using self-produced optical imaging equipment by taking images of its upper and lower parts. Furthermore, by binding the optical parameter with the texture of the corresponding material, 122 material-rendering DB sets were established. The validity of the proposed method was verified through the evaluation of the result by 118 users.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 29-39"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.06.004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Illustrative visualization of time-varying features in spatio-temporal data
Q3 Computer Science Pub Date : 2018-10-01 DOI: 10.1016/j.jvlc.2018.08.010
Xiangyang Wu , Zixi Chen , Yuhui Gu , Weiru Chen , Mei-e Fang

Identifying and analyzing time-varying features is important for understanding spatio-temporal datasets. While there are numerous studies on illustrative visualization, existing solutions can hardly show subtle variations in a temporal dataset. This paper introduces a novel illustrative visualization scheme that employs temporal filtering techniques to disclose the desired tiny features, which are further enhanced by an adaptive temporal illustration technique. Context that is not of interest can be suppressed in a similar fashion. We develop a visual exploration system that empowers users to interactively manipulate and analyze temporal features. Experimental results on a mobile-call dataset demonstrate the effectiveness and usefulness of our method.
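
The abstract does not spell out the filter design, so the sketch below shows only the generic principle under a simple assumption: a moving average acts as the temporal low-pass that models the uninteresting context, its residual exposes the subtle time-varying features, and the two components are rescaled so the features are emphasized and the context is suppressed. The paper's actual filtering and adaptive illustration technique are more elaborate.

```python
# Generic sketch of temporal filtering for feature emphasis (the paper's filter
# and adaptive illustration technique go beyond this simple moving average).
import numpy as np

def moving_average(x, window=25):
    """Temporal low-pass: models the slow-moving, unconcerned context."""
    return np.convolve(x, np.ones(window) / window, mode="same")

rng = np.random.default_rng(1)
t = np.arange(1000)
# A strong slow oscillation plus a tiny 5-sample transient that is hard to see.
signal = 50 * np.sin(2 * np.pi * t / 500) + rng.normal(0, 0.3, t.size)
signal[600:605] += 4.0

trend = moving_average(signal)      # low-frequency context
detail = signal - trend             # subtle time-varying features
emphasized = 5.0 * detail           # enhance the features of interest
suppressed = 0.2 * trend            # de-emphasize the context

# The transient, invisible next to the +/-50 oscillation, dominates the residual.
print(int(np.argmax(np.abs(detail))))   # an index inside 600..604
```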

{"title":"Illustrative visualization of time-varying features in spatio-temporal data","authors":"Xiangyang Wu ,&nbsp;Zixi Chen ,&nbsp;Yuhui Gu ,&nbsp;Weiru Chen ,&nbsp;Mei-e Fang","doi":"10.1016/j.jvlc.2018.08.010","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.010","url":null,"abstract":"<div><p>Identifying and analyzing the time-varying features is important for understanding the spatio-temporal datasets. While there are numerous studies on illustrative visualization, existing solutions can hardly show subtle variations in a temporal dataset. This paper introduces a novel illustrative visualization scheme that employs temporal filtering techniques to disclose desired tiny features, which are further enhanced by an adaptive temporal illustration technique. The unconcerned context can be suppressed in a similar fashion. We develop a visual exploration system that empowers users to interactively manipulate and analyze temporal features. The experimental results on a mobile calling data demonstrate the effectivity and usefulness of our method.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 157-168"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Visual potential expert prediction in question and answering communities
Q3 Computer Science Pub Date : 2018-10-01 DOI: 10.1016/j.jvlc.2018.03.001
Xiaoxiao Xiong, Min Fu, Min Zhu, Jing Liang

The success of Question and Answering (Q&A) communities mainly depends on the contribution of experts. However, it is difficult for a machine to identify these experts as soon as they join a community, because users show too little activity during their early participation. To tackle this, we bring human business experience into potential-expert prediction by combining machine learning and visual analytics. In this work, we propose a visual analytics system to identify potential experts semi-automatically. After the machine learning algorithm outputs an expert probability for each user, analysts can locate a set of interested users whose expert probability is ambiguous and inspect their profiles and behavior patterns through multi-dimensional data visualizations. Finally, our system models the analysts' knowledge of community members' identities and quantifies that knowledge for the machine learning algorithm. Thus, analysts can smoothly adjust the machine learning algorithm and the prediction process. A quantitative evaluation on real data demonstrates the effectiveness of our system.
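
A sketch of the automatic half of such a workflow is given below: a classifier trained on early-participation features scores every user, and the users whose expert probability falls in an ambiguous band are surfaced for analyst inspection. The features, thresholds, and feedback step are hypothetical stand-ins; the paper's contribution lies in the surrounding visual analytics and in modeling the analysts' knowledge, which a few lines of scikit-learn cannot capture.

```python
# Sketch of the automatic scoring step only (hypothetical features/thresholds);
# the paper pairs this with interactive visual analytics and analyst modeling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for early-participation features (answer count, votes, ...).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_pool, y_train, _ = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_pool)[:, 1]          # expert probability per user

# "Interested users": predictions too ambiguous to trust automatically; these
# are the ones handed to analysts for visual inspection.
ambiguous = np.where((proba > 0.35) & (proba < 0.65))[0]
print(len(ambiguous), "users need analyst review")

# Analyst feedback (hypothetical labels) could then be fed back as training data.
analyst_labels = {int(i): 1 for i in ambiguous[:5]}   # e.g., confirmed experts
```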

{"title":"Visual potential expert prediction in question and answering communities","authors":"Xiaoxiao Xiong,&nbsp;Min Fu,&nbsp;Min Zhu,&nbsp;Jing Liang","doi":"10.1016/j.jvlc.2018.03.001","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.03.001","url":null,"abstract":"<div><p><span>The success of Question and Answering (Q&amp;A) communities mainly depends on the contribution of experts. However, there is a bottleneck for machine to identify these experts as soon as they participate in a community due to lack of enough activities during users’ early participation. To tackle that, we bring human’s business experience to potential expert prediction by combining machine learning and visual analytics. In this work, we propose a visual analytics system to identify potential experts semi-automatically. After the machine learning algorithm gives the result of the expert probability, analysts can locate a set of </span><em>interested users</em><span> whose expert probability is ambiguous and check the user information and behavior patterns of those users via the design of multi-dimension data visualization. Finally, our system models analysts’ knowledge of the community members’ identities, and then abstracts the knowledge quantificationally for machine learning algorithm. Thus, analysts can modify machine learning algorithm and the prediction process smoothly. A quantitative evaluation with real data has been studied to demonstrate the effectiveness of our system.</span></p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 70-80"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.03.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Knowledge graph based on domain ontology and natural language processing technology for Chinese intangible cultural heritage
Q3 Computer Science Pub Date : 2018-10-01 DOI: 10.1016/j.jvlc.2018.06.005
Jinhua Dou , Jingyan Qin , Zanxia Jin , Zhuang Li

Intangible cultural heritage (ICH) is a precious historical and cultural resource of a country, and its protection and inheritance are important to the sustainable development of national culture. There are many different intangible cultural heritage items in China. With the development of information technology, ICH database resources have been built by government departments or public cultural service institutions, but most of these databases are widely dispersed. Traditional database systems are ill-suited to the storage, management and analysis of massive data. At the same time, a large quantity of data has been produced along with the development of digital intangible cultural heritage. The public is unable to grasp key knowledge quickly because of the massive and fragmented nature of the data. To solve these problems, we propose an intangible cultural heritage knowledge graph to assist knowledge management and provide a service to the public. An ICH domain ontology was defined with the help of intangible cultural heritage experts and knowledge engineers to regulate the concepts, attributes and relationships of ICH knowledge. In this study, massive ICH data were obtained, and domain knowledge was extracted from ICH text data using Natural Language Processing (NLP) technology. A knowledge base based on the domain ontology and instances of Chinese intangible cultural heritage was constructed, and the knowledge graph was developed. The patterns and characteristics behind the intangible cultural heritage are presented based on the ICH knowledge graph. The ICH knowledge graph can support the organization, management and protection of intangible cultural heritage knowledge, and the public can obtain ICH knowledge quickly and discover linked knowledge. The knowledge graph is thus helpful for the protection and inheritance of intangible cultural heritage.
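
The sketch below shows only the final assembly step in miniature: extracted (subject, predicate, object) triples are checked against a small ontology and loaded into a graph whose one-hop neighborhoods expose linked knowledge. The toy ontology, the example triples, and the use of networkx are assumptions made for illustration; the paper's pipeline relies on a full ICH domain ontology and NLP-based extraction from large Chinese text collections.

```python
# Toy assembly of a knowledge graph from extracted triples under a hand-written
# ontology (placeholder ontology and triples; not the paper's ICH ontology).
import networkx as nx

# Toy ontology: which predicates are allowed between which classes.
ONTOLOGY = {
    ("CraftItem", "inheritedBy", "Person"),
    ("CraftItem", "originatesFrom", "Region"),
    ("Person", "bornIn", "Region"),
}

# Triples as they might come out of an information-extraction step.
extracted = [
    ("Paper-cutting", "CraftItem", "inheritedBy", "Master Wang", "Person"),
    ("Paper-cutting", "CraftItem", "originatesFrom", "Shaanxi", "Region"),
    ("Master Wang", "Person", "bornIn", "Shaanxi", "Region"),
    ("Master Wang", "Person", "inheritedBy", "Shaanxi", "Region"),  # rejected
]

kg = nx.MultiDiGraph()
for subj, s_cls, pred, obj, o_cls in extracted:
    if (s_cls, pred, o_cls) in ONTOLOGY:                 # ontology check
        kg.add_node(subj, cls=s_cls)
        kg.add_node(obj, cls=o_cls)
        kg.add_edge(subj, obj, relation=pred)

# Linked knowledge: everything one hop away from a given heritage item.
print(list(kg.successors("Paper-cutting")))
print(kg.number_of_edges())        # 3 valid triples; 1 rejected by the ontology
```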

{"title":"Knowledge graph based on domain ontology and natural language processing technology for Chinese intangible cultural heritage","authors":"Jinhua Dou ,&nbsp;Jingyan Qin ,&nbsp;Zanxia Jin ,&nbsp;Zhuang Li","doi":"10.1016/j.jvlc.2018.06.005","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.06.005","url":null,"abstract":"<div><p>Intangible cultural heritage (ICH) is a precious historical and cultural resource of a country. Protection and inheritance of ICH is important to the sustainable development of national culture. There are many different intangible cultural heritage items in China. With the development of information technology, ICH database resources were built by government departments or public cultural services institutions, but most databases were widely dispersed. Certain traditional database systems are disadvantageous to storage, management and analysis of massive data. At the same time, a large quantity of data has been produced, accompanied by digital intangible cultural heritage development. The public is unable to grasp key knowledge quickly because of the massive and fragmented nature of the data. To solve these problems, we proposed the intangible cultural heritage knowledge graph to assist knowledge management and provide a service to the public. ICH domain ontology was defined with the help of intangible cultural heritage experts and knowledge engineers to regulate the concept, attribute and relationship of ICH knowledge. In this study, massive ICH data were obtained, and domain knowledge was extracted from ICH text data using the Natural Language Processing (NLP) technology. A knowledge base based on domain ontology and instances for Chinese intangible cultural heritage was constructed, and the knowledge graph was developed. The pattern and characteristics behind the intangible cultural heritage were presented based on the ICH knowledge graph. The knowledge graph for ICH could foster support for organization, management and protection of the intangible cultural heritage knowledge. The public can also obtain the ICH knowledge quickly and discover the linked knowledge. The knowledge graph is helpful for the protection and inheritance of intangible cultural heritage.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 19-28"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.06.005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 68
Exploring linear projections for revealing clusters, outliers, and trends in subsets of multi-dimensional datasets
Q3 Computer Science Pub Date : 2018-10-01 DOI: 10.1016/j.jvlc.2018.08.003
Jiazhi Xia , Le Gao , Kezhi Kong , Ying Zhao , Yi Chen , Xiaoyan Kui , Yixiong Liang

Identifying patterns in 2D linear projections is important for understanding multi-dimensional datasets. However, local patterns, which are composed of partial data points, are usually obscured by noise and missed by traditional quality-measure approaches that assess the whole dataset. In this paper, we propose an interactive interface to explore 2D linear projections with visual patterns on subsets. First, we propose a voting-based algorithm to recommend the optimal projection, in which the identified pattern looks the most salient. Specifically, we propose three kinds of point-wise quality metrics of 2D linear projections, for outliers, clusters, and trends, respectively. For each sampled projection, we measure its importance by accumulating the metrics of the selected points, and the projection with the highest importance is recommended. Second, we design an exploration interface with a scatterplot, a projection trail map, and a control panel. Our interface allows users to explore projections by specifying data subsets of interest. Finally, we employ three datasets and demonstrate the effectiveness of our approach through three case studies of exploring clusters, outliers, and trends.
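
A much-simplified sketch of the recommendation loop appears below, with a single toy point-wise metric (distance to the k-th nearest neighbour in the projected view) standing in for the paper's three metrics and its voting scheme: candidate 2D linear projections are sampled, each is scored by accumulating the metric over the user-selected subset, and the projection in which that subset looks most salient is recommended.

```python
# Simplified sketch of subset-driven projection recommendation (one toy metric
# instead of the paper's three metrics and voting algorithm).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))            # multi-dimensional dataset
X[:5] += np.array([4.0] + [0.0] * 7)     # a few points that stand out along dim 0
selected = np.arange(5)                  # the user-selected subset of interest

def random_projection_basis(d, rng):
    """Orthonormal basis of a random 2-D linear subspace of R^d."""
    q, _ = np.linalg.qr(rng.normal(size=(d, 2)))
    return q                              # shape (d, 2)

def pointwise_outlierness(Y, k=10):
    """Toy point-wise metric: distance to the k-th nearest neighbour in 2-D."""
    diff = Y[:, None, :] - Y[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return np.sort(dist, axis=1)[:, k]

best_score, best_basis = -np.inf, None
for _ in range(200):                      # sampled candidate projections
    basis = random_projection_basis(X.shape[1], rng)
    score = pointwise_outlierness(X @ basis)[selected].sum()
    if score > best_score:
        best_score, best_basis = score, basis

# The recommended view is the sampled projection that makes the subset salient.
Y = X @ best_basis
print(best_score, Y.shape)
```

With this toy data the winning basis tends to weight dimension 0 heavily, since that is the direction along which the selected points actually stand out.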

{"title":"Exploring linear projections for revealing clusters, outliers, and trends in subsets of multi-dimensional datasets","authors":"Jiazhi Xia ,&nbsp;Le Gao ,&nbsp;Kezhi Kong ,&nbsp;Ying Zhao ,&nbsp;Yi Chen ,&nbsp;Xiaoyan Kui ,&nbsp;Yixiong Liang","doi":"10.1016/j.jvlc.2018.08.003","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.003","url":null,"abstract":"<div><p>Identifying patterns in 2D linear projections is important in understanding multi-dimensional datasets. However, local patterns, which are composed of partial data points, are usually obscured by noises and missed in traditional quality measure approaches that measure the whole dataset. In this paper, we propose an interactive interface to explore 2D linear projections with visual patterns on subsets. First, we propose a voting-based algorithm to recommend optimal projection, in which the identified pattern looks the most salient. Specifically, we propose three kinds of point-wise quality metrics of 2D linear projections for outliers, clusterings, and trends, respectively. For each sampled projection, we measure its importance by accumulating the metrics of selected points. The projection with the highest importance is recommended. Second, we design an exploring interface with a scatterplot, a projection trail map, and a control panel. Our interface allows users to explore projections by specifying interested data subsets. At last, we employ three datasets and demonstrate the effectiveness of our approach through three case studies of exploring clusters, outliers, and trends.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 52-60"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7