LoopGrafter: Visual Support for the Grafting Workflow of Protein Loops
Pub Date: 2021-09-29 | DOI: 10.1109/TVCG.2021.3114755
Filip Opaleny, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byska, Gaspar P Pinto, David Bednar, Katarina Furmanová, Barbora Kozlíková
In the process of understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on the exploration of protein regions called loops. Analyzing various characteristics of these regions helps experts design the transfer of a desired function from one protein to another, a process denoted as loop grafting. Because loop grafting requires extensive manual treatment and currently lacks proper visual support, we designed LoopGrafter: a web-based tool that provides experts with visual support through all steps of the loop grafting pipeline. The tool is logically divided into several phases, starting with the definition of two input proteins and ending with a set of grafted proteins. Each phase is supported by a specific set of abstracted 2D visual representations of the loaded proteins and their loops, interactively linked with a 3D view of the proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for grafting. In the end, the actual in-silico insertion of the loop candidates from one protein into the other is performed and the results are presented visually to the user. In this way, the fully computational rational design of proteins and their loops yields newly designed protein structures that can be further assembled and tested in in-vitro experiments. LoopGrafter was designed in tight collaboration with protein engineers, and its final appearance reflects many testing iterations. We showcase the contribution of LoopGrafter on a real-world scenario and provide the experts' feedback, which confirms the usefulness of our tool.
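As a rough illustration of the kind of candidate filtering and in-silico insertion the abstract describes, the sketch below models proteins as plain records and grafts a donor loop into an acceptor sequence. All names, scores, and thresholds are invented for illustration; this is not LoopGrafter's implementation.

```python
# Hypothetical sketch of a loop-grafting pipeline step; not LoopGrafter's code.
# Loop records, scores, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Loop:
    name: str           # loop label in its protein, e.g. "L1"
    start: int          # first residue index of the loop
    end: int            # last residue index (inclusive)
    sequence: str       # one-letter residue codes
    flexibility: float  # assumed per-loop score in [0, 1]

def candidate_loops(donor_loops, min_len=4, max_len=20, min_flex=0.5):
    """Shape the candidate list: keep loops within a length range and above a flexibility threshold."""
    return [lp for lp in donor_loops
            if min_len <= (lp.end - lp.start + 1) <= max_len and lp.flexibility >= min_flex]

def graft(acceptor_seq, acceptor_loop, donor_loop):
    """In-silico insertion: replace the acceptor loop's residues with the donor loop's sequence."""
    return (acceptor_seq[:acceptor_loop.start]
            + donor_loop.sequence
            + acceptor_seq[acceptor_loop.end + 1:])

# Toy usage with made-up data.
donor = [Loop("L1", 10, 14, "GSGSG", 0.7), Loop("L2", 30, 33, "PQRS", 0.3)]
acceptor_seq = "M" * 50
acceptor_loop = Loop("A1", 12, 16, "AAAAA", 0.6)
for lp in candidate_loops(donor):
    print(lp.name, graft(acceptor_seq, acceptor_loop, lp))
```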
{"title":"LoopGrafter: Visual Support for the Grafting Workflow of Protein Loops.","authors":"Filip Opaleny, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byska, Gaspar P Pinto, David Bednar, Katarina FurmanovA, Barbora KozlikovA","doi":"10.1109/TVCG.2021.3114755","DOIUrl":"10.1109/TVCG.2021.3114755","url":null,"abstract":"<p><p>In the process of understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on the exploration of regions in proteins called loops. Analyzing various characteristics of these regions helps the experts to design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. As this process requires extensive manual treatment and currently there is no proper visual support for it, we designed LoopGrafter: a web-based tool that provides experts with visual support through all the loop grafting pipeline steps. The tool is logically divided into several phases, starting with the definition of two input proteins and ending with a set of grafted proteins. Each phase is supported by a specific set of abstracted 2D visual representations of loaded proteins and their loops that are interactively linked with the 3D view onto proteins. By sequentially passing through the individual phases, the user is shaping the list of loops that are potential candidates for loop grafting. In the end, the actual in-silico insertion of the loop candidates from one protein to the other is performed and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. LoopGrafter was designed in tight collaboration with protein engineers, and its final appearance reflects many testing iterations. We showcase the contribution of LoopGrafter on a real case scenario and provide the readers with the experts' feedback, confirming the usefulness of our tool.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"PP ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39468468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining Effort in 1D Uncertainty Communication Using Individual Differences in Working Memory and NASA-TLX
Pub Date: 2021-08-10 | DOI: 10.31234/osf.io/wpz8b
Spencer C. Castro, P. S. Quinan, Helia Hosseinpour, Lace M. K. Padilla
As uncertainty visualizations for general audiences become increasingly common, designers must understand the full impact of uncertainty communication techniques on viewers' decision processes. Prior work demonstrates mixed performance outcomes with respect to how individuals make decisions using various visual and textual depictions of uncertainty. Part of the inconsistency across findings may be due to an over-reliance on task accuracy, which cannot, on its own, provide a comprehensive understanding of how uncertainty visualization techniques support reasoning processes. In this work, we advance the debate surrounding the efficacy of modern 1D uncertainty visualizations by conducting converging quantitative and qualitative analyses of both the effort and strategies used by individuals when provided with quantile dotplots, density plots, interval plots, mean plots, and textual descriptions of uncertainty. We utilize two approaches for examining effort across uncertainty communication techniques: a measure of individual differences in working-memory capacity known as an operation span (OSPAN) task and self-reports of perceived workload via the NASA-TLX. The results reveal that both visualization methods and working-memory capacity impact participants' decisions. Specifically, quantile dotplots and density plots (i.e., distributional annotations) result in more accurate judgments than interval plots, textual descriptions of uncertainty, and mean plots (i.e., summary annotations). Additionally, participants' open-ended responses suggest that individuals viewing distributional annotations are more likely to employ a strategy that explicitly incorporates uncertainty into their judgments than those viewing summary annotations. When comparing quantile dotplots to density plots, this work finds that both methods are equally effective for low-working-memory individuals. However, for individuals with high-working-memory capacity, quantile dotplots evoke more accurate responses with less perceived effort. Given these results, we advocate for the inclusion of converging behavioral and subjective workload metrics in addition to accuracy performance to further disambiguate meaningful differences among visualization techniques.
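For readers unfamiliar with the chart types compared above, the sketch below shows one common way a quantile dotplot's dots can be derived from a predictive distribution: take n equally spaced quantiles and stack one dot per quantile. The distribution, dot count, and plotting details are assumptions for illustration, not the study's stimulus code.

```python
# Generic quantile-dotplot construction: represent a distribution by n equally
# spaced quantiles and draw one dot per quantile. Not the study's stimulus code.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

n_dots = 20
dist = stats.norm(loc=30, scale=5)          # assumed predictive distribution
probs = (np.arange(n_dots) + 0.5) / n_dots  # midpoints of n equal-probability bins
dots = dist.ppf(probs)                      # one value per quantile

# Stack dots into columns by binning, so equal-height stacks read as equal probability.
bins = np.round(dots).astype(int)
xs = np.concatenate([np.full(np.sum(bins == b), b) for b in np.unique(bins)])
ys = np.concatenate([np.arange(np.sum(bins == b)) for b in np.unique(bins)])
plt.scatter(xs, ys + 0.5, s=200)
plt.yticks([])
plt.xlabel("predicted value")
plt.show()
```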
{"title":"Examining Effort in 1D Uncertainty Communication Using Individual Differences in Working Memory and NASA-TLX","authors":"Spencer C. Castro, P. S. Quinan, Helia Hosseinpour, Lace M. K. Padilla","doi":"10.31234/osf.io/wpz8b","DOIUrl":"https://doi.org/10.31234/osf.io/wpz8b","url":null,"abstract":"As uncertainty visualizations for general audiences become increasingly common, designers must understand the full impact of uncertainty communication techniques on viewers' decision processes. Prior work demonstrates mixed performance outcomes with respect to how individuals make decisions using various visual and textual depictions of uncertainty. Part of the inconsistency across findings may be due to an over-reliance on task accuracy, which cannot, on its own, provide a comprehensive understanding of how uncertainty visualization techniques support reasoning processes. In this work, we advance the debate surrounding the efficacy of modern 1D uncertainty visualizations by conducting converging quantitative and qualitative analyses of both the effort and strategies used by individuals when provided with quantile dotplots, density plots, interval plots, mean plots, and textual descriptions of uncertainty. We utilize two approaches for examining effort across uncertainty communication techniques: a measure of individual differences in working-memory capacity known as an operation span (OSPAN) task and self-reports of perceived workload via the NASA-TLX. The results reveal that both visualization methods and working-memory capacity impact participants' decisions. Specifically, quantile dotplots and density plots (i.e., distributional annotations) result in more accurate judgments than interval plots, textual descriptions of uncertainty, and mean plots (i.e., summary annotations). Additionally, participants' open-ended responses suggest that individuals viewing distributional annotations are more likely to employ a strategy that explicitly incorporates uncertainty into their judgments than those viewing summary annotations. When comparing quantile dotplots to density plots, this work finds that both methods are equally effective for low-working-memory individuals. However, for individuals with high-working-memory capacity, quantile dotplots evoke more accurate responses with less perceived effort. Given these results, we advocate for the inclusion of converging behavioral and subjective workload metrics in addition to accuracy performance to further disambiguate meaningful differences among visualization techniques.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1-1"},"PeriodicalIF":5.2,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44762862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rethinking the Ranks of Visual Channels
Pub Date: 2021-07-23 | DOI: 10.31219/osf.io/n7kxu | IEEE TVCG 28(1): 707-717
Caitlyn M. McColeman, Fumeng Yang, S. Franconeri, Timothy F. Brady
Data can be visually represented using visual channels like position, length or luminance. An existing ranking of these visual channels is based on how accurately participants could report the ratio between two depicted values. There is an assumption that this ranking should hold for different tasks and for different numbers of marks. However, there is surprisingly little existing work that tests this assumption, especially given that visually computing ratios is relatively unimportant in real-world visualizations, compared to seeing, remembering, and comparing trends and motifs, across displays that almost universally depict more than two values. To simulate the information extracted from a glance at a visualization, we instead asked participants to immediately reproduce a set of values from memory after they were shown the visualization. These values could be shown in a bar graph (position (bar)), line graph (position (line)), heat map (luminance), bubble chart (area), misaligned bar graph (length), or ‘wind map’ (angle). With a Bayesian multilevel modeling approach, we show how the rank positions of visual channels shift across different numbers of marks (2, 4 or 8) and for bias, precision, and error measures. The ranking did not hold, even for reproductions of only 2 marks, and the new probabilistic ranking was highly inconsistent for reproductions of different numbers of marks. Other factors besides channel choice had an order of magnitude more influence on performance, such as the number of values in the series (e.g., more marks led to larger errors), or the value of each mark (e.g., small values were systematically overestimated). Every visual channel was worse for displays with 8 marks than 4, consistent with established limits on visual memory. These results point to the need for a body of empirical studies that move beyond two-value ratio judgments as a baseline for reliably ranking the quality of a visual channel, including testing new tasks (detection of trends or motifs), timescales (immediate computation, or later comparison), and the number of values (from a handful, to thousands).
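To make the reproduction measures concrete, the sketch below computes per-trial bias (signed error) and absolute error from reproduced versus true values and summarizes them by channel and number of marks. The data frame, column names, and noise model are invented, and this simple summary stands in for, rather than reproduces, the paper's Bayesian multilevel analysis.

```python
# Generic bias/error computation for a reproduction task; data and noise model
# are simulated for illustration, not the paper's analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_trials = 200
df = pd.DataFrame({
    "channel": rng.choice(["position", "length", "luminance", "area", "angle"], n_trials),
    "n_marks": rng.choice([2, 4, 8], n_trials),
    "true": rng.uniform(0, 1, n_trials),
})
# Simulate reproductions with noise that grows with the number of marks,
# plus a slight overestimation of small values.
noise = rng.normal(0, 0.02 * df["n_marks"])
df["reproduced"] = np.clip(df["true"] + 0.05 * (0.5 - df["true"]) + noise, 0, 1)

df["bias"] = df["reproduced"] - df["true"]  # signed error: systematic over/underestimation
df["abs_error"] = df["bias"].abs()          # magnitude of error
summary = df.groupby(["channel", "n_marks"])[["bias", "abs_error"]].mean()
print(summary)
```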
{"title":"Rethinking the Ranks of Visual Channels","authors":"Caitlyn M. McColeman, Fumeng Yang, S. Franconeri, Timothy F. Brady","doi":"10.31219/osf.io/n7kxu","DOIUrl":"https://doi.org/10.31219/osf.io/n7kxu","url":null,"abstract":"Data can be visually represented using visual channels like position, length or luminance. An existing ranking of these visual channels is based on how accurately participants could report the ratio between two depicted values. There is an assumption that this ranking should hold for different tasks and for different numbers of marks. However, there is surprisingly little existing work that tests this assumption, especially given that visually computing ratios is relatively unimportant in real-world visualizations, compared to seeing, remembering, and comparing trends and motifs, across displays that almost universally depict more than two values. To simulate the information extracted from a glance at a visualization, we instead asked participants to immediately reproduce a set of values from memory after they were shown the visualization. These values could be shown in a bar graph (position (bar)), line graph (position (line)), heat map (luminance), bubble chart (area), misaligned bar graph (length), or ‘wind map’ (angle). With a Bayesian multilevel modeling approach, we show how the rank positions of visual channels shift across different numbers of marks (2, 4 or 8) and for bias, precision, and error measures. The ranking did not hold, even for reproductions of only 2 marks, and the new probabilistic ranking was highly inconsistent for reproductions of different numbers of marks. Other factors besides channel choice had an order of magnitude more influence on performance, such as the number of values in the series (e.g., more marks led to larger errors), or the value of each mark (e.g., small values were systematically overestimated). Every visual channel was worse for displays with 8 marks than 4, consistent with established limits on visual memory. These results point to the need for a body of empirical studies that move beyond two-value ratio judgments as a baseline for reliably ranking the quality of a visual channel, including testing new tasks (detection of trends or motifs), timescales (immediate computation, or later comparison), and the number of values (from a handful, to thousands).","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"28 1","pages":"707-717"},"PeriodicalIF":5.2,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46952789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Loon: Using Exemplars to Visualize Large-Scale Microscopy Data
Pub Date: 2021-05-04 | DOI: 10.31219/osf.io/dfajc
Devin Lange, Edward R. Polanco, R. Judson-Torres, T. Zangle, A. Lex
Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from fully automated: human intervention is needed for quality control of preprocessing steps such as segmentation, for adjusting filters and removing noise, and for analyzing the result. To support this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data, such as growth rates, and the underlying imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived-data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for deciding which drugs are suitable for a specific patient.
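One simple way to pick exemplars that "retain a close connection to the low-level data" is to select the cells whose derived attribute (here, growth rate) lies nearest to chosen quantiles, so a handful of images can stand in for the whole distribution. The sketch below is only a hypothetical illustration of that idea, not Loon's actual selection logic.

```python
# Hypothetical exemplar selection: choose the cells closest to selected quantiles
# of a derived attribute (growth rate). Not Loon's implementation.
import numpy as np

def pick_exemplars(growth_rates, quantiles=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """Return indices of cells whose growth rate is nearest to each quantile."""
    rates = np.asarray(growth_rates, dtype=float)
    targets = np.quantile(rates, quantiles)
    return [int(np.argmin(np.abs(rates - t))) for t in targets]

rng = np.random.default_rng(1)
rates = rng.normal(loc=0.02, scale=0.01, size=10_000)  # simulated per-cell growth rates
exemplar_idx = pick_exemplars(rates)
print(exemplar_idx, rates[exemplar_idx])
```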
{"title":"Loon: Using Exemplars to Visualize Large-Scale Microscopy Data","authors":"Devin Lange, Edward R. Polanco, R. Judson-Torres, T. Zangle, A. Lex","doi":"10.31219/osf.io/dfajc","DOIUrl":"https://doi.org/10.31219/osf.io/dfajc","url":null,"abstract":"Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control for preprocessing steps such as segmentation, adjusting filters, removing noise, and analyzing the result. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data such as growth rates and imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1-1"},"PeriodicalIF":5.2,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44644774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Exploration of Relationships and Structure in Low-Dimensional Embeddings
Pub Date: 2021-04-08 | DOI: 10.31219/osf.io/ujbrs
K. Eckelt, A. Hinterreiter, Patrick Adelberger, C. Walchshofer, V. Dhanoa, C. Humer, Moritz Heckmann, C. Steinparz, M. Streit
In this work, we propose an interactive visual approach for the exploration and formation of structural relationships in embeddings of high-dimensional data. These structural relationships, such as item sequences, associations of items with groups, and hierarchies between groups of items, are defining properties of many real-world datasets. Nevertheless, most existing methods for the visual exploration of embeddings treat these structures as second-class citizens or do not take them into account at all. In our proposed analysis workflow, users explore enriched scatterplots of the embedding, in which relationships between items and/or groups are visually highlighted. The original high-dimensional data for single items, groups of items, or differences between connected items and groups is accessible through additional summary visualizations. We carefully tailored these summary and difference visualizations to the various data types and semantic contexts. During their exploratory analysis, users can externalize their insights by setting up additional groups and relationships between items and/or groups. We demonstrate the utility and potential impact of our approach by means of two use cases and multiple examples from various domains.
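To ground the idea of linking an embedding scatterplot back to the original dimensions, the sketch below projects high-dimensional data to 2D with PCA and computes a per-dimension mean difference between two selected groups, the kind of quantity a difference summary might display. PCA, the group selections, and all names are assumptions for illustration; this is not the authors' system.

```python
# Generic sketch: 2D embedding of high-dimensional data plus a per-dimension
# difference summary for two selected groups. Not the paper's implementation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))              # 300 items, 10 original dimensions
X[:150, 3] += 2.0                           # make one group differ on dimension 3

emb = PCA(n_components=2).fit_transform(X)  # 2D positions for the scatterplot
print("first embedded points:\n", emb[:3])

group_a = np.arange(150)                    # user-defined selections (here, by index)
group_b = np.arange(150, 300)
diff = X[group_a].mean(axis=0) - X[group_b].mean(axis=0)
for dim, d in enumerate(diff):
    print(f"dimension {dim}: mean difference {d:+.2f}")
```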
{"title":"Visual Exploration of Relationships and Structure in Low-Dimensional Embeddings","authors":"K. Eckelt, A. Hinterreiter, Patrick Adelberger, C. Walchshofer, V. Dhanoa, C. Humer, Moritz Heckmann, C. Steinparz, M. Streit","doi":"10.31219/osf.io/ujbrs","DOIUrl":"https://doi.org/10.31219/osf.io/ujbrs","url":null,"abstract":"In this work, we propose an interactive visual approach for the exploration and formation of structural relationships in embeddings of high-dimensional data. These structural relationships, such as item sequences, associations of items with groups, and hierarchies between groups of items, are defining properties of many real-world datasets. Nevertheless, most existing methods for the visual exploration of embeddings treat these structures as second-class citizens or do not take them into account at all. In our proposed analysis workflow, users explore enriched scatterplots of the embedding, in which relationships between items and/or groups are visually highlighted. The original high-dimensional data for single items, groups of items, or differences between connected items and groups is accessible through additional summary visualizations. We carefully tailored these summary and difference visualizations to the various data types and semantic contexts. During their exploratory analysis, users can externalize their insights by setting up additional groups and relationships between items and/or groups. We demonstrate the utility and potential impact of our approach by means of two use cases and multiple examples from various domains.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1-1"},"PeriodicalIF":5.2,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48757984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
INFOVIS 2020 Program Committee
Pub Date: 2021-02-01 | DOI: 10.1109/tvcg.2020.3033686
{"title":"INFOVIS 2020 Program Committee","authors":"","doi":"10.1109/tvcg.2020.3033686","DOIUrl":"https://doi.org/10.1109/tvcg.2020.3033686","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48986798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Copyright notice
Pub Date: 2021-02-01 | DOI: 10.1109/tvcg.2020.3035922
{"title":"Copyright notice","authors":"","doi":"10.1109/tvcg.2020.3035922","DOIUrl":"https://doi.org/10.1109/tvcg.2020.3035922","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/tvcg.2020.3035922","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48791118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contents
Pub Date: 2021-02-01 | DOI: 10.1109/tvcg.2020.3033677
{"title":"Contents","authors":"","doi":"10.1109/tvcg.2020.3033677","DOIUrl":"https://doi.org/10.1109/tvcg.2020.3033677","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/tvcg.2020.3033677","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43763541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Info Vis Reviewers
Pub Date: 2021-02-01 | DOI: 10.1109/tvcg.2020.3033652
{"title":"Info Vis Reviewers","authors":"","doi":"10.1109/tvcg.2020.3033652","DOIUrl":"https://doi.org/10.1109/tvcg.2020.3033652","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45990745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}