
Latest Publications — 2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV)

Lossy Compression for Visualization of Atmospheric Data
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00017
D. Semeraro, Leigh Orf
Lossy compression is a data compression technique that sacrifices precision for the sake of higher compression rates. While loss of precision is unacceptable when storing simulation data for checkpointing, it has little discernible impact on visualization. Saving simulation output for later examination is still a prevalent workflow: domain scientists often return to data from older runs to examine it in a new context. Storing visualization data at full precision is not necessary for this purpose. Lossy compression can therefore relieve the pressure on HPC storage equipment, or be used to store data at higher temporal resolution than would otherwise be possible. In this poster we show how lossy compression was used to store visualization data for the analysis of a supercell thunderstorm. We present the visual results along with details of how compression was integrated into the workflow.
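The abstract does not detail the compressor used. As a minimal, hedged illustration of the precision-for-size tradeoff it describes, the sketch below uniformly quantizes a floating-point field to 8 bits per value — an assumed stand-in, not the authors' method:

```python
import numpy as np

def quantize(field, bits=8):
    """Uniformly quantize a float field to `bits` bits per value (lossy)."""
    lo, hi = field.min(), field.max()
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((field - lo) / scale).astype(np.uint8 if bits <= 8 else np.uint16)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct an approximate field from the quantized codes."""
    return codes.astype(np.float64) * scale + lo

# A synthetic 3D scalar field standing in for a simulation variable.
rng = np.random.default_rng(0)
field = rng.normal(size=(16, 16, 16))
codes, lo, scale = quantize(field, bits=8)
recon = dequantize(codes, lo, scale)
# Rounding bounds the per-value error by half a quantization step.
err = np.abs(field - recon).max()
```

At 8 bits per value this stores one byte instead of eight for a double-precision field, with a pointwise error bounded by half a quantization step — usually invisible in rendered imagery.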
Citations: 1
IExchange: Asynchronous Communication and Termination Detection for Iterative Algorithms
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00009
D. Morozov, T. Peterka, Hanqi Guo, Mukund Raj, Jiayi Xu, Han-Wei Shen
Iterative parallel algorithms can be implemented by synchronizing after each round. This bulk-synchronous parallel (BSP) pattern is inefficient when strict synchronization is not required: global synchronization is costly at scale and prevents amortizing load imbalance over the entire execution, and termination detection is challenging with irregular, data-dependent communication. We present an asynchronous communication protocol that efficiently interleaves communication with computation. The protocol includes global termination detection without obstructing computation and communication between nodes. The user's computational primitive only needs to indicate when local work is done; our algorithm detects when all processors reach this state. We do not assume that global work decreases monotonically, allowing processors to create new work. We illustrate the utility of our solution through experiments, including two large data analysis and visualization codes: parallel particle advection and distributed union-find. Our asynchronous algorithm is several times faster with better strong scaling efficiency than the synchronous approach.
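To make the termination problem concrete, here is a toy single-process simulation of asynchronous workers with in-flight messages. It uses a centralized counting rule — terminate when every local queue is empty and messages sent equals messages received — which is far simpler than the paper's distributed protocol, but shows why "all idle" alone is not enough when work can still be in flight and can spawn new work:

```python
from collections import deque

def simulate(num_procs, seeds, latency=3):
    """Toy asynchronous work loop. Each item of weight w > 1 spawns one
    remote item of weight w - 1, so global work is not monotone.
    Termination: all queues empty AND no messages in flight."""
    local = [deque() for _ in range(num_procs)]
    for p, w in seeds:
        local[p].append(w)
    in_flight = []            # (deliver_time, destination, work)
    sent = received = 0
    t = processed = 0
    while True:
        # deliver messages whose latency has elapsed
        arrived = [m for m in in_flight if m[0] <= t]
        in_flight = [m for m in in_flight if m[0] > t]
        for _, dst, w in arrived:
            local[dst].append(w)
            received += 1
        # global termination test: idle everywhere and nothing in flight
        if all(not q for q in local) and sent == received:
            return processed
        # each processor handles at most one item per tick
        for p in range(num_procs):
            if local[p]:
                w = local[p].popleft()
                processed += 1
                if w > 1:
                    in_flight.append((t + latency, (p + w) % num_procs, w - 1))
                    sent += 1
        t += 1

processed = simulate(4, [(0, 5), (2, 3)])
```

With seeds of weight 5 and 3, exactly 5 + 3 = 8 items are processed before the counters agree that no work remains, even though all queues are briefly empty while messages are still in transit.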
Citations: 1
An Entropy-Based Approach for Identifying User-Preferred Camera Positions
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00015
Nicole Marsaglia, Yuya Kawakami, Samuel D. Schwartz, Stefan Fields, H. Childs
Viewpoint Quality (VQ) metrics have the potential to predict human preferences for camera placement. With this study, we introduce new VQ metrics that incorporate entropy and explore how they can be used in combination. Our evaluation involves three phases: (1) creating a database of isosurface imagery from ten large scientific data sets, (2) conducting a user study with approximately 30 large data visualization experts who provided over 1000 responses, and (3) analyzing how our entropy-based VQ metrics compare with existing VQ metrics in predicting expert preference. We find that our entropy-based metrics predict expert preferences 68% of the time, while existing VQ metrics perform much worse (52%). This finding, while valuable on its own, also opens the door for future work on in situ camera placement. Finally, as another important contribution, this work provides the most extensive evaluation to date of existing VQ metrics for predicting expert preference in visualizations of large, scientific data sets.
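The paper's specific entropy-based metrics are not given in the abstract; the classic viewpoint-entropy formulation conveys the idea: treat the per-face projected areas as a probability distribution and score a camera by its Shannon entropy, so views that show many faces in balanced proportion score highest. A minimal sketch under that assumption:

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy (bits) of the normalized per-face projected areas.
    Higher entropy ~ more surface visible in balanced proportion."""
    total = sum(projected_areas)
    h = 0.0
    for a in projected_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h
```

A view showing four faces equally scores log2(4) = 2 bits, while a view dominated by one face scores near zero — so ranking candidate cameras by this score prefers the balanced view.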
Citations: 4
Trigger Happy: Assessing the Viability of Trigger-Based In Situ Analysis
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00010
Matthew Larsen, Lawrence Livermore, C. Harrison, Terece L. Turton, S. Sane, S. Brink, H. Childs
Triggers are an emerging strategy for optimizing execution time for in situ analysis. However, their performance characteristics are complex, making it difficult to decide if a particular trigger-based approach is viable. With this study, we propose a cost model for trigger-based in situ analysis that can assess viability, and we also validate the model's efficacy. Then, once the cost model is established, we apply the model to inform the space of viable approaches, considering variation in simulation code, trigger techniques, and analyses, as well as trigger inspection and fire rates. Real-world values are needed both to validate the model and to use the model to inform the space of viable approaches. We obtain these values by surveying science application teams and by performing runs as large as 2,040 GPUs and 32 billion cells.
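The paper's full cost model is richer than the abstract shows; a simplified sketch captures the core viability question — a trigger pays an inspection cost every step and the analysis cost only on firing steps, and is viable when that total beats analyzing every step. Function names and the specific form here are illustrative assumptions:

```python
def trigger_cost(n_steps, fire_rate, c_inspect, c_analysis):
    """Expected total in situ time under a trigger: every step pays
    inspection; a `fire_rate` fraction of steps also pay the analysis."""
    return n_steps * (c_inspect + fire_rate * c_analysis)

def trigger_viable(n_steps, fire_rate, c_inspect, c_analysis):
    """Viable if the trigger beats unconditionally analyzing every step."""
    return trigger_cost(n_steps, fire_rate, c_inspect, c_analysis) < n_steps * c_analysis
```

For example, a cheap inspection (1% of analysis cost) firing on 5% of steps costs about 6% of the always-analyze baseline, whereas an inspection half as expensive as the analysis itself can never pay off.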
Citations: 4
Portable and Composable Flow Graphs for In Situ Analytics
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00014
Sergei Shudler, Steve Petruzza, Valerio Pascucci, P. Bremer
Existing data analysis and visualization algorithms are used in a wide range of simulations that strive to support an increasing number of runtime systems. The BabelFlow framework has been designed to address this situation by providing users with a simple interface to implement analysis algorithms as dataflow graphs portable across different runtimes. The limitation in BabelFlow, however, is that the graphs are not easily reusable: plugging them into existing in situ workflows and constructing more complex graphs is difficult. In this paper, we introduce LegoFlow, an extension to BabelFlow that addresses these challenges. Specifically, we integrate LegoFlow into Ascent, a flyweight framework for large-scale in situ analytics, and provide a graph composability mechanism. This mechanism is an intuitive approach to link an arbitrary number of graphs together to create more complex patterns, as well as to avoid costly reimplementations for minor modifications. Without sacrificing portability, LegoFlow introduces complete flexibility that maximizes the productivity of in situ analytics workflows. Furthermore, we demonstrate a complete LULESH simulation with LegoFlow-based in situ visualization running on top of Charm++. It is a novel approach for in situ analytics, whereby the asynchronous tasking runtime allows routines for computation and analysis to overlap. Finally, we evaluate a number of LegoFlow-based filters and extracts in Ascent, as well as the scaling behavior of a LegoFlow graph for Radix-k based image compositing.
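The graph-composability idea can be illustrated with a tiny dataflow graph of named tasks whose outputs feed downstream tasks, plus a compose step that wires one graph's outputs into another's inputs without rewriting either. This is a generic sketch, not the LegoFlow/BabelFlow API:

```python
class FlowGraph:
    """A minimal dataflow graph: named tasks with data dependencies."""
    def __init__(self):
        self.tasks = {}  # name -> (callable, [upstream task names])

    def add(self, name, fn, inputs=()):
        self.tasks[name] = (fn, list(inputs))
        return self

    def compose(self, other, bridge):
        """Link `other` onto this graph. `bridge` maps a task name in
        `other` to extra inputs drawn from tasks in this graph."""
        merged = FlowGraph()
        merged.tasks = dict(self.tasks)
        for name, (fn, ins) in other.tasks.items():
            merged.tasks[name] = (fn, bridge.get(name, []) + list(ins))
        return merged

    def run(self):
        results = {}
        def resolve(name):
            if name not in results:
                fn, ins = self.tasks[name]
                results[name] = fn(*[resolve(i) for i in ins])
            return results[name]
        for name in self.tasks:
            resolve(name)
        return results

# A producer graph and an analysis graph, composed without modification.
produce = (FlowGraph()
           .add("source", lambda: [1, 2, 3, 4])
           .add("scale", lambda xs: [2 * x for x in xs], inputs=["source"]))
analyze = FlowGraph().add("total", lambda xs: sum(xs))
pipeline = produce.compose(analyze, bridge={"total": ["scale"]})
results = pipeline.run()
```

The key property is that `analyze` is reusable: the same graph can be bridged onto any producer that exposes a compatible output, which is the kind of reuse the composability mechanism targets.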
Citations: 0
High-Quality and Low-Memory-Footprint Progressive Decoding of Large-Scale Particle Data
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00011
D. Hoang, H. Bhatia, P. Lindstrom, Valerio Pascucci
Particle representations are often used in large-scale simulations and observations, frequently creating datasets containing several million particles or more. Due to their sheer size, such datasets are difficult to store, transfer, and analyze efficiently. Data compression is a promising solution; however, effective approaches to compress particle data are lacking, and no community-standard, accepted techniques exist. Current techniques are designed either to compress small data very well but require high computational resources when applied to large data, or to work with large data but without a focus on compression, resulting in low reconstruction quality per bit stored. In this paper, we present innovations targeting tree-based particle compression approaches that improve the tradeoff between high quality and low memory footprint for compression and decompression of large particle datasets. Inspired by the lazy wavelet transform, we introduce a new way of partitioning space, which allows a low-cost depth-first traversal of a particle hierarchy to cover the space broadly. We also devise novel data-adaptive traversal orders that significantly reduce reconstruction error compared to traditional data-agnostic orders such as breadth-first and depth-first traversals. The new partitioning and traversal schemes are used to build novel particle hierarchies that can be traversed with asymptotically constant memory footprint while incurring low reconstruction error. Our solution to encoding and (lossy) decoding of large particle data is a flexible block-based hierarchy that supports progressive, random-access, and error-driven decoding, where error heuristics can be supplied by the user. Finally, through extensive experimentation, we demonstrate the efficacy and the flexibility of the proposed techniques when combined as well as when used independently with existing approaches on a wide range of scientific particle datasets.
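The contrast between data-agnostic and data-adaptive traversal can be sketched with a greedy error-driven refinement: instead of visiting a hierarchy breadth-first, always decode next the node with the largest error estimate. The tree structure and error values below are hypothetical; the paper's hierarchies and error heuristics are considerably richer:

```python
import heapq

def refine_by_error(root, budget):
    """Greedy data-adaptive traversal: repeatedly visit the node with the
    largest error estimate, so reconstruction error falls fastest per node
    decoded, stopping when the node budget is spent."""
    heap = [(-root["err"], 0, root)]
    order, tie = [], 1
    while heap and len(order) < budget:
        _, _, node = heapq.heappop(heap)
        order.append(node["id"])
        for child in node.get("children", ()):
            heapq.heappush(heap, (-child["err"], tie, child))
            tie += 1  # tie-breaker keeps dicts out of tuple comparison
    return order

# A toy hierarchy with per-node error estimates.
tree = {"id": "A", "err": 10, "children": [
    {"id": "B", "err": 8, "children": [
        {"id": "D", "err": 1}, {"id": "E", "err": 7}]},
    {"id": "C", "err": 3}]}
```

With a budget of four nodes this visits A, B, E, C — jumping to the deep high-error node E before the shallow low-error siblings, which a breadth-first order (A, B, C, D, E) cannot do.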
Citations: 4
Instrumenting Multiphysics Blood Flow Simulation Codes for In Situ Visualization and Analysis
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00018
Anthony Bucaro, Connor Murphy, N. Ferrier, J. Insley, V. Mateevitsi, M. Papka, S. Rizzi, Jifu Tan
Blood flow simulations have important applications in engineering and medicine, requiring visualization and analysis for both fluid (blood plasma) and solid (cells) phases. Recent advances in blood flow simulation highlight the need for more efficient analysis of large data sets. Traditionally, analysis is performed after a simulation is completed, and any change to simulation settings requires running the simulation again. With bi-directional in situ analysis we aim to solve this problem by allowing manipulation of simulation parameters at run time. In this project, we describe our early steps toward this goal and present the in situ instrumentation of two coupled blood flow simulation codes using the SENSEI in situ framework.
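The bi-directional pattern — analysis observing the running simulation and feeding parameter changes back — can be shown with a generic time loop and an analysis callback. This is a schematic illustration only; it does not model the SENSEI API, and all names here are hypothetical:

```python
def run_simulation(steps, params, analyze):
    """Toy time loop with a bi-directional in situ hook: `analyze` sees the
    current state each step and may return parameter updates that are
    applied before the next step."""
    state = 0.0
    history = []
    for t in range(steps):
        state += params["dt"]          # stand-in for a solver step
        history.append(state)
        updates = analyze(t, state)
        if updates:
            params.update(updates)     # steer the run without restarting
    return history

# Analysis shrinks the time step once the state passes a threshold.
def analyze(t, state):
    if state > 1.0:
        return {"dt": 0.05}

history = run_simulation(6, {"dt": 0.5}, analyze)
```

The run starts with dt = 0.5 and switches itself to dt = 0.05 mid-flight — the change of settings that would otherwise require rerunning the simulation.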
Citations: 0
Parameter Analysis and Contrail Detection of Aircraft Engine Simulations
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00016
Nafiul Nipu, Carla Floricel, Negar Naghashzadeh, R. Paoli, G. Marai
Aircraft engines emit particulates that alter the chemical composition of the atmosphere and perturb the Earth's radiation budget by creating additional ice clouds in the form of condensation trails called contrails. We propose a multi-scale visual computing system that will assist in defining contrail features with parameter analysis for computer-generated aircraft engine simulations. These simulations are computationally intensive and rely on high performance computing (HPC) solutions. Our multi-linked visual system seeks to help in the identification of the formation and evolution of contrails and in the identification of contrail-related spatial features from the simulation workflow.
Citations: 0
2021 IEEE 11th Symposium on Large Data Analysis and Visualization LDAV 2021 (front matter)
Pub Date : 2021-10-01 DOI: 10.1109/ldav53230.2021.00001
Citations: 0
Writing, Running, and Analyzing Large-scale Scientific Simulations with Jupyter Notebooks
Pub Date : 2021-10-01 DOI: 10.1109/LDAV53230.2021.00020
Pambayun Savira, T. Marrinan, M. Papka
Large-scale scientific simulations typically output massive amounts of data that must be later read in for post-hoc visualization and analysis. With codes simulating complex phenomena at ever-increasing fidelity, writing data to disk during this traditional high-performance computing workflow has become a significant bottleneck. In situ workflows offer a solution to this bottleneck, whereby data is simultaneously produced and analyzed without involving disk storage. In situ analysis can increase efficiency for domain scientists who are exploring a data set or fine-tuning visualization and analysis parameters. Our work seeks to enable researchers to easily create and interactively analyze large-scale simulations through the use of Jupyter Notebooks without requiring application developers to explicitly integrate in situ libraries.
Citations: 0