HyperNetX: A Python package for modeling complex network data as hypergraphs
Brenda Praggastis, Sinan Aksoy, Dustin Arendt, Mark Bonicillo, Cliff Joslyn, Emilie Purvine, Madelyn Shapiro, Ji Young Yun
arXiv:2310.11626 (2023-10-17)
HyperNetX (HNX) is an open-source Python library for the analysis and visualization of complex network data modeled as hypergraphs. Initially released in 2019, HNX facilitates exploratory data analysis of complex networks using algebraic topology, combinatorics, and generalized hypergraph and graph theoretical methods on structured data inputs. With its 2023 release, the library supports attaching numerical and categorical metadata to nodes (vertices) and hyperedges, as well as to node-hyperedge pairings (incidences). HNX has a customizable Matplotlib-based visualization module as well as HypernetX-Widget, its JavaScript add-on for interactive exploration and visualization of hypergraphs within Jupyter Notebooks. Both packages are available on GitHub and PyPI. With a growing community of users and collaborators, HNX has become a preeminent tool for hypergraph analysis.
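As a minimal illustration of the kind of input HNX consumes, the sketch below builds a tiny hypergraph from a dictionary mapping hyperedges to node lists and draws it with the Matplotlib-based module. The edge and node names are invented for the example, and constructor options may differ between HNX releases.

```python
import hypernetx as hnx

# Hypothetical toy data: hyperedges mapped to the nodes they contain.
scenes = {
    "e0": ["A", "B", "C"],
    "e1": ["B", "D"],
    "e2": ["C", "D", "E"],
}

H = hnx.Hypergraph(scenes)          # build the hypergraph from a dict of hyperedges
print(len(H.nodes), len(H.edges))   # 5 nodes, 3 hyperedges
hnx.draw(H)                         # customizable Matplotlib-based visualization
```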
{"title":"HyperNetX: A Python package for modeling complex network data as hypergraphs","authors":"Brenda Praggastis, Sinan Aksoy, Dustin Arendt, Mark Bonicillo, Cliff Joslyn, Emilie Purvine, Madelyn Shapiro, Ji Young Yun","doi":"arxiv-2310.11626","DOIUrl":"https://doi.org/arxiv-2310.11626","url":null,"abstract":"HyperNetX (HNX) is an open source Python library for the analysis and\u0000visualization of complex network data modeled as hypergraphs. Initially\u0000released in 2019, HNX facilitates exploratory data analysis of complex networks\u0000using algebraic topology, combinatorics, and generalized hypergraph and graph\u0000theoretical methods on structured data inputs. With its 2023 release, the\u0000library supports attaching metadata, numerical and categorical, to nodes\u0000(vertices) and hyperedges, as well as to node-hyperedge pairings (incidences).\u0000HNX has a customizable Matplotlib-based visualization module as well as\u0000HypernetX-Widget, its JavaScript addon for interactive exploration and\u0000visualization of hypergraphs within Jupyter Notebooks. Both packages are\u0000available on GitHub and PyPI. With a growing community of users and\u0000collaborators, HNX has become a preeminent tool for hypergraph analysis.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"15 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Number Representation Systems Library Supporting New Representations Based on Morris Tapered Floating-point with Hidden Exponent Bit
Stefan-Dan Ciocirlan, Dumitrel Loghin
arXiv:2310.09797 (2023-10-15)
The introduction of the posit format reopened the debate about the utility of IEEE754 in specific domains. In this context, we propose a high-level language (Scala) library that aims to reduce the effort of designing and testing new number representation systems (NRSs). The library's efficiency is tested with three new NRSs derived from Morris Tapered Floating-Point by adding a hidden exponent bit. We call these NRSs MorrisHEB, MorrisBiasHEB, and MorrisUnaryHEB. We show that they offer a better dynamic range, better decimal accuracy for unary operations, more exact results for addition (37.61% more in the case of MorrisUnaryHEB), and better average decimal accuracy for inexact results on binary operations than posit and IEEE754. Using existing benchmarks from the literature, along with favorable/unfavorable examples for IEEE754/posit, we show that these new NRSs produce similar (less than one decimal digit of accuracy difference) or even better results than IEEE754 and posit. Given the entire spectrum of results, there are arguments for MorrisBiasHEB to be used as a replacement for IEEE754 in general computations. MorrisUnaryHEB has a more populated "golden zone" (+13.6%) and a better dynamic range (149x) than posit, making it a candidate for machine learning computations.
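The decimal-accuracy comparisons above use the metric common in the posit literature; a minimal sketch of that metric (our own helper, not part of the Scala library) is:

```python
import math

def decimal_accuracy(exact: float, approx: float) -> float:
    # Decimal accuracy as used in the posit literature: roughly the
    # number of correct decimal digits of approx relative to exact.
    if approx == exact:
        return float("inf")
    if exact == 0.0 or approx == 0.0 or (exact < 0) != (approx < 0):
        return float("-inf")  # sign or zero mismatch: no digits are correct
    return -math.log10(abs(math.log10(abs(approx / exact))))

print(decimal_accuracy(math.pi, 3.14))  # ~3.7 decimal digits of accuracy
```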
{"title":"A Number Representation Systems Library Supporting New Representations Based on Morris Tapered Floating-point with Hidden Exponent Bit","authors":"Stefan-Dan Ciocirlan, Dumitrel Loghin","doi":"arxiv-2310.09797","DOIUrl":"https://doi.org/arxiv-2310.09797","url":null,"abstract":"The introduction of posit reopened the debate about the utility of IEEE754 in\u0000specific domains. In this context, we propose a high-level language (Scala)\u0000library that aims to reduce the effort of designing and testing new number\u0000representation systems (NRSs). The library's efficiency is tested with three\u0000new NRSs derived from Morris Tapered Floating-Point by adding a hidden exponent\u0000bit. We call these NRSs MorrisHEB, MorrisBiasHEB, and MorrisUnaryHEB,\u0000respectively. We show that they offer a better dynamic range, better decimal\u0000accuracy for unary operations, more exact results for addition (37.61% in the\u0000case of MorrisUnaryHEB), and better average decimal accuracy for inexact\u0000results on binary operations than posit and IEEE754. Going through existing\u0000benchmarks in the literature, and favorable/unfavorable examples for\u0000IEEE754/posit, we show that these new NRSs produce similar (less than one\u0000decimal accuracy difference) or even better results than IEEE754 and posit.\u0000Given the entire spectrum of results, there are arguments for MorrisBiasHEB to\u0000be used as a replacement for IEEE754 in general computations. MorrisUnaryHEB\u0000has a more populated ``golden zone'' (+13.6%) and a better dynamic range (149X)\u0000than posit, making it a candidate for machine learning computations.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"16 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algorithm xxxx: HiPPIS A High-Order Positivity-Preserving Mapping Software for Structured Meshes
Timbwaoga A. J. Ouermi, Robert M. Kirby, Martin Berzins
arXiv:2310.08818 (2023-10-13)
Polynomial interpolation is an important component of many computational problems. In several of these problems, failure to preserve positivity when using polynomials to approximate or map data values between meshes can lead to unphysical negative quantities. Currently, most polynomial-based methods for enforcing positivity are based on splines or polynomial rescaling. The spline-based approaches build interpolants that are positive over the intervals on which they are defined and may require solving a minimization problem and/or a system of equations. The linear polynomial rescaling methods allow for high-degree polynomials but enforce positivity only at limited locations (e.g., quadrature nodes). This work introduces open-source software (HiPPIS) for high-order data-bounded interpolation (DBI) and positivity-preserving interpolation (PPI) that addresses the limitations of both the spline and polynomial rescaling methods. HiPPIS is suitable for approximating and mapping physical quantities such as mass, density, and concentration between meshes while preserving positivity. This work provides Fortran and Matlab implementations of the DBI and PPI methods, presents an analysis of the mapping error in the context of PDEs, and uses several 1D and 2D numerical examples to demonstrate the benefits and limitations of HiPPIS.
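The failure mode that motivates DBI and PPI is easy to reproduce: even a low-degree polynomial through strictly positive data can dip below zero between the nodes. A self-contained illustration (not HiPPIS itself):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.01, 0.01, 1.0])   # strictly positive data values
p = np.polyfit(x, y, 2)           # exact quadratic interpolant through the data
print(np.polyval(p, 0.5))         # approx. -0.114: an unphysical negative value
```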
{"title":"Algorithm xxxx: HiPPIS A High-Order Positivity-Preserving Mapping Software for Structured Meshes","authors":"Timbwaoga A. J. Ouermi, Robert M Kirby, Martin Berzins","doi":"arxiv-2310.08818","DOIUrl":"https://doi.org/arxiv-2310.08818","url":null,"abstract":"Polynomial interpolation is an important component of many computational\u0000problems. In several of these computational problems, failure to preserve\u0000positivity when using polynomials to approximate or map data values between\u0000meshes can lead to negative unphysical quantities. Currently, most\u0000polynomial-based methods for enforcing positivity are based on splines and\u0000polynomial rescaling. The spline-based approaches build interpolants that are\u0000positive over the intervals in which they are defined and may require solving a\u0000minimization problem and/or system of equations. The linear polynomial\u0000rescaling methods allow for high-degree polynomials but enforce positivity only\u0000at limited locations (e.g., quadrature nodes). This work introduces open-source\u0000software (HiPPIS) for high-order data-bounded interpolation (DBI) and\u0000positivity-preserving interpolation (PPI) that addresses the limitations of\u0000both the spline and polynomial rescaling methods. HiPPIS is suitable for\u0000approximating and mapping physical quantities such as mass, density, and\u0000concentration between meshes while preserving positivity. This work provides\u0000Fortran and Matlab implementations of the DBI and PPI methods, presents an\u0000analysis of the mapping error in the context of PDEs, and uses several 1D and\u00002D numerical examples to demonstrate the benefits and limitations of HiPPIS.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"19 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Generic Software Framework for Distributed Topological Analysis Pipelines
Eve Le Guillou, Michael Will, Pierre Guillou, Jonas Lukasczyk, Pierre Fortin, Christoph Garth, Julien Tierny
arXiv:2310.08339 (2023-10-12)
This system paper presents a software framework for supporting topological analysis pipelines in a distributed-memory model. While several recent papers have introduced topology-based approaches for distributed-memory environments, they reported experiments obtained with tailored, mono-algorithm implementations. In contrast, this paper describes a general-purpose, generic framework for topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. Specifically, we instantiated our framework with the MPI model, within the Topology ToolKit (TTK). While developing this framework, we faced several algorithmic and software engineering challenges, which we document in this paper. We provide a taxonomy of the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithm), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (1,536 cores in total). Finally, we provide a roadmap for the completion of TTK's MPI extension, along with generic recommendations for each algorithm communication category.
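The idea of pipeline stages running "possibly on distinct numbers of processes" can be pictured with a plain mpi4py communicator split; this is a generic sketch of the concept, not TTK's actual API:

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
# Assign the first half of the ranks to stage 0 and the rest to stage 1,
# so each stage of the pipeline runs on its own process group.
stage = 0 if world.rank < world.size // 2 else 1
comm = world.Split(color=stage, key=world.rank)

# Each stage can now run its own collectives independently.
local_sum = comm.allreduce(world.rank, op=MPI.SUM)
print(f"rank {world.rank}: stage {stage}, stage-local sum {local_sum}")
```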
{"title":"A Generic Software Framework for Distributed Topological Analysis Pipelines","authors":"Eve Le Guillou, Michael Will, Pierre Guillou, Jonas Lukasczyk, Pierre Fortin, Christoph Garth, Julien Tierny","doi":"arxiv-2310.08339","DOIUrl":"https://doi.org/arxiv-2310.08339","url":null,"abstract":"This system paper presents a software framework for the support of\u0000topological analysis pipelines in a distributed-memory model. While several\u0000recent papers introduced topology-based approaches for distributed-memory\u0000environments, these were reporting experiments obtained with tailored,\u0000mono-algorithm implementations. In contrast, we describe in this paper a\u0000general-purpose, generic framework for topological analysis pipelines, i.e. a\u0000sequence of topological algorithms interacting together, possibly on distinct\u0000numbers of processes. Specifically, we instantiated our framework with the MPI\u0000model, within the Topology ToolKit (TTK). While developing this framework, we\u0000faced several algorithmic and software engineering challenges, which we\u0000document in this paper. We provide a taxonomy for the distributed-memory\u0000topological algorithms supported by TTK, depending on their communication needs\u0000and provide examples of hybrid MPI+thread parallelizations. Detailed\u0000performance analyses show that parallel efficiencies range from $20%$ to\u0000$80%$ (depending on the algorithms), and that the MPI-specific preconditioning\u0000introduced by our framework induces a negligible computation time overhead. We\u0000illustrate the new distributed-memory capabilities of TTK with an example of\u0000advanced analysis pipeline, combining multiple algorithms, run on the largest\u0000publicly available dataset we have found (120 billion vertices) on a standard\u0000cluster with 64 nodes (for a total of 1,536 cores). Finally, we provide a\u0000roadmap for the completion of TTK's MPI extension, along with generic\u0000recommendations for each algorithm communication category.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smoothing Methods for Automatic Differentiation Across Conditional Branches
Justin N. Kreikemeyer, Philipp Andelfinger
arXiv:2310.03585 (2023-10-05)
Programs involving discontinuities introduced by control-flow constructs such as conditional branches pose challenges to mathematical optimization methods that assume a degree of smoothness in the objective function's response surface. Smooth interpretation (SI) is a form of abstract interpretation that approximates the convolution of a program's output with a Gaussian kernel, thus smoothing its output in a principled manner. Here, we combine SI with automatic differentiation (AD) to efficiently compute gradients of smoothed programs. In contrast to AD across a regular program execution, these gradients also capture the effects of alternative control-flow paths. The combination of SI with AD enables direct gradient-based parameter synthesis for branching programs, allowing, for instance, the calibration of simulation models or their combination with neural network models in machine learning pipelines. We detail the effects of the approximations made for tractability in SI and propose a novel Monte Carlo estimator that avoids the underlying assumptions by estimating the smoothed programs' gradients through a combination of AD and sampling. Using DiscoGrad, our tool for automatically translating simple C++ programs to a smooth differentiable form, we perform an extensive evaluation. We compare the combination of SI with AD, and our Monte Carlo estimator, to existing gradient-free and stochastic methods on four non-trivial and originally discontinuous problems, ranging from classical simulation-based optimization to neural-network-driven control. While the optimization progress with the SI-based estimator depends on the complexity of the programs' control flow, our Monte Carlo estimator is competitive on all problems, exhibiting the fastest convergence by a substantial margin on our highest-dimensional problem.
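To make the smoothing idea concrete: convolving a branching program with a Gaussian kernel yields a differentiable function whose gradient can be estimated by sampling. The sketch below uses a generic score-function (likelihood-ratio) estimator as a baseline; it is not DiscoGrad's estimator, which instead combines sampling with AD:

```python
import numpy as np

def program(theta):
    # Toy discontinuous program: a single conditional branch.
    return np.where(theta > 0.0, 1.0, 0.0)

def smoothed_value_and_grad(theta, sigma=0.5, n=200_000, seed=0):
    # Gaussian smoothing: F(theta) = E[program(theta + sigma * eps)].
    # Score-function gradient: dF/dtheta = E[program(...) * eps] / sigma.
    eps = np.random.default_rng(seed).standard_normal(n)
    f = program(theta + sigma * eps)
    return f.mean(), (f * eps).mean() / sigma

val, grad = smoothed_value_and_grad(0.0)
print(val, grad)  # ~0.5 and ~0.8 (standard normal pdf at 0 divided by sigma)
```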
{"title":"Smoothing Methods for Automatic Differentiation Across Conditional Branches","authors":"Justin N. Kreikemeyer, Philipp Andelfinger","doi":"arxiv-2310.03585","DOIUrl":"https://doi.org/arxiv-2310.03585","url":null,"abstract":"Programs involving discontinuities introduced by control flow constructs such\u0000as conditional branches pose challenges to mathematical optimization methods\u0000that assume a degree of smoothness in the objective function's response\u0000surface. Smooth interpretation (SI) is a form of abstract interpretation that\u0000approximates the convolution of a program's output with a Gaussian kernel, thus\u0000smoothing its output in a principled manner. Here, we combine SI with automatic\u0000differentiation (AD) to efficiently compute gradients of smoothed programs. In\u0000contrast to AD across a regular program execution, these gradients also capture\u0000the effects of alternative control flow paths. The combination of SI with AD\u0000enables the direct gradient-based parameter synthesis for branching programs,\u0000allowing for instance the calibration of simulation models or their combination\u0000with neural network models in machine learning pipelines. We detail the effects\u0000of the approximations made for tractability in SI and propose a novel Monte\u0000Carlo estimator that avoids the underlying assumptions by estimating the\u0000smoothed programs' gradients through a combination of AD and sampling. Using\u0000DiscoGrad, our tool for automatically translating simple C++ programs to a\u0000smooth differentiable form, we perform an extensive evaluation. We compare the\u0000combination of SI with AD and our Monte Carlo estimator to existing\u0000gradient-free and stochastic methods on four non-trivial and originally\u0000discontinuous problems ranging from classical simulation-based optimization to\u0000neural network-driven control. While the optimization progress with the\u0000SI-based estimator depends on the complexity of the programs' control flow, our\u0000Monte Carlo estimator is competitive in all problems, exhibiting the fastest\u0000convergence by a substantial margin in our highest-dimensional problem.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"10 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A directional regularization method for the limited-angle Helsinki Tomography Challenge using the Core Imaging Library (CIL)
Jakob Sauer Jørgensen, Evangelos Papoutsellis, Laura Murgatroyd, Gemma Fardell, Edoardo Pasca
arXiv:2310.01671 (2023-10-02)
This article presents the algorithms developed by the Core Imaging Library (CIL) developer team for the Helsinki Tomography Challenge 2022. The challenge focused on reconstructing 2D phantom shapes from limited-angle computed tomography (CT) data. The CIL team designed and implemented five reconstruction methods using CIL (https://ccpi.ac.uk/cil/), an open-source Python package for tomographic imaging. The CIL team adopted a model-based reconstruction strategy, unique in this challenge, with all other teams relying on deep-learning techniques. The CIL algorithms showcased exceptional performance, with one algorithm securing third place in the competition. The best-performing algorithm employed careful CT data pre-processing and an optimization problem combining single-sided directional total variation regularization with isotropic total variation and tailored lower and upper bounds. The reconstructions and segmentations achieved high quality for data with angular ranges down to 50 degrees, and in some cases acceptable performance even at 40 and 30 degrees. This study highlights the effectiveness of model-based approaches in limited-angle tomography and emphasizes the importance of proper algorithmic design that leverages available prior knowledge to overcome data limitations. Finally, it highlights the flexibility of CIL for prototyping and comparing different optimization methods.
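As a conceptual sketch of the regularizer family (a numpy illustration under assumed conventions, not CIL's API): directional total variation weights gradient components along a chosen direction differently from those across it, encoding prior knowledge about the missing angular range:

```python
import numpy as np

def directional_tv(u, d=(1.0, 0.0), alpha=0.1):
    # Forward differences with replicated (Neumann-like) boundaries.
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    # Decompose the gradient along / across the preferred direction d
    # and penalize the "along" component more lightly (weight alpha).
    along = d[0] * gx + d[1] * gy
    across = -d[1] * gx + d[0] * gy
    return np.sum(np.sqrt(alpha * along**2 + across**2))

print(directional_tv(np.eye(8)))  # penalty value for a toy image
```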
{"title":"A directional regularization method for the limited-angle Helsinki Tomography Challenge using the Core Imaging Library (CIL)","authors":"Jakob Sauer Jørgensen, Evangelos Papoutsellis, Laura Murgatroyd, Gemma Fardell, Edoardo Pasca","doi":"arxiv-2310.01671","DOIUrl":"https://doi.org/arxiv-2310.01671","url":null,"abstract":"This article presents the algorithms developed by the Core Imaging Library\u0000(CIL) developer team for the Helsinki Tomography Challenge 2022. The challenge\u0000focused on reconstructing 2D phantom shapes from limited-angle computed\u0000tomography (CT) data. The CIL team designed and implemented five reconstruction\u0000methods using CIL (https://ccpi.ac.uk/cil/), an open-source Python package for\u0000tomographic imaging. The CIL team adopted a model-based reconstruction\u0000strategy, unique to this challenge with all other teams relying on\u0000deep-learning techniques. The CIL algorithms showcased exceptional performance,\u0000with one algorithm securing the third place in the competition. The\u0000best-performing algorithm employed careful CT data pre-processing and an\u0000optimization problem with single-sided directional total variation\u0000regularization combined with isotropic total variation and tailored lower and\u0000upper bounds. The reconstructions and segmentations achieved high quality for\u0000data with angular ranges down to 50 degrees, and in some cases acceptable\u0000performance even at 40 and 30 degrees. This study highlights the effectiveness\u0000of model-based approaches in limited-angle tomography and emphasizes the\u0000importance of proper algorithmic design leveraging on available prior knowledge\u0000to overcome data limitations. Finally, this study highlights the flexibility of\u0000CIL for prototyping and comparison of different optimization methods.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"9 4-5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CausalGPS: An R Package for Causal Inference With Continuous Exposures
Naeem Khoshnevis, Xiao Wu, Danielle Braun
arXiv:2310.00561 (2023-10-01)
Quantifying the causal effects of continuous exposures on outcomes of interest is critical for social, economic, health, and medical research. However, most existing software packages focus on binary exposures. We develop the CausalGPS R package, which implements a collection of algorithms for causal inference with continuous exposures. CausalGPS implements a causal inference workflow with algorithms based on generalized propensity scores (GPS) at its core, extending propensity scores (the probability of a unit being exposed given pre-exposure covariates) from binary to continuous exposures. As the first step, the package implements efficient and flexible estimation of the GPS, allowing multiple user-specified modeling options. As the second step, the package provides two ways to adjust for confounding: weighting and matching, generating weighted and matched data sets, respectively. Lastly, the package provides built-in functions to fit flexible parametric, semi-parametric, or non-parametric regression models on the weighted or matched data to estimate the exposure-response function relating the outcome to the exposure. The computationally intensive tasks are implemented in C++, and efficient shared-memory parallelization is achieved via the OpenMP API. This paper outlines the main components of the CausalGPS R package and demonstrates its application to assessing the effect of long-term exposure to PM2.5 on educational attainment, using zip code-level data from the contiguous United States from 2000 to 2016.
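To fix ideas, a GPS for a continuous exposure can be estimated by modeling the exposure given covariates and evaluating the implied conditional density; the sketch below is a conceptual Python translation under a Gaussian linear model (not the R package's API), followed by the stabilized inverse-probability weights used in the weighting approach:

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                              # pre-exposure covariates
a = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=1000)  # continuous exposure

# GPS: conditional density of the exposure given covariates, here from a
# Gaussian linear model (the package supports more flexible learners).
model = LinearRegression().fit(X, a)
mu = model.predict(X)
sd = np.std(a - mu)
gps = norm.pdf(a, loc=mu, scale=sd)

# Stabilized weights: marginal exposure density divided by the GPS.
weights = norm.pdf(a, loc=a.mean(), scale=a.std()) / gps
```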
{"title":"CausalGPS: An R Package for Causal Inference With Continuous Exposures","authors":"Naeem Khoshnevis, Xiao Wu, Danielle Braun","doi":"arxiv-2310.00561","DOIUrl":"https://doi.org/arxiv-2310.00561","url":null,"abstract":"Quantifying the causal effects of continuous exposures on outcomes of\u0000interest is critical for social, economic, health, and medical research.\u0000However, most existing software packages focus on binary exposures. We develop\u0000the CausalGPS R package that implements a collection of algorithms to provide\u0000algorithmic solutions for causal inference with continuous exposures. CausalGPS\u0000implements a causal inference workflow, with algorithms based on generalized\u0000propensity scores (GPS) as the core, extending propensity scores (the\u0000probability of a unit being exposed given pre-exposure covariates) from binary\u0000to continuous exposures. As the first step, the package implements efficient\u0000and flexible estimations of the GPS, allowing multiple user-specified modeling\u0000options. As the second step, the package provides two ways to adjust for\u0000confounding: weighting and matching, generating weighted and matched data sets,\u0000respectively. Lastly, the package provides built-in functions to fit flexible\u0000parametric, semi-parametric, or non-parametric regression models on the\u0000weighted or matched data to estimate the exposure-response function relating\u0000the outcome with the exposures. The computationally intensive tasks are\u0000implemented in C++, and efficient shared-memory parallelization is achieved by\u0000OpenMP API. This paper outlines the main components of the CausalGPS R package\u0000and demonstrates its application to assess the effect of long-term exposure to\u0000PM2.5 on educational attainment using zip code-level data from the contiguous\u0000United States from 2000-2016.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asymptote-based scientific animation
Migran N. Gevorkyan, Anna V. Korolkova, Dmitry S. Kulyabov
arXiv:2310.06860 (2023-09-30)
This article discusses a universal way to create animations using Asymptote, a language for vector graphics. The Asymptote language has a built-in library for creating animations, but its practical use is complicated by an extremely brief description in the official documentation and by the unstable execution of existing examples. The purpose of this article is to fill this gap. The method we describe is based on creating a PDF file of frames with Asymptote, then converting it into a set of PNG images and merging them into a video using FFmpeg. All stages are described in detail, which allows the reader to use the method without prior familiarity with the utilities involved.
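The described pipeline can be driven from a short script. The sketch below assumes a `frames.asy` source whose PDF pages are the animation frames, and that asy, pdftoppm (from poppler) and ffmpeg are on the PATH; exact flags and the zero-padding of frame names may need adjusting locally:

```python
import subprocess

# 1. Render the frames into a multi-page PDF with Asymptote.
subprocess.run(["asy", "-f", "pdf", "frames.asy"], check=True)
# 2. Convert each PDF page into a PNG image (frame-01.png, frame-02.png, ...).
subprocess.run(["pdftoppm", "-png", "-r", "150", "frames.pdf", "frame"], check=True)
# 3. Merge the PNG frames into a video with FFmpeg.
subprocess.run(["ffmpeg", "-y", "-framerate", "25", "-i", "frame-%02d.png",
                "-pix_fmt", "yuv420p", "animation.mp4"], check=True)
```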
{"title":"Asymptote-based scientific animation","authors":"Migran N. Gevorkyan, Anna V. Korolkova, Dmitry S. Kulyabov","doi":"arxiv-2310.06860","DOIUrl":"https://doi.org/arxiv-2310.06860","url":null,"abstract":"This article discusses a universal way to create animation using Asymptote\u0000the language for vector graphics. The Asymptote language itself has a built-in\u0000library for creating animations, but its practical use is complicated by an\u0000extremely brief description in the official documentation and unstable\u0000execution of existing examples. The purpose of this article is to eliminate\u0000this gap. The method we describe is based on creating a PDF file with frames\u0000using Asymptote, with further converting it into a set of PNG images and\u0000merging them into a video using FFmpeg. All stages are described in detail,\u0000which allows the reader to use the described method without being familiar with\u0000the used utilities.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"11 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implicit Gaussian process representation of vector fields over arbitrary latent manifolds
Robert L. Peach, Matteo Vinao-Carl, Nir Grossman, Michael David, Emma Mallas, David Sharp, Paresh A. Malhotra, Pierre Vandergheynst, Adam Gosztolai
arXiv:2309.16746 (2023-09-28)
Gaussian processes (GPs) are popular nonparametric statistical models for learning unknown functions and quantifying the spatiotemporal uncertainty in data. Recent works have extended GPs to model scalar and vector quantities distributed over non-Euclidean domains, including the smooth manifolds that appear in numerous fields such as computer vision, dynamical systems, and neuroscience. However, these approaches assume that the manifold underlying the data is known, limiting their practical utility. We introduce RVGP, a generalisation of GPs for learning vector signals over latent Riemannian manifolds. Our method uses positional encoding with eigenfunctions of the connection Laplacian associated with the tangent bundle, readily derived from common graph-based approximations of the data. We demonstrate that RVGP possesses global regularity over the manifold, which allows it to super-resolve and inpaint vector fields while preserving singularities. Furthermore, we use RVGP to reconstruct high-density neural dynamics from low-density EEG recordings in healthy individuals and Alzheimer's patients. We show that vector-field singularities are important disease markers, and that their reconstruction yields a classification accuracy of disease states comparable to that of high-density recordings. Thus, our method overcomes a significant practical limitation in experimental and clinical applications.
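A scalar-field simplification of the positional-encoding idea can be sketched with an ordinary graph Laplacian (RVGP itself uses the connection Laplacian of the tangent bundle for vector signals): low-frequency eigenvectors of the Laplacian of a k-NN graph serve as manifold-aware features for a GP:

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph
from sklearn.gaussian_process import GaussianProcessRegressor

# Points sampled near a latent manifold (here just a toy point cloud).
pts = np.random.default_rng(0).normal(size=(200, 3))
A = kneighbors_graph(pts, n_neighbors=10)
L = laplacian(0.5 * (A + A.T), normed=True)     # symmetrized graph Laplacian

# Low-frequency Laplacian eigenvectors as positional encodings.
vals, vecs = np.linalg.eigh(L.toarray())
enc = vecs[:, :16]

# GP on the encodings: train on 100 points, predict a toy signal on the rest.
gp = GaussianProcessRegressor().fit(enc[:100], np.sin(pts[:100, 0]))
pred = gp.predict(enc[100:])
```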
{"title":"Implicit Gaussian process representation of vector fields over arbitrary latent manifolds","authors":"Robert L. Peach, Matteo Vinao-Carl, Nir Grossman, Michael David, Emma Mallas, David Sharp, Paresh A. Malhotra, Pierre Vandergheynst, Adam Gosztolai","doi":"arxiv-2309.16746","DOIUrl":"https://doi.org/arxiv-2309.16746","url":null,"abstract":"Gaussian processes (GPs) are popular nonparametric statistical models for\u0000learning unknown functions and quantifying the spatiotemporal uncertainty in\u0000data. Recent works have extended GPs to model scalar and vector quantities\u0000distributed over non-Euclidean domains, including smooth manifolds appearing in\u0000numerous fields such as computer vision, dynamical systems, and neuroscience.\u0000However, these approaches assume that the manifold underlying the data is\u0000known, limiting their practical utility. We introduce RVGP, a generalisation of\u0000GPs for learning vector signals over latent Riemannian manifolds. Our method\u0000uses positional encoding with eigenfunctions of the connection Laplacian,\u0000associated with the tangent bundle, readily derived from common graph-based\u0000approximation of data. We demonstrate that RVGP possesses global regularity\u0000over the manifold, which allows it to super-resolve and inpaint vector fields\u0000while preserving singularities. Furthermore, we use RVGP to reconstruct\u0000high-density neural dynamics derived from low-density EEG recordings in healthy\u0000individuals and Alzheimer's patients. We show that vector field singularities\u0000are important disease markers and that their reconstruction leads to a\u0000comparable classification accuracy of disease states to high-density\u0000recordings. Thus, our method overcomes a significant practical limitation in\u0000experimental and clinical applications.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel local time stepping for rigid bodies represented by triangulated meshes
Peter Noble, Tobias Weinzierl
arXiv:2309.15417 (2023-09-27)
Discrete Element Methods (DEM), i.e. the simulation of many rigid particles, suffer from very stiff differential equations as well as multiscale challenges in space and time. The particles move smoothly through space until they interact almost instantaneously due to collisions. Dense particle packings hence require tiny time step sizes, while free particles can advance with large time steps; admissible time step sizes can span multiple orders of magnitude. We propose an adaptive local time stepping algorithm which identifies clusters of particles that can be updated independently, advances them optimistically and independently in time, determines collision time stamps in space-time so as to maximise the time step sizes used, and resolves the momentum exchange implicitly. It is combined with various acceleration techniques that exploit multiscale geometry representations and multiscale behaviour in time. The collision time stamp detection in space-time, combined with the implicit solve of the actual collision equations, prevents particles from becoming locked into tiny time step sizes; the clustering yields a high level of concurrency; and the acceleration techniques plus local time stepping avoid unnecessary computations. This brings scalable, adaptive time stepping for DEM within reach for real-world challenges.
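The clustering step can be pictured as a neighbor search followed by connected components: particles whose interaction horizons overlap must share a time step, while singleton clusters advance freely. A hedged toy sketch (not the paper's implementation):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pos = rng.uniform(size=(100, 3))    # particle centers
radius = 0.05                       # interaction horizon per particle

# Particles closer than two radii may collide and must be clustered.
pairs = cKDTree(pos).query_pairs(2 * radius, output_type="ndarray")
n = len(pos)
adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
n_clusters, labels = connected_components(adj, directed=False)

# Singleton clusters are free particles: advance them with large steps,
# and give each dense cluster its own small admissible time step.
sizes = np.bincount(labels)
print(n_clusters, "clusters; free particles:", np.sum(sizes == 1))
```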
{"title":"Parallel local time stepping for rigid bodies represented by triangulated meshes","authors":"Peter Noble, Tobias Weinzierl","doi":"arxiv-2309.15417","DOIUrl":"https://doi.org/arxiv-2309.15417","url":null,"abstract":"Discrete Element Methods (DEM), i.e.~the simulation of many rigid particles,\u0000suffer from very stiff differential equations plus multiscale challenges in\u0000space and time. The particles move smoothly through space until they interact\u0000almost instantaneously due to collisions. Dense particle packings hence require\u0000tiny time step sizes, while free particles can advance with large time steps.\u0000Admissible time step sizes can span multiple orders of magnitudes. We propose\u0000an adaptive local time stepping algorithm which identifies clusters of\u0000particles that can be updated independently, advances them optimistically and\u0000independently in time, determines collision time stamps in space-time such that\u0000we maximise the time step sizes used, and resolves the momentum exchange\u0000implicitly. It is combined with various acceleration techniques which exploit\u0000multiscale geometry representations and multiscale behaviour in time. The\u0000collision time stamp detection in space-time in combination with the implicit\u0000solve of the actual collision equations avoids that particles get locked into\u0000tiny time step sizes, the clustering yields a high concurrency level, and the\u0000acceleration techniques plus local time stepping avoid unnecessary\u0000computations. This brings a scaling, adaptive time stepping for DEM for\u0000real-world challenges into reach.","PeriodicalId":501256,"journal":{"name":"arXiv - CS - Mathematical Software","volume":"15 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138521159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}