WEARDA: Recording Wearable Sensor Data for Human Activity Monitoring
Richard M. K. van Dijk, Daniela Gawehns, Matthijs van Leeuwen
DOI: 10.5334/jors.454

We present WEARDA, the open source WEARable sensor Data Acquisition software package. WEARDA facilitates the acquisition of human activity data with smartwatches and is primarily aimed at researchers who require transparency, full control, and access to raw sensor data. It provides functionality to simultaneously record raw data from four sensors—tri-axis accelerometer, tri-axis gyroscope, barometer, and GPS—which should enable researchers to, for example, estimate energy expenditure and mine movement trajectories. A Samsung smartwatch running the Tizen OS was chosen because of 1) the required functionalities of the smartwatch software API, 2) the availability of software development tools and accessible documentation, 3) having the required sensors, and 4) the requirements on case design for acceptance by the target user group. WEARDA addresses five practical challenges concerning preparation, measurement, logistics, privacy preservation, and reproducibility to ensure efficient and errorless data collection. The software package was initially created for the project “Dementia back at the heart of the community” and has been successfully used in that context.
CastorEDC API: A Python Package for Managing Real World Data in Castor Electronic Data Capture
Reinier Cornelis Anthonius van Linschoten, Sebastiaan Laurens Knijnenburg, Rachel Louise West, Desirée van Noord
DOI: 10.5334/jors.436

Real-world data is increasingly used in medical research. Castor Electronic Data Capture is a secure and user-friendly platform for managing study data. Integrating data from several databases into a single Castor database is complex. We developed CastorEDC API, a free and open source Python package that interacts with the API of Castor and through which data can be imported from multiple sources into a Castor database. The importer reads, cleans, validates, and imports data while accounting for differences in column structure and variable coding between databases. The package is available at https://github.com/reiniervlinschoten/castoredc_api.
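The harmonisation step the abstract describes (reconciling column names and variable codings between source databases before import) can be illustrated with a minimal sketch. All names below (COLUMN_MAP, harmonise, validate) are hypothetical illustrations of the pattern, not the castoredc_api interface:

```python
# Illustrative sketch of harmonising records from differently-coded source
# databases into one target coding scheme before import. Hypothetical names;
# this is not the castoredc_api API.

COLUMN_MAP = {"pat_id": "patient_id", "PatientID": "patient_id", "sex_mf": "sex"}
VALUE_MAP = {"sex": {"M": "male", "F": "female", "1": "male", "2": "female"}}

def harmonise(record: dict) -> dict:
    """Rename columns and recode values to the target scheme."""
    out = {}
    for key, value in record.items():
        target_key = COLUMN_MAP.get(key, key)
        recoding = VALUE_MAP.get(target_key, {})
        out[target_key] = recoding.get(str(value), value)
    return out

def validate(record: dict, required=("patient_id",)) -> bool:
    """Reject records that lack required fields."""
    return all(record.get(field) not in (None, "") for field in required)

# Two sources with different column names and sex codings map to one scheme.
source_a = {"pat_id": "001", "sex_mf": "M"}
source_b = {"PatientID": "002", "sex_mf": "2"}
clean = [harmonise(r) for r in (source_a, source_b)]
assert all(validate(r) for r in clean)
```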
GTdownloader: A Python Package to Download, Visualize, and Export Georeferenced Tweets From the Twitter API
Juan Acosta-Sequeda, S. Derrible
DOI: 10.5334/jors.443

This article describes GTdownloader, a Python package that serves both as an API wrapper and as a geographic information pre-processing helper to facilitate the download of Twitter data from the Twitter API. Specifically, the package exposes the API call parameters through single functions with familiar Python syntax. In addition, the data can be downloaded in common formats for further analysis.
Plots.jl – A User Extendable Plotting API for the Julia Programming Language
Simon Christ, Daniel Schwabeneder, Christopher Rackauckas, Michael Krabbe Borregaard, Thomas Breloff
DOI: 10.5334/jors.431

There are many excellent plotting libraries. Each excels at a specific use case: one is particularly suited for creating printable 2D figures for publication, another for generating interactive 3D graphics, while a third may have excellent LaTeX integration or be ideal for creating dashboards on the web. The aim of Plots.jl is to enable the user to use the same syntax to interact with a range of different plotting libraries, making it possible to change the library that does the actual plotting (the backend) without needing to touch the code that creates the content – and without having to learn multiple application programming interfaces (APIs). This is achieved by separating the specification of the plot from the implementation of the graphical backend. This plot specification is extendable by a recipe system that allows package authors and users to create new types of plots, as well as to specify how to plot any type of object (e.g. a statistical model, a map, a phylogenetic tree or the solution to a system of differential equations) without depending on the Plots.jl package. This design supports a modular ecosystem structure for plotting and yields a high code reuse potential across the entire Julia package ecosystem. Plots.jl is publicly available at https://github.com/JuliaPlots/Plots.jl.
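The key design idea, separating the plot specification from the backend that renders it, is language-agnostic. A minimal Python sketch of the pattern (illustration only, not Plots.jl code; the class names are invented):

```python
# Sketch of the spec/backend separation described above: the plot is plain
# data, and interchangeable backends interpret the same specification.

class PlotSpec:
    """Accumulates series as data; knows nothing about rendering."""
    def __init__(self):
        self.series = []

    def plot(self, x, y, label=""):
        self.series.append({"x": x, "y": y, "label": label})
        return self  # allow chaining

class TextBackend:
    """A trivial 'backend' that renders the spec as text lines."""
    def render(self, spec):
        return [f"line '{s['label']}' with {len(s['x'])} points"
                for s in spec.series]

# Swapping TextBackend for another renderer would not touch this code:
spec = PlotSpec().plot([1, 2, 3], [4, 5, 6], label="demo")
assert TextBackend().render(spec) == ["line 'demo' with 3 points"]
```

Plots.jl's recipe system extends the same idea: third-party types declare how they lower into plot specifications, without depending on any backend.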
Automated Discovery of Container Executables
V. Sochat, Matthieu Muffato, Audrey Stott, Marco De La Pierre, Georgia K. Stuart
DOI: 10.5334/jors.451

Linux container technologies such as Docker and Singularity offer encapsulated environments for easy execution of software. In high performance computing, this is especially important for evolving and complex software stacks with conflicting dependencies that must co-exist. Singularity Registry HPC (“shpc”) was created as an effort to install containers in this environment as modules, seamlessly allowing for typically hidden executables inside containers to be presented to the user as commands, and as such significantly simplifying the user experience. A remaining challenge, however, is deriving the list of important executables in the container. In this work, we present a new modular methodology that allows for discovering new containers in large community sets, deriving container entries with relevant executables therein, and fully automating both recipe generation and updates over time. As an exemplar outcome, we have employed this methodology to add to the Registry over 8,000 containers from the BioContainers community that can be maintained and updated by the software automation. All software is publicly available on the GitHub platform, and can be beneficial to container registries and infrastructure providers for automatically generating container modules, thus lowering the usage entry barrier and improving user experience.
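The core discovery step, deriving candidate executables from a container's filesystem, can be sketched as a scan of $PATH-like directories for files with an executable bit. The directory layout and filtering rules below are simplifying assumptions for illustration; shpc's actual pipeline inspects container images and applies further heuristics:

```python
# Sketch: list names of executable files found on PATH-like directories,
# as a stand-in for deriving a container's user-facing commands.

import os
import stat
import tempfile

def discover_executables(path_dirs):
    """Return sorted names of files with the owner-executable bit set."""
    found = set()
    for directory in path_dirs:
        if not os.path.isdir(directory):
            continue
        for name in os.listdir(directory):
            full = os.path.join(directory, name)
            if os.path.isfile(full) and os.stat(full).st_mode & stat.S_IXUSR:
                found.add(name)
    return sorted(found)

# Demo on a throwaway directory standing in for a container's /usr/local/bin.
with tempfile.TemporaryDirectory() as bindir:
    tool = os.path.join(bindir, "samtools")
    open(tool, "w").close()
    os.chmod(tool, 0o755)                               # executable -> listed
    open(os.path.join(bindir, "README"), "w").close()   # plain file -> skipped
    assert discover_executables([bindir]) == ["samtools"]
```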
Taskfarm: A Client/Server Framework for Supporting Massive Embarrassingly Parallel Workloads
M. Hagdorn, N. Gourmelen
DOI: 10.5334/jors.393
Fan-Slicer: A Pycuda Package for Fast Reslicing of Ultrasound Shaped Planes
João Ramalhinho, T. Dowrick, E. Bonmati, M. Clarkson
DOI: 10.5334/jors.422

Fan-Slicer (https://github.com/UCL/fan-slicer) is a Python package that enables the fast sampling (slicing) of 2D ultrasound-shaped images from a 3D volume. To increase sampling speed, CUDA kernel functions are used in conjunction with the Pycuda package. The main features include functions to generate images from both 3D surface models and 3D volumes. Additionally, the package allows for the sampling of images from curvilinear (fan-shaped planes) and linear (rectangle-shaped planes) ultrasound transducers. Potential uses of Fan-Slicer include, among others, the generation of large datasets of 2D images from 3D volumes and the simulation of intra-operative data.
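The underlying operation, sampling a 2D image along a plane through a 3D volume, can be sketched in a few lines. This is a nearest-neighbour, CPU-only toy (pure Python, no CUDA), with function names invented for illustration; Fan-Slicer parallelises the per-pixel sampling in CUDA kernels and supports fan-shaped geometries:

```python
# Sketch of "slicing": sample a width x height image on the plane
# origin + i*u_dir + j*v_dir through a 3D volume, nearest-neighbour.

def make_volume(n):
    """Synthetic n^3 volume whose voxel value encodes its z index."""
    return [[[z for x in range(n)] for y in range(n)] for z in range(n)]

def slice_plane(volume, origin, u_dir, v_dir, width, height):
    """Return the sampled image; out-of-volume pixels are zero."""
    n = len(volume)
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            x = round(origin[0] + i * u_dir[0] + j * v_dir[0])
            y = round(origin[1] + i * u_dir[1] + j * v_dir[1])
            z = round(origin[2] + i * u_dir[2] + j * v_dir[2])
            inside = 0 <= x < n and 0 <= y < n and 0 <= z < n
            row.append(volume[z][y][x] if inside else 0)
        image.append(row)
    return image

vol = make_volume(4)
# An axis-aligned plane at constant z=2, spanned by the x and y axes:
img = slice_plane(vol, (0, 0, 2), (1, 0, 0), (0, 1, 0), 4, 4)
assert all(value == 2 for row in img for value in row)
```

Each output pixel is independent of the others, which is what makes the operation a natural fit for a GPU kernel.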
MARIO: A Versatile and User-Friendly Software for Building Input-Output Models
Mohammad Amin Tahavori, Nicolò Golinucci, Lorenzo Rinaldi, Matteo Vincenzo Rocco, Emanuela Colombo
DOI: 10.5334/jors.473

MARIO (Multi-Regional Analysis of Regions through Input-Output) is a Python-based framework for building input-output models. It automates the parsing of well-known databases (e.g. EXIOBASE, EORA, Eurostat) and of customized tables. With respect to similar tools, like pymrio, it broadens the scope of application to supply-use tables and handles both monetary and physical units. Employing an intuitive Excel-based API, it facilitates advanced table manipulations and allows for modelling additional supply chains through a hybrid LCA approach. It provides built-in functions for footprinting and scenario analyses as well as for visualizations of model outcomes. Results are exportable into various formats, possibly supplemented by a metadata file tracking the full history of applied changes. MARIO comes with extensive documentation, is available on Zenodo and GitHub, and is installable via PyPI.
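The computation at the heart of any input-output model of this kind is the Leontief quantity model: total output x satisfies x = Ax + d, i.e. x = (I − A)⁻¹ d for technical coefficient matrix A and final demand d. A tiny two-sector example, solved with plain Gauss-Jordan elimination to stay dependency-free (the numbers are made up for illustration):

```python
# Leontief quantity model for a toy two-sector economy:
# solve (I - A) x = d for total output x.

def solve(a, b):
    """Solve a @ x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                factor = m[r][col] / m[col][col]
                m[r] = [v - factor * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# A[i][j]: input from sector i required per unit of output of sector j.
A = [[0.2, 0.3],
     [0.4, 0.1]]
demand = [100.0, 50.0]
I_minus_A = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(2)]
             for i in range(2)]
output = solve(I_minus_A, demand)
# Sector outputs exceed final demand because of intermediate consumption:
assert abs(output[0] - 175.0) < 1e-9
assert abs(output[1] - 400.0 / 3.0) < 1e-9
```

MARIO layers database parsing, unit handling, and scenario analysis on top of this kind of linear-algebra core.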
The Green Paths Route Planning Software for Exposure-Optimised Active Travel
Joose Helle, Age Poom, Elias Willberg, Tuuli Toivonen
DOI: 10.5334/jors.400

Green Paths is a prototype of route planning software for finding exposure-optimised routes for active travel. It incorporates external data on environmental exposures, including traffic noise levels, air quality, and street-level greenery, into the street and paths network produced by the OpenStreetMap project. Written in the Python programming language, the software applies a novel environmental impedance function in the least cost path routing to find exposure-optimised routes. Routes for externally defined origin-destination pairs can be queried via a RESTful API. The API returns alternative routes equipped with rich exposure data. The published version of the software has been applied in population level environmental exposure assessment and in an end-user-oriented web-based route planner application designed for use in the Helsinki Metropolitan Area.
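The idea of an environmental impedance function can be sketched as follows: each edge's cost is its length inflated by an exposure penalty, and a standard least-cost search then trades distance against exposure. The cost form and sensitivity parameter below are illustrative assumptions, not the published Green Paths impedance function:

```python
# Sketch: exposure-weighted least-cost routing. Edge cost = length inflated
# by a normalised exposure (0..1) scaled by a traveller sensitivity.

import heapq

def impedance(length, exposure, sensitivity):
    """Perceived cost of one edge."""
    return length * (1.0 + sensitivity * exposure)

def least_cost_path(graph, start, goal, sensitivity):
    """Dijkstra over edges stored as (neighbour, length, exposure)."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, exposure in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(
                    frontier,
                    (cost + impedance(length, exposure, sensitivity),
                     nxt, path + [nxt]))
    return float("inf"), []

# Two routes A -> B: a short noisy street, and a longer quiet detour via P.
graph = {"A": [("B", 100.0, 0.9), ("P", 60.0, 0.1)],
         "P": [("B", 70.0, 0.1)]}
_, shortest = least_cost_path(graph, "A", "B", sensitivity=0.0)  # ignores noise
_, quietest = least_cost_path(graph, "A", "B", sensitivity=5.0)  # avoids noise
assert shortest == ["A", "B"]
assert quietest == ["A", "P", "B"]
```

Varying the sensitivity per query is what yields the "alternative routes" the API returns.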
WRaINfo: An Open Source Library for Weather Radar INformation for FURUNO Weather Radars Based on Wradlib
Alice Künzel, Kai Mühlbauer, Julia Neelmeijer, Daniel Spengler
DOI: 10.5334/jors.453

WRaINfo is software for real-time weather radar data processing, developed by the Helmholtz Innovation Lab FERN.Lab, a technology and innovation platform of the German Research Centre for Geosciences Potsdam (GFZ). WRaINfo is specifically designed for processing X-band weather radar data from FURUNO devices. The modules of the package allow reading and processing raw data of the WR2120 and WR2100. For this purpose, many functions of the wradlib library are used and adapted. Processing is controlled by a configuration file; main functionalities include formatting, attenuation correction, clutter detection, georeferencing and gridding of the data. This allows the construction of reproducible, automatic data processing chains. The package is written in the Python programming language. The source code is publicly available on GitLab, compiled versions are available on PyPI, and the package is distributed under the Apache 2.0 license.
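A configuration-controlled processing chain of the kind described can be sketched with the standard library: which steps run, and in what order, is read from a config file rather than hard-coded. The step names echo the abstract, but the config schema and the toy step implementations are assumptions; WRaINfo's actual configuration format and wradlib-based steps differ:

```python
# Sketch: a config file names the processing steps, making the chain
# reproducible and editable without code changes.

import configparser

# Toy stand-ins for real radar processing steps (illustration only).
STEPS = {
    "attenuation_correction": lambda data: [v * 1.05 for v in data],
    "clutter_removal": lambda data: [v for v in data if v < 90.0],
}

CONFIG_TEXT = """
[processing]
chain = attenuation_correction, clutter_removal
"""

def run_chain(data, config_text):
    """Apply the steps listed in the config, in order."""
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    for name in (s.strip() for s in cfg["processing"]["chain"].split(",")):
        data = STEPS[name](data)
    return data

reflectivity = [10.0, 50.0, 120.0]  # toy 'raw' values with one clutter spike
processed = run_chain(reflectivity, CONFIG_TEXT)
assert len(processed) == 2  # the spike is removed after correction
```

Because the chain lives in the config file, rerunning it on new raw data reproduces the exact same processing, which is the point of the design.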