Pub Date: 2024-10-07 | DOI: 10.1016/j.softx.2024.101915
Ondrej Budik , Milan Novak , Florian Sobieczky , Ivo Bukovsky
We present AISLEX, an online anomaly detection module based on the Learning Entropy algorithm, a novel machine-learning-based information measure that quantifies the learning effort of neural networks. AISLEX flags a data sample as anomalous when its learning entropy value is high. The module is designed to be readily usable, with both NumPy and JAX backends, making it suitable for a variety of application fields. The NumPy backend targets devices running Python 3 and prioritizes low memory and CPU usage. In contrast, the JAX backend is optimized for fast execution on CPUs, GPUs, and TPUs but requires more computational resources. AISLEX also provides extensive implementation examples in Jupyter notebooks, using in-parameter-linear-nonlinear neural architectures selected for their low data requirements, computational simplicity, analyzable convergence, and dynamical stability.
Title: AISLEX: Approximate individual sample learning entropy with JAX. SoftwareX, Volume 28, Article 101915.
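The core idea of learning-entropy-style detection (flagging samples that force an online learner to make unusually large weight updates) can be sketched in a few lines. The following is an illustrative reduction using a plain LMS-trained linear predictor; it is not the AISLEX API, and all names are hypothetical.

```python
import math

def learning_entropy_scores(series, n_lags=4, mu=0.1):
    """Train a linear predictor online with LMS and score each sample by the
    magnitude of the weight update it provokes (a proxy for learning effort)."""
    w = [0.0] * n_lags
    scores = [0.0] * len(series)
    for t in range(n_lags, len(series)):
        x = series[t - n_lags:t]                      # input window
        y_hat = sum(wi * xi for wi, xi in zip(w, x))  # one-step prediction
        e = series[t] - y_hat                         # prediction error
        dw = [mu * e * xi for xi in x]                # LMS weight increment
        w = [wi + di for wi, di in zip(w, dw)]
        scores[t] = sum(abs(d) for d in dw)           # learning effort
    return scores
```

On a predictable signal the scores decay as the predictor converges; an injected anomaly produces a spike in learning effort around its index, which is the signature AISLEX thresholds.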
Pub Date: 2024-10-07 | DOI: 10.1016/j.softx.2024.101916
Carolina V. Giraldo , Sara E. Acevedo , Carlos A. Bonilla
This paper discusses the use of the R programming language in soil physics to enhance data reproducibility. Reproducibility is a challenge across scientific disciplines, including soil science, and is encouraged by demands for transparency from funding bodies and governments. Open and reproducible soil physics research benefits the scientific community. With a focus on open science practices, the authors developed {infiltrodiscR}, leveraging existing R knowledge in soil physics. The package facilitates the analysis of infiltration data, as demonstrated by analysing changes in infiltration using published data. The results align with previous findings, showcasing {infiltrodiscR}'s potential for promoting reproducibility in soil science research.
Title: The R package infiltrodiscR: A package for infiltrometer data analysis and an experience for improving data reproducibility in soil physics. SoftwareX, Volume 28, Article 101916.
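{infiltrodiscR} itself is an R package, but the kind of infiltrometer analysis it performs can be illustrated language-neutrally. A common model for tension/minidisk infiltrometer data (Zhang-type) expresses cumulative infiltration as I(t) = C1·sqrt(t) + C2·t and recovers C1, C2 by least squares; the sketch below is an assumption about the analysis style, not the package's code.

```python
import math

def fit_infiltration(t, I):
    """Least-squares fit of the two-term model I(t) = C1*sqrt(t) + C2*t,
    solved directly via the 2x2 normal equations."""
    s = [math.sqrt(ti) for ti in t]
    a11 = sum(si * si for si in s)
    a12 = sum(si * ti for si, ti in zip(s, t))
    a22 = sum(ti * ti for ti in t)
    b1 = sum(Ii * si for Ii, si in zip(I, s))
    b2 = sum(Ii * ti for Ii, ti in zip(I, t))
    det = a11 * a22 - a12 * a12
    C1 = (b1 * a22 - b2 * a12) / det   # related to sorptivity
    C2 = (a11 * b2 - a12 * b1) / det   # related to hydraulic conductivity
    return C1, C2
```

With consistent synthetic data the coefficients are recovered exactly, which makes this kind of routine easy to cover with reproducible tests.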
Pub Date: 2024-10-05 | DOI: 10.1016/j.softx.2024.101917
Francisco M. Garcia-Moreno , Jesús Cortés Alcaraz , José Manuel del Castillo de la Fuente , Luis Rodrigo Rodríguez-Simón , María Visitación Hurtado-Torres
The increasing interest in the digital preservation of cultural heritage has led to ARTDET, machine learning software for the automated detection of deterioration in easel paintings. This web application uses a pre-trained Mask R-CNN model to detect lacunae (areas of missing paint that leave the support panel visible) resulting from loss of the painting layer (LPL), as well as stucco repairs. ARTDET leverages high-resolution images annotated by expert restorers. The software achieved 80.4% recall for LPL and stucco detection, with a 99% confidence score in the detected damages. Available as an open-access resource, ARTDET aids conservators and researchers in preserving invaluable artworks.
Title: ARTDET: Machine learning software for automated detection of art deterioration in easel paintings. SoftwareX, Volume 28, Article 101917.
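Detection recall figures like the 80.4% reported above are conventionally computed by matching predicted boxes to ground-truth annotations at an IoU threshold. A minimal stdlib sketch of that metric (illustrative only, not ARTDET's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall(ground_truth, detections, thr=0.5):
    """Fraction of ground-truth regions matched by some detection at IoU >= thr."""
    hit = sum(1 for g in ground_truth
              if any(iou(g, d) >= thr for d in detections))
    return hit / len(ground_truth)
```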
Pub Date: 2024-10-04 | DOI: 10.1016/j.softx.2024.101899
Michal Bukowski, Benedykt Wladyka
High-throughput quantification techniques provide considerable amounts of data. Making sense of such data requires not only thorough statistical analysis but also a logical approach to data visualisation. DGE-ontology is software primarily designed for transcriptomics; however, it may be applied to any data that express fold changes of relative or absolute quantity measures of multiple entities, such as transcripts, proteins or metabolites. The software integrates the results of differential and functional analyses to produce a single circular, highly informative and visually appealing chart. The chart simultaneously depicts the numbers of quantified entities, their assignment to functional categories, singles out statistically over-represented categories, and visualises quantity fold-change values. The presented approach to data visualisation considerably facilitates the communication of experimental results as well as inference from large omic data sets.
Title: DGE-ontology: A quick and simple gene set enrichment analysis and visualisation tool. SoftwareX, Volume 28, Article 101899.
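"Singling out statistically over-represented categories" is classically done with a hypergeometric (one-sided Fisher) test: given N annotated genes of which K belong to a category, how surprising is it to see x or more category members among n differentially expressed genes? An illustrative stdlib version (not necessarily the test DGE-ontology implements):

```python
from math import comb

def over_representation_p(N, K, n, x):
    """P(X >= x) for X ~ Hypergeometric(N, K, n): the chance of drawing at
    least x members of a K-sized category when sampling n genes out of N."""
    total = comb(N, n)
    return sum(comb(K, k) * comb(N - K, n - k)
               for k in range(x, min(K, n) + 1)) / total
```

For example, drawing 4 of 5 category members in a 5-gene sample from a 20-gene universe is rare (p below 0.01), while drawing at least 1 is unremarkable.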
Pub Date: 2024-10-04 | DOI: 10.1016/j.softx.2024.101897
Kacper Derlatka , Maciej Manna , Oleksii Bulenok , David Zwicker , Sylwester Arabas
The numba-mpi package offers access to the Message Passing Interface (MPI) routines from Python code that uses the Numba just-in-time (JIT) compiler. As a result, high-performance and multi-threaded Python code may utilize MPI communication facilities without leaving the JIT-compiled code blocks, which is not possible with the mpi4py package, a higher-level Python interface to MPI. For debugging or code-coverage analysis purposes, numba-mpi retains full functionality of the code even if JIT compilation is disabled. The numba-mpi API constitutes a thin wrapper around the C API of MPI and is built around NumPy arrays, including handling of non-contiguous views over array slices. Project development is hosted at GitHub, leveraging the mpi4py/setup-mpi workflow to enable continuous integration tests on Linux (MPICH, OpenMPI & Intel MPI), macOS (MPICH & OpenMPI) and Windows (MS MPI). The paper covers an overview of the package features, architecture and performance. As of v1.0, the following MPI routines are exposed and covered by unit tests: size/rank, [i]send/[i]recv, wait[all|any], test[all|any], allreduce, bcast, barrier, scatter/[all]gather & wtime. The package is implemented in pure Python and depends on numpy, numba and mpi4py (the latter used at initialization and as a source of utility routines only). The performance advantage of using numba-mpi compared to mpi4py is illustrated with a simple example, with the entirety of the code included in listings discussed in the text. The application of numba-mpi to domain decomposition in numerical solvers for partial differential equations is presented using two external packages that depend on numba-mpi: py-pde and PyMPDATA-MPI.
Title: Numba-MPI v1.0: Enabling MPI communication within Numba/LLVM JIT-compiled Python code. SoftwareX, Volume 28, Article 101897.
Pub Date: 2024-10-04 | DOI: 10.1016/j.softx.2024.101918
Zhonghao Guo, Sinong Chen, Xinyue Xu, Xiangxian Chen
After software or program updates, it is crucial to establish a new set of test cases. Reusing parts of the old test case set in unit testing is a cost-effective, efficient, and common approach. However, only a few commercial tools support this purpose, and their techniques for reusing test cases are not publicly available. PC-TRT is a test case reuse tool designed primarily for software and programs written in the C language. PC-TRT reuses test cases from historical program versions and generates test data for uncovered paths, resulting in a test case set with high path coverage. Its key functions include analyzing test case path coverage information, selecting reusable cases from old test case sets based on path similarity, and generating test data for uncovered paths. PC-TRT significantly improves both the efficiency and reliability of software testing.
Title: PC-TRT: A Test Case Reuse and generation Tool to achieve high path coverage for Unit Test. SoftwareX, Volume 28, Article 101918.
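Selection "based on path similarity" can be sketched with a set-based similarity over covered branches: an old test case is a reuse candidate if its execution path closely matches some path of the updated program. This is a hedged illustration of the general idea (Jaccard similarity over branch-ID sets), not PC-TRT's actual algorithm.

```python
def jaccard(p, q):
    """Similarity of two execution paths, each given as a set of branch IDs."""
    p, q = set(p), set(q)
    return len(p & q) / len(p | q) if p | q else 1.0

def select_reusable(old_cases, new_paths, thr=0.6):
    """Keep old test cases whose covered path matches some path of the
    updated program with similarity >= thr (candidates for reuse)."""
    return [name for name, path in old_cases.items()
            if any(jaccard(path, q) >= thr for q in new_paths)]
```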
Pub Date: 2024-10-03 | DOI: 10.1016/j.softx.2024.101912
Dilek Güzel , Tim Furlan , Tobias Kaiser , Andreas Menzel
The effective macroscopic behaviour of a material is a manifestation of the underlying microstructure and microscale processes. This renders the generation of highly accurate digital microstructure twins indispensable for multiscale simulations. Mosaic is a Python-based, open-source software tool designed to address the challenge of incorporating non-planar, periodic microstructures generated by the software Neper into simulations that require periodic boundary conditions. Mosaic transforms these complex microstructures into rectilinear periodic equivalents and, additionally, makes it possible to account for material interfaces such as grain and phase boundaries. This transformation enables continuous integration with various simulation tools and workflows, facilitating accurate and efficient simulations of the effective material response.
Title: Neper-Mosaic: Seamless generation of periodic representative volume elements on unit domains. SoftwareX, Volume 28, Article 101912.
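The geometric kernel of mapping a periodic microstructure onto a rectilinear unit domain is wrapping coordinates modulo the cell period, so that every point (and every periodic image) lands in [0, L) in each direction. A minimal sketch of that step, purely illustrative and unrelated to Mosaic's actual API:

```python
def wrap_to_unit_cell(points, period):
    """Map coordinates of a periodic microstructure into the rectilinear
    unit cell [0, L) per direction; Python's % already returns a value in
    [0, L) even for negative inputs."""
    return [tuple(x % L for x, L in zip(p, period)) for p in points]
```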
Pub Date: 2024-10-01 | DOI: 10.1016/j.softx.2024.101911
Rahil Ashtari Mahini , Gerardo Casanola-Martin , Simone A. Ludwig , Bakhtiyor Rasulev
Multi-component materials/compounds and polymeric/composite systems pose a structural complexity that challenges conventional methods of molecular representation in cheminformatics, which have limited applicability in such cases. Therefore, we have introduced an innovative structural representation technique tailored for complex materials. Treating each multi-component material as a mixture system, we implemented different mixing rules based on linear and nonlinear additive effects of the individual components in composites. We developed and improved mixture descriptors based on 12 different mixture functions grouped into three main categories: property-based descriptors, concentration-weighted descriptors, and deviation-combination descriptors. A Python package was developed for this purpose, allowing users to compute the 12 mixture descriptors and use them as input for mixture-based Quantitative Structure-Activity/Property Relationship (mxb-QSAR/QSPR) machine learning models that predict a range of chemical and physical properties across various complex systems.
Title: MixtureMetrics: A comprehensive package to develop additive numerical features to describe complex materials for machine learning modeling. SoftwareX, Volume 28, Article 101911.
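Two of the descriptor families named above can be illustrated with toy formulas: a concentration-weighted descriptor is a mole-fraction-weighted average of the component descriptors, and a deviation-type descriptor measures how far the components sit from that average. These function names and exact definitions are hypothetical; the package's 12 mixture functions are not reproduced here.

```python
def weighted_descriptor(fractions, values):
    """Concentration-weighted mixture descriptor: sum_i x_i * d_i,
    where x_i is the fraction and d_i the descriptor of component i."""
    return sum(x * d for x, d in zip(fractions, values))

def deviation_descriptor(fractions, values):
    """Deviation-type descriptor: weighted mean absolute deviation from the
    mixture average, capturing how unlike the components are."""
    mean = weighted_descriptor(fractions, values)
    return sum(x * abs(d - mean) for x, d in zip(fractions, values))
```

Note that the deviation descriptor vanishes for a pure component, matching the intuition that a single-component "mixture" has no compositional spread.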
Pub Date: 2024-10-01 | DOI: 10.1016/j.softx.2024.101914
Nanxi Chen, Rujin Ma, Baixue Ge, Haocheng Chang
Given the substantial influence of wind loading on the structural performance of long-span bridges, continuous monitoring of the wind field characteristics in their vicinity is paramount. PyWindAM is purpose-built web server software designed to streamline the analysis and management of wind data derived from on-site measurements. The software automates the retrieval of raw data from hardware devices and employs vector decomposition to extract essential wind parameters, including mean wind speed, wind direction and turbulence intensity, from data collected at multiple measurement points. These wind parameters are stored in an InfluxDB database hosted on the server. InfluxDB itself provides an intuitive interface, facilitating convenient data visualization and efficient management for researchers and technicians engaged in wind field analysis and structural safety assessments.
Title: PyWindAM: A Python software for wind field analysis and cloud-based data management. SoftwareX, Volume 28, Article 101914.
Pub Date: 2024-09-28 | DOI: 10.1016/j.softx.2024.101907
Domen Vake , Niki Hrovatin , Aleksandar Tošić , Jernej Vičič
Recent efforts to make research publications public have had a profound effect on the scientific publishing landscape. With a large influx of publicly available research contributions, software tooling that supports information retrieval from indexing services has become invaluable. Complementing well-established indexing services such as Scopus, Web of Science and PubMed, CORE aims to provide a holistic view that includes the contributions contained in those indexers. Conveniently, CORE offers an API for accessing its data. This paper presents a client library that fully implements this API and enables quick and easy access to information relevant to literature reviews as well as to the field of scientometrics.
Title: core_api_client: An API for the CORE aggregation service for open access papers. SoftwareX, Volume 28, Article 101907.
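The shape of such a client can be sketched with the standard library alone. The base URL and the `search/works` endpoint below are assumptions based on the public CORE API v3 documentation (Bearer-token authorization); this is not the interface of the core_api_client package itself.

```python
import json
from urllib import parse, request

API_BASE = "https://api.core.ac.uk/v3"  # assumed CORE v3 base URL

def build_search_request(query, api_key, limit=10):
    """Build an authorized search request without performing any I/O."""
    url = f"{API_BASE}/search/works?" + parse.urlencode(
        {"q": query, "limit": limit})
    return request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

def search_works(query, api_key, limit=10):
    """Perform the search (requires network access and a valid CORE API key)."""
    with request.urlopen(build_search_request(query, api_key, limit)) as resp:
        return json.load(resp)
```

Separating request construction from I/O keeps the URL-building and authorization logic testable offline, which suits the reproducibility goals of scientometric tooling.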