
Latest publications from the Journal of Open Research Software

DoE.MIParray: An R Package for Algorithmic Creation of Orthogonal Arrays
Q1 Social Sciences | Pub Date: 2020-10-07 | DOI: 10.5334/jors.286
U. Grömping
The R package DoE.MIParray uses mixed integer optimization to create well-balanced arrays for experimental designs. Using it requires at least one of the commercial optimizers Gurobi or Mosek. Investing some effort into the creation of a suitable array is justified because experimental runs are often very expensive, so their information content should be maximized. DoE.MIParray is particularly useful for creating relatively small mixed-level designs. Balance is optimized by applying the quality criterion "generalized minimum aberration" (GMA), which aims at minimizing the confounding of low-order effects in factorial models without assuming a specific model. For relevant cases, DoE.MIParray exploits a lower bound on its objective function, which drastically reduces the computational burden of the mixed integer optimization.
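DoE.MIParray itself drives Gurobi or Mosek from R; as a minimal, package-independent illustration of the balance property that GMA-type criteria reward, the following Python sketch (function name and example array chosen purely for illustration) checks that every pair of columns in a small two-level array contains each level combination equally often:

```python
from itertools import combinations
from collections import Counter

def pairwise_balance(array):
    """For each pair of columns, collect the set of counts with which
    the level combinations occur; a strength-2 orthogonal array yields
    a single count per pair (perfect balance)."""
    ncols = len(array[0])
    report = {}
    for i, j in combinations(range(ncols), 2):
        counts = Counter((row[i], row[j]) for row in array)
        report[(i, j)] = set(counts.values())
    return report

# A classic 4-run, 3-factor, 2-level orthogonal array:
oa = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
print(pairwise_balance(oa))  # each pair's combinations occur equally often
```

Unequal counts within a pair are exactly the low-order confounding that the package's mixed integer optimization works to minimize.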
Citations: 0
Exploring and Comparing Unsupervised Clustering Algorithms
Q1 Social Sciences | Pub Date: 2020-10-07 | DOI: 10.5334/jors.269
M. Lavielle, Philip D. Waggoner
One of the most widely used approaches for exploring and understanding non-random structure in data in a largely assumption-free manner is clustering. In this paper, we detail two original Shiny apps written in R, openly developed on GitHub and archived at Zenodo, for exploring and comparing major unsupervised algorithms for clustering applications: k-means and Gaussian mixture models fit via Expectation-Maximization. The first app leverages simulated data and the second uses Fisher's Iris data set to visually and numerically compare the clustering algorithms using data familiar to many applied researchers. In addition to being valuable tools for comparing these clustering techniques, the open-source architecture of our Shiny apps allows for wide engagement and extension by the broader open science community, such as the inclusion of different data sets and algorithms.
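The apps themselves are Shiny front-ends written in R; to make the first of the two compared algorithms concrete, here is a minimal, dependency-free k-means sketch on 1-D toy data (the deterministic initialization and the data are invented for illustration, not taken from the apps):

```python
def kmeans_1d(points, k, iters=20):
    """Minimal k-means on 1-D data: alternately assign each point to
    its nearest centroid, then move each centroid to its cluster mean."""
    centroids = points[:k]  # deterministic init for this sketch
    clusters = [[p] for p in centroids]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

data = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
centroids, clusters = kmeans_1d(data, k=2)
print(sorted(centroids))  # → [1.0, 11.0]
```

Gaussian mixtures fit by Expectation-Maximization generalize this picture by replacing the hard nearest-centroid assignment with soft, probability-weighted responsibilities.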
Citations: 2
pyfMRIqc: A Software Package for Raw fMRI Data Quality Assurance
Q1 Social Sciences | Pub Date: 2020-10-07 | DOI: 10.5334/jors.280
B. Williams, Michael Q. Lindner
pyfMRIqc is a tool for checking the quality of raw functional magnetic resonance imaging (fMRI) data. It produces a range of output files that can be used to identify fMRI data quality issues such as artefacts, motion, and signal loss. The tool creates a number of 3D and 4D NIFTI files that can be used for in-depth quality assurance. Additionally, 2D images are created for each NIFTI file for a quick overview. These images and other information (e.g., signal-to-noise ratio, scan parameters) are combined in a user-friendly HTML output file. pyfMRIqc is written entirely in Python and is available under a GNU GPL3 license on GitHub (https://drmichaellindner.github.io/pyfMRIqc/). pyfMRIqc can be used from the command line and can therefore be included in a processing pipeline or used to quality-check a series of datasets via batch scripting. The quality assurance of a single dataset can also be performed via dialog boxes.
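pyfMRIqc's exact metrics are defined in its own documentation; as an independent sketch of the kind of signal-to-noise summary such a QA tool reports, the following computes the temporal SNR of a single voxel's time series (the data and function name are invented for illustration):

```python
import statistics

def temporal_snr(timeseries):
    """Temporal SNR of one voxel: mean signal divided by its standard
    deviation across time; low values flag noisy or artefactual voxels."""
    mean = statistics.fmean(timeseries)
    sd = statistics.pstdev(timeseries)
    return mean / sd if sd else float("inf")

stable = [100, 101, 99, 100, 100]   # quiet voxel
noisy = [100, 130, 70, 120, 80]     # strongly fluctuating voxel
print(temporal_snr(stable) > temporal_snr(noisy))  # → True
```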
Citations: 5
PyDDA: A Pythonic Direct Data Assimilation Framework for Wind Retrievals
Q1 Social Sciences | Pub Date: 2020-10-07 | DOI: 10.5334/jors.264
R. Jackson, S. Collis, T. Lang, C. Potvin, T. Munson
This software assimilates data from an arbitrary number of weather radars together with other spatial wind fields (e.g., numerical weather forecasting model data) in order to retrieve high-resolution three-dimensional wind fields. PyDDA uses NumPy and SciPy's optimization techniques combined with the Python Atmospheric Radiation Measurement (ARM) Radar Toolkit (Py-ART) to create wind fields using the 3D variational technique (3DVAR). PyDDA is hosted and distributed on GitHub at https://github.com/openradar/PyDDA. PyDDA has the potential to be used by the atmospheric science community to develop high-resolution wind retrievals from radar networks. These retrievals can be used for the evaluation of numerical weather forecasting models and for plume modelling. This paper shows how wind fields from two NEXt-generation RADar (NEXRAD) WSR-88D radars and the High-Resolution Rapid Refresh model can be assimilated together using PyDDA to create a high-resolution wind field inside Hurricane Florence.
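PyDDA's real 3DVAR cost function couples multiple radars with physical constraints such as mass continuity; the core variational idea can be caricatured in one dimension as finding the wind value that best compromises between a radar observation and a model background (weights, values, and the function name are invented for this sketch):

```python
def blend_wind(radar_obs, model_bg, w_obs=1.0, w_bg=0.5, lr=0.1, steps=200):
    """Gradient descent on the toy cost
    J(u) = w_obs*(u - radar_obs)**2 + w_bg*(u - model_bg)**2;
    the minimizer is the weight-weighted mean of the two sources."""
    u = 0.0
    for _ in range(steps):
        grad = 2 * w_obs * (u - radar_obs) + 2 * w_bg * (u - model_bg)
        u -= lr * grad
    return u

# Radar reports 12 m/s, model background says 9 m/s; the analysis
# settles at (1.0*12 + 0.5*9) / 1.5 = 11 m/s:
u = blend_wind(12.0, 9.0)
print(round(u, 6))  # → 11.0
```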
Citations: 8
esy-osmfilter – A Python Library to Efficiently Extract OpenStreetMap Data
Q1 Social Sciences | Pub Date: 2020-09-01 | DOI: 10.5334/jors.317
A. Pluta, Ontje Lünsdorf
OpenStreetMap is the largest freely accessible geographic database of the world. The necessary processing steps to extract information from this database, namely reading, converting, and filtering, can be very demanding in terms of computational time and disk space. esy-osmfilter is a Python library designed to read and filter OpenStreetMap data while economizing on disk space and computational time. It applies parallelized prefiltering to the OSM .pbf data files in order to quickly reduce the original data size, and it can store the prefiltered data to the hard drive. In the main filtering process, these prefiltered data can be reused repeatedly to identify different items with the help of more specialized main filters. Finally, the output can be exported to the GeoJSON format.
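The library's actual API operates on .pbf files; the two-stage prefilter/main-filter idea it describes can be sketched on toy in-memory elements (the element data and function names below are invented for illustration, not the library's API):

```python
# Toy stand-ins for decoded OSM elements (ids and tags invented):
elements = [
    {"id": 1, "tags": {"power": "line", "voltage": "380000"}},
    {"id": 2, "tags": {"highway": "residential"}},
    {"id": 3, "tags": {"power": "tower"}},
]

def prefilter(elems, keys):
    """Cheap first pass: keep any element carrying one of the tag keys."""
    return [e for e in elems if keys & e["tags"].keys()]

def mainfilter(elems, key, value):
    """Specialized second pass over the much smaller prefiltered set."""
    return [e["id"] for e in elems if e["tags"].get(key) == value]

pre = prefilter(elements, {"power"})      # elements 1 and 3 survive
print(mainfilter(pre, "power", "line"))   # → [1]
```

The point of the split is that the expensive full-file scan happens once, while many cheap, specialized main filters can then be run against the small prefiltered set.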
Citations: 8
Experiences with a Flexible User Research Process to Build Data Change Tools
Q1 Social Sciences | Pub Date: 2020-09-01 | DOI: 10.5334/jors.284
Drew Paine, D. Ghoshal, L. Ramakrishnan
Scientific software development processes are understood to be distinct from commercial software development practices due to the uncertain and evolving state of scientific knowledge. Sustaining these software products is a recognized challenge, but the usability and usefulness of such tools to their scientific end users is under-examined. User research is a well-established set of techniques (e.g., interviews, mockups, usability tests) applied in commercial software projects to develop foundational, generative, and evaluative insights about products and the people who use them. Currently these approaches are not commonly applied and discussed in scientific software development work. The use of user research techniques in scientific environments can be challenging due to the nascent, fluid problem spaces of scientific work, the varying scope of projects and their user communities, and funding and economic constraints on projects. In this paper, we reflect on our experiences undertaking a multi-method user research process in the Deduce project. The Deduce project is investigating data change to develop metrics, methods, and tools that will help scientists make decisions around data change. There is a lack of common terminology, since the concept of systematically measuring and managing data change is underexplored in scientific environments. To bridge this gap we conducted user research that focuses on user practices, needs, and motivations to help us design and develop metrics and tools for data change. This paper contributes reflections and the lessons we have learned from our experiences. We offer key takeaways for scientific software project teams to effectively and flexibly incorporate similar processes into their projects.
Citations: 0
CURSAT ver. 2.1: A Simple, Resampling-Based, Program to Generate Pseudoreplicates of Data and Calculate Rarefaction Curves
Q1 Social Sciences | Pub Date: 2020-08-21 | DOI: 10.5334/jors.260
G. Gentile
CURSAT ver. 2.1 is an open-source program written in QB64 BASIC, compilable into an executable file, that produces n pseudoreplicates of an empirical data set. The software supports resampling both with and without replacement. The number (n) of pseudoreplicates is set by the user. Pseudoreplicates can be exported to a file that can be opened by a spreadsheet, so they are permanently stored and available for the calculation of statistics of interest and their associated variance. The software also uses the n pseudoreplicate data sets to reconstruct n accumulation matrices, appended to an output file. Accumulation is applicable whenever repeated sample-based data must be evaluated for exhaustiveness, and many situations involve repeated sampling from the same set of observations. For example, if the data consist of species occurrences, the software can be used for biodiversity estimation by a wide spectrum of specialists such as ecologists, zoologists, botanists, biogeographers, and conservationists. The software performs accumulation irrespective of whether the input data set contains abundance (quantitative) or incidence (binary) data. Accumulation matrices can be imported into statistical packages to estimate distributions of successive pooling of samples and to depict accumulation and rarefaction curves with associated variance. CURSAT ver. 2.1 is released in two editions. Edition #1 is recommended for analysis, whereas Edition #2 generates a log file reporting the flow of internal steps of the resampling and accumulation routines; Edition #2 is primarily designed for educational purposes and quality checking. Funding statement: The software was developed with no specific funds.
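CURSAT itself is a compiled QB64 BASIC program; for readers who want the gist of the resampling and rarefaction logic it implements, here is a minimal Python sketch (function names, seed, and data are invented for illustration):

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def pseudoreplicate(samples):
    """One bootstrap pseudoreplicate: resample with replacement,
    keeping the original sample size."""
    return [rng.choice(samples) for _ in samples]

def rarefaction_curve(samples, n_reps=200):
    """Expected number of distinct species among the first k sampling
    units, averaged over shuffled orderings of the units."""
    curve = [0.0] * len(samples)
    for _ in range(n_reps):
        order = samples[:]
        rng.shuffle(order)
        seen = set()
        for k, s in enumerate(order):
            seen.add(s)
            curve[k] += len(seen)
    return [c / n_reps for c in curve]

samples = ["A", "A", "B", "C", "A", "B"]   # species recorded per sampling unit
rep = pseudoreplicate(samples)             # same size, drawn with replacement
curve = rarefaction_curve(samples)
print(len(rep), curve[-1])  # → 6 3.0
```

The last point of the curve always equals the total observed richness; a curve that is still rising steeply at the end suggests the sampling is not yet exhaustive.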
Citations: 0
Comfort Simulator: A Software Tool to Model Thermoregulation and Perception of Comfort
Q1 Social Sciences | Pub Date: 2020-07-20 | DOI: 10.5334/jors.288
J. Hussan, P. Hunter
Citations: 2
Janus: A Python Package for Agent-Based Modeling of Land Use and Land Cover Change
Q1 Social Sciences | Pub Date: 2020-06-25 | DOI: 10.5334/jors.306
K. Kaiser, A. Flores, C. Vernon
Citations: 5
bayest: An R Package for Effect-Size Targeted Bayesian Two-Sample t-Tests
Q1 Social Sciences | Pub Date: 2020-06-15 | DOI: 10.5334/jors.290
Riko Kelter
Citations: 11