This paper presents PyPortTickerSelector, an automated ticker selection library designed to identify top-performing tickers based on predefined and user-defined strategies. The library supports various methods for calculating multiple indicators and performance metrics. Users can customize the ticker selection process at every step, using built-in options or their own methods. The library achieves improved computational efficiency over manual analysis while maintaining approximately 90% test coverage for business logic. Validation includes comparison against benchmark performance, latency profiling, memory usage optimization, and statistical significance testing, addressing critical gaps in quantitative finance tooling. The library allows seamless integration with the PyPortOptimization Pipeline for portfolio construction.
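The customizable selection step can be pictured with a small sketch (this is not the library's actual API; the function names and the Sharpe-ratio metric are illustrative assumptions): rank tickers by a pluggable performance metric and keep the top k.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    """Annualized Sharpe ratio from a series of daily returns."""
    excess = np.asarray(returns) - risk_free
    return np.sqrt(252) * excess.mean() / excess.std(ddof=1)

def select_top_tickers(returns_by_ticker, k=2, metric=sharpe_ratio):
    """Rank tickers by a performance metric and keep the top k.

    The metric argument is pluggable, mirroring the idea of
    user-defined selection strategies.
    """
    scores = {t: metric(r) for t, r in returns_by_ticker.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Deterministic toy return series: AAA has the best risk-adjusted
# profile, CCC drifts negative.
data = {
    "AAA": np.tile([0.002, 0.0], 126),
    "BBB": np.tile([0.003, -0.002], 126),
    "CCC": np.tile([0.001, -0.002], 126),
}
top = select_top_tickers(data, k=2)
print(top)  # ['AAA', 'BBB']
```

Swapping in a different `metric` (maximum drawdown, Sortino ratio, etc.) changes the strategy without touching the selection loop.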
PyPortTickerSelector: A top tickers selection library using multiple indicators, performance metrics, strategies with benchmark. Rushikesh Nakhate, Harikrishnan Ramachandran, Neeraj Kumar Shukla. SoftwareX, vol. 33, Article 102506. Published 2026-02-01. DOI: 10.1016/j.softx.2025.102506
Pub Date: 2026-02-01. Epub Date: 2026-01-06. DOI: 10.1016/j.softx.2025.102490
Narayanarao Bhogapurapu, Paul Siqueira, Avik Bhattacharya
The current generation of Synthetic Aperture Radar (SAR) satellite missions, such as NASA-ISRO SAR (NISAR), ISRO’s EOS-04, ESA’s BIOMASS, and Sentinel-1, is starting to deliver petabytes of data annually. This volume of open-access SAR data opens up new opportunities for research and applications, but also presents significant software challenges. Traditional tools for working with polarimetric SAR (PolSAR) data are primarily GUI-based, difficult to scale, and unsuited for cloud-native workflows. To address these issues, we introduce polsartools, an open-source Python library designed for scalable and reproducible processing and analysis of PolSAR data. The library serves researchers and academics by supporting a variety of sensors and polarimetric modes. In addition to enabling cloud-native workflows through seamless integration with Jupyter-based platforms and cloud-optimized output formats, polsartools is also designed as a readable and modular reference implementation to support education, community adoption, and extensibility in polarimetric SAR processing. This article outlines the architecture, functionality, and design decisions behind polsartools, and offers insight into building modern, domain-specific scientific software that meets the demands of big data and open science.
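As a flavor of the kind of polarimetric processing such a library automates, here is a minimal NumPy sketch (not polsartools' API) of the Pauli decomposition of a monostatic scattering matrix, whose component powers drive the familiar PolSAR RGB composites:

```python
import numpy as np

def pauli_components(s_hh, s_hv, s_vv):
    """Pauli decomposition of a monostatic scattering matrix.

    Returns the powers of the three Pauli components, which are
    commonly associated with odd-bounce, even-bounce, and volume
    scattering mechanisms.
    """
    k = np.stack([s_hh + s_vv, s_hh - s_vv, 2.0 * s_hv]) / np.sqrt(2.0)
    return np.abs(k) ** 2

# An idealized trihedral (odd-bounce) scatterer: S_hh = S_vv, S_hv = 0.
p = pauli_components(np.array(1.0 + 0j), np.array(0j), np.array(1.0 + 0j))
print(p)  # all power lands in the first (odd-bounce) component
```

The same function applies element-wise to full images when the three channels are passed as 2-D arrays, which is the scalable, array-oriented style the library favors over GUI workflows.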
Polsartools: A cloud-native python library for processing open polarimetric SAR data at scale. SoftwareX, vol. 33, Article 102490.
Pub Date: 2026-02-01. Epub Date: 2025-12-09. DOI: 10.1016/j.softx.2025.102477
Catherine Gilbert, German Mandrini, Elhan Ersoz, Nicolas Martin
Agricultural research relies on accurate characterization of the growing environment in field trials. Thus, it is critical to describe the crop growing conditions at a particular trial location. We developed the seasonal characterization engine (SCE), an R Shiny app that allows researchers to generate seasonal profiles for a given set of trials. The SCE interfaces with APSIM to dynamically model crop development under the specified trial conditions and returns seasonal information to the user. Seasonal profiles are useful for environmental description and analysis in multi-environment crop varietal trials. Seasonal covariates, derived from these profiles, are useful, biologically relevant parameters for capturing environmental effects in models of crop adaptation. We anticipate that this application will be used by researchers and agronomists to facilitate the description of seasonal conditions and the collection of phenologically derived environmental information which may be used in subsequent modeling.
The seasonal characterization engine, an application for describing environment from the perspective of crop development. SoftwareX, vol. 33, Article 102477.
Pub Date: 2026-02-01. Epub Date: 2025-12-17. DOI: 10.1016/j.softx.2025.102467
Ransi Clark, Jonathan N. Katz, R. Michael Alvarez
This R package estimates multi-period differences-in-differences at the level of the treated unit. This allows more flexible aggregation over estimators whose most granular differences-in-differences estimate is at the treated time and is useful in applications where there is considerable heterogeneity within the treated time group or when units treated at the same time receive somewhat different treatments. For example, when units are treated with different doses, they can be aggregated on the basis of dose to derive a dose-response function. Regional heterogeneity, as illustrated by a cross-country study on democratization, is another example. The software’s calls have the same syntax as the did package and agree with those estimates when panels are balanced and covariates are not relevant.
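The canonical building block that such estimators aggregate can be illustrated in a few lines (a language-agnostic sketch in Python, not the R package's interface): the treated group's pre-to-post change, net of the control group's change over the same window.

```python
def did_2x2(treated_pre, treated_post, control_pre, control_post):
    """Canonical 2x2 difference-in-differences estimate.

    The treated group's change is compared against the control
    group's change over the same period, netting out common trends.
    """
    return (treated_post - treated_pre) - (control_post - control_pre)

# Treated units improve by 5; controls drift up by 2 over the same
# window, so the estimated treatment effect is 3.
effect = did_2x2(treated_pre=10.0, treated_post=15.0,
                 control_pre=8.0, control_post=10.0)
print(effect)  # 3.0
```

Keeping this estimate at the level of each treated unit, as the package does, is what allows later aggregation by dose, region, or any other unit attribute.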
didunit: a unit-level multi-period differences-in-differences estimator in R. SoftwareX, vol. 33, Article 102467.
OCA (Overload Compensation App) is an interactive Shiny web application that automates faculty overload pay calculations in accordance with institutional policy and enables users to visualize the results. Designed to promote transparency, reproducibility, and fairness, OCA allows academic administrators to filter, compute, and export overload data across instructors and departments. The app supports strategic blending between institution- and instructor-favoring approaches, offering both flexibility and clarity in compensation planning. OCA is open-source, released under the AGPL-3 license, and requires no programming expertise to use.
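The blending idea can be sketched as a linear interpolation between two rates; the parameter names and the linear form below are assumptions for illustration, not OCA's documented policy formula.

```python
def blended_overload_pay(hours, institution_rate, instructor_rate, weight):
    """Blend institution- and instructor-favoring pay rates.

    weight = 0 uses the institution-favoring rate, weight = 1 the
    instructor-favoring rate; values in between interpolate linearly.
    (Illustrative only: the linear blend is an assumption, not OCA's
    documented formula.)
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be in [0, 1]")
    rate = (1.0 - weight) * institution_rate + weight * instructor_rate
    return hours * rate

# Halfway blend of a $900 and a $1100 per-credit-hour rate over 3 hours.
pay = blended_overload_pay(hours=3, institution_rate=900.0,
                           instructor_rate=1100.0, weight=0.5)
print(pay)  # 3000.0
```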
OCA: A Shiny web application for transparent overload compensation in higher education. Dawit Aberra, Xiangyan Zeng, Chunhua Dong Mahon, Sanjeev Arora. SoftwareX, vol. 32, Article 102375. Published 2025-12-01. DOI: 10.1016/j.softx.2025.102375
Pub Date: 2025-12-01. Epub Date: 2025-10-21. DOI: 10.1016/j.softx.2025.102415
Grzegorz Stępień, Karol Kabała, Jakub Śledziowski
GeoPOINT is an open-source Python tool for generating, transforming, and analyzing 3D point datasets under geospatial constraints. It combines symbolic mathematics with white-box optimization to simulate realistic transformation scenarios, including rotation, translation, and measurement noise. It supports both synthetic clouds (ideal, Gaussian noise, geodetic error models) and real data in PCD/CSV formats. Rigid-body transformations can be applied with configurable noise and bias, enabling controlled testing of geodetic error propagation. Delivered as a Jupyter notebook, GeoPOINT is suitable for testing transformation accuracy, analyzing numerical stability, and teaching geodetic concepts. Its flexible architecture enables reproducible experiments, making it valuable for research, education, and offshore or mobile surveying applications.
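The core operation can be sketched in NumPy (illustrative, not GeoPOINT's API): a rigid-body transform about the z-axis with optional Gaussian measurement noise, of the kind used to test geodetic error propagation.

```python
import numpy as np

def rigid_transform(points, yaw_deg, translation, noise_sigma=0.0, seed=None):
    """Apply a rigid-body transform (rotation about z + translation)
    to an (N, 3) point cloud, optionally adding Gaussian noise to
    mimic measurement error."""
    a = np.radians(yaw_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    out = points @ rot.T + np.asarray(translation)
    if noise_sigma > 0.0:
        out += np.random.default_rng(seed).normal(0.0, noise_sigma, out.shape)
    return out

# Rotate a unit-x point by 90 degrees and lift it by 1 in z.
pts = np.array([[1.0, 0.0, 0.0]])
moved = rigid_transform(pts, yaw_deg=90.0, translation=[0.0, 0.0, 1.0])
print(moved)  # approximately [[0, 1, 1]]
```

Running the same transform with `noise_sigma > 0` and comparing against the noise-free result is the basic recipe for the controlled error-propagation tests the tool supports.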
GeoPOINT – synthetic point generator for geospatial applications. SoftwareX, vol. 32, Article 102415.
Pub Date: 2025-12-01. Epub Date: 2025-10-23. DOI: 10.1016/j.softx.2025.102424
Ulysses de Aguilar, Pablo C. Cañizares, Alberto Núñez
The Internet of Things (IoT) paradigm has experienced exponential growth in recent years, becoming a key component in the business strategies of leading global technology companies. However, IoT systems face critical challenges such as high mobility demands, low latency requirements, and significant bandwidth consumption. Fog computing has emerged as a viable solution to alleviate these challenges. This paradigm introduces intermediate layers between edge devices and centralised cloud systems, which reduces latency and alleviates bandwidth bottlenecks inherent in traditional computing models. Despite its potential, fog systems often require significant financial investment, either for proprietary infrastructure or pay-per-use services, which constrains their study to theoretical analyses.
To address these difficulties, we present Simcan2Fog, a discrete-event simulation platform for modelling and analysing fog computing environments. Built on OMNeT++ and INET – widely adopted frameworks for discrete-event simulation and network protocol modelling, respectively – Simcan2Fog provides highly detailed communication network models and enhanced capabilities to model sensors, actuators, controllers, applications, distribution algorithms, and interconnected fog devices. Additionally, it inherits cloud-computing-related functionalities such as virtualisation, data centres, cloud provider allocation policies, and user management from the Simcan2Cloud simulator. These features enable Simcan2Fog to simulate realistic IoT scenarios, offering detailed insights into performance metrics such as latency and resource utilisation.
Simcan2Fog: A discrete-event platform for the modelling and simulation of Fog computing environments. SoftwareX, vol. 32, Article 102424.
Pub Date: 2025-12-01. Epub Date: 2025-11-04. DOI: 10.1016/j.softx.2025.102427
Ryley G. Hill, Keegan S. Davis, Christopher W. Johnson
Predicting bulk behavior from microscale features constitutes a key objective in multiscale modeling research, often involving numerical models composed of finite elements that capture the diversity of constituent phases, shapes, and orientations within the material. The Grain2mesh toolbox allows the user to input unprocessed mesoscopic images for automatic segmentation, pre-processing, quality control, and numerical mesh generation. The mesh generation incorporates Cubit routines to produce robust multi-phase mesh structures for use in computational mechanics solvers. The Python classes developed contain detailed documentation and examples to support standard usage and case-specific alternative options.
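The segmentation step can be illustrated with a toy example (using `scipy.ndimage.label`; this is not Grain2mesh's code): connected-component labeling assigns each disconnected grain in a binarized image its own integer label, the precursor to per-phase meshing.

```python
import numpy as np
from scipy import ndimage

# A tiny synthetic "mesoscale image": two disconnected grains (1s)
# embedded in a background matrix (0s).
image = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
])

# Label 4-connected components; each grain gets a distinct integer.
labels, n_grains = ndimage.label(image)
print(n_grains)   # 2
print(labels)
```

In a real pipeline, each labeled region would then be contoured and handed to the meshing routines as a separate material block.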
Grain2mesh: A Python and cubit mesh generator from unprocessed mesoscale images. SoftwareX, vol. 32, Article 102427.
Pub Date: 2025-12-01. Epub Date: 2025-11-06. DOI: 10.1016/j.softx.2025.102435
J.A. Sergay, A. Hai, C. Franck
NeuroSpikeX is a user-friendly tool for the quantitative analysis of neuronal calcium dynamics. It provides robust calcium spike detection, comprehensive network metrics, and intuitive graphical interfaces. NeuroSpikeX seamlessly integrates into existing workflows using outputs from the established algorithm NeuroCa, enhancing accuracy and reproducibility. The code effectively analyzes calcium dynamics across numerous in vitro datasets containing multiple experimental time points. NeuroSpikeX facilitates detailed cell and network analyses in large datasets, making rigorous calcium transient characterization accessible to researchers with minimal coding expertise.
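A minimal stand-in for the spike-detection step (illustrative only; NeuroSpikeX's actual detector is more sophisticated): find upward threshold crossings in a synthetic ΔF/F fluorescence trace.

```python
import numpy as np

def detect_spikes(trace, threshold):
    """Indices where a fluorescence trace crosses the threshold
    upward -- a minimal stand-in for calcium spike detection."""
    above = trace > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets

# Synthetic noise-free dF/F trace with two calcium transients.
trace = np.zeros(100)
trace[20:25] = 1.0
trace[60:70] = 0.8
spikes = detect_spikes(trace, threshold=0.5)
print(spikes)  # [20 60]
```

Per-cell onset indices like these are the raw material from which spike rates, synchrony, and the other network metrics are computed.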
NeuroSpikeX: Comprehensive detection and characterization of neuronal calcium dynamics. SoftwareX, vol. 32, Article 102435.
Pub Date: 2025-12-01. Epub Date: 2025-09-24. DOI: 10.1016/j.softx.2025.102352
Yi Zhou, Yuhao Deng, Yu-Shi Tian, Peng Wu, Wenjie Hu, Haoxiang Wang, Ewout Steyerberg, Xiao-Hua Zhou
In precision medicine, deriving the individualized treatment rule (ITR) is crucial for recommending the optimal treatment based on patients’ baseline covariates. The covariate-specific treatment effect (CSTE) curve presents a graphical method to visualize an ITR within a causal inference framework. Recent advancements have enhanced the causal interpretation of CSTE curves and provided methods for deriving simultaneous confidence bands for various study types. To facilitate the implementation of these methods and make ITR estimation more accessible, we developed CSTEapp, a web-based application built on the R Shiny framework. CSTEapp allows users to upload data and create CSTE curves through simple “point and click” operations, making it the first application for estimating ITRs. CSTEapp simplifies the analytical process by providing interactive graphical user interfaces with dynamic results, enabling users to easily report optimal treatments for individual patients based on their covariate information. Currently, CSTEapp is applicable to studies with binary and time-to-event outcomes, and we continually expand its capabilities to accommodate other outcome types as new methods emerge. We demonstrate the utility of CSTEapp using real-world examples and simulation datasets. By making advanced statistical methods more accessible, CSTEapp empowers researchers and practitioners across various fields to advance precision medicine and improve patient outcomes.
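A crude, discretized stand-in for the idea behind a CSTE curve (illustrative Python, not CSTEapp's smoothed, confidence-banded estimator): within each bin of a baseline covariate, difference the mean outcomes of treated and control units.

```python
import numpy as np

def cste_by_bin(x, treat, y, bins):
    """Crude covariate-specific treatment effect: within each bin of
    covariate x, difference of mean outcomes between treated (1) and
    control (0) units. A discretized stand-in for a CSTE curve."""
    idx = np.digitize(x, bins)
    effects = {}
    for b in np.unique(idx):
        m = idx == b
        effects[b] = y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean()
    return effects

# Toy data: the treatment helps modestly for low x, strongly for high x.
x = np.array([0.1, 0.2, 0.8, 0.9, 0.15, 0.25, 0.85, 0.95])
treat = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y = np.array([2.0, 2.0, 5.0, 5.0, 1.0, 1.0, 1.0, 1.0])
effects = cste_by_bin(x, treat, y, bins=[0.5])
print(effects)  # effect of 1.0 in the low-x bin, 4.0 in the high-x bin
```

An ITR falls out directly: recommend treatment wherever the bin-level (or, in CSTEapp, the smoothed) effect estimate is reliably positive.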
CSTEapp: An interactive R-Shiny application of the covariate-specific treatment effect curve for visualizing individualized treatment rule. SoftwareX, vol. 32, Article 102352.