
Latest publications in SoftwareX

DREAMS: A python framework for training deep learning models on EEG data with model card reporting for medical applications
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-24 DOI: 10.1016/j.softx.2025.102140
Rabindra Khadka , Pedro G. Lind , Anis Yazidi , Asma Belhadi
Electroencephalography (EEG) provides a non-invasive way to observe brain activity in real time. Deep learning has enhanced EEG analysis, enabling meaningful pattern detection for clinical and research purposes. However, most existing frameworks for EEG data analysis focus on either preprocessing techniques or deep learning model development, often overlooking the crucial need for structured documentation and model interpretability. In this paper, we introduce DREAMS (Deep REport for AI ModelS), a Python-based framework designed to generate automated model cards for deep learning models applied to EEG data. Unlike generic model reporting tools, DREAMS is specifically tailored for EEG-based deep learning applications, incorporating domain-specific metadata, preprocessing details, performance metrics, and uncertainty quantification. The framework integrates seamlessly with deep learning pipelines, providing structured YAML-based documentation. We evaluate DREAMS through two case studies: an EEG emotion classification task using the FACED dataset and an abnormal EEG classification task using the Temple University Hospital (TUH) Abnormal dataset. These evaluations demonstrate how the generated model card enhances transparency by documenting model performance, dataset biases, and interpretability limitations. Unlike existing model documentation approaches, DREAMS provides visualized performance metrics, dataset alignment details, and model uncertainty estimates, making it a valuable tool for researchers and clinicians working with EEG-based AI. The source code for DREAMS is open source, facilitating broad adoption in healthcare AI, research, and ethical AI development.
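The abstract above describes structured YAML-based documentation. As a minimal sketch of what such a model card might contain, the following Python snippet writes a card with PyYAML; the field names, the write_model_card() helper, and all values are hypothetical placeholders, not DREAMS's actual API or results.

```python
# Hedged sketch: writing a model-card-style YAML record for an EEG model.
# The field names and the write_model_card() helper are illustrative only.
import yaml

def write_model_card(path, model_name, dataset, metrics, preprocessing, notes):
    """Serialize training metadata to a YAML model card."""
    card = {
        "model": model_name,
        "dataset": dataset,
        "preprocessing": preprocessing,
        "performance": metrics,
        "limitations": notes,
    }
    with open(path, "w") as f:
        yaml.safe_dump(card, f, sort_keys=False)

# Placeholder values for illustration only.
write_model_card(
    "model_card.yaml",
    model_name="eeg_emotion_cnn",
    dataset={"name": "FACED", "split": "subject-independent"},
    metrics={"accuracy": 0.0, "macro_f1": 0.0},
    preprocessing={"bandpass_hz": [0.5, 45], "sampling_rate_hz": 250},
    notes=["Uncertainty estimates not calibrated on external cohorts."],
)
```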
{"title":"DREAMS: A python framework for training deep learning models on EEG data with model card reporting for medical applications","authors":"Rabindra Khadka ,&nbsp;Pedro G. Lind ,&nbsp;Anis Yazidi ,&nbsp;Asma Belhadi","doi":"10.1016/j.softx.2025.102140","DOIUrl":"10.1016/j.softx.2025.102140","url":null,"abstract":"<div><div>Electroencephalography (EEG) provides a non-invasive way to observe brain activity in real time. Deep learning has enhanced EEG analysis, enabling meaningful pattern detection for clinical and research purposes. However, most existing frameworks for EEG data analysis are either focused on preprocessing techniques or deep learning model development, often overlooking the crucial need for structured documentation and model interpretability. In this paper, we introduce DREAMS (Deep REport for AI ModelS), a Python-based framework designed to generate automated model cards for deep learning models applied to EEG data. Unlike generic model reporting tools, DREAMS is specifically tailored for EEG-based deep learning applications, incorporating domain-specific metadata, preprocessing details, performance metrics, and uncertainty quantification. The framework seamlessly integrates with deep learning pipelines, providing structured YAML-based documentation. We evaluate DREAMS through two case studies: an EEG emotion classification task using the FACED dataset and a abnormal EEG classification task using the Temple University Hospital (TUH) Abnormal dataset. These evaluations demonstrate how the generated model card enhances transparency by documenting model performance, dataset biases, and interpretability limitations. Unlike existing model documentation approaches, DREAMS provides visualized performance metrics, dataset alignment details, and model uncertainty estimations, making it a valuable tool for researchers and clinicians working with EEG-based AI. The source code for DREAMS is open-source, facilitating broad adoption in healthcare AI, research, and ethical AI development.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102140"},"PeriodicalIF":2.4,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HITS: Hyperplanes intersection tabu search for maximum score estimation
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-24 DOI: 10.1016/j.softx.2025.102164
Kostas Florios , Alexandros Louka , Yannis Bilias
A tabu search algorithm is proposed for computing the maximum score estimator, with a focus on large sample sizes and large numbers of estimated parameters. The algorithm is deterministic rather than stochastic. The tabu search is much faster than simulated annealing while providing solutions of about the same quality. The software is provided as a Fortran console program and a C++ graphical user interface application. It is demonstrated using an empirical study of the labor force participation of married women. A comparison with Mixed Integer Programming is also provided.
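For context, the objective being maximized is the Manski-type score: the count of observations whose outcome sign is matched by the candidate coefficient vector. The snippet below evaluates that objective on toy data with NumPy; it illustrates only the objective, not the HITS tabu-search moves, and the score() helper and the data are invented.

```python
# Hedged sketch of the maximum score objective: sum_i (2*y_i - 1) * 1[x_i' beta >= 0].
import numpy as np

def score(beta, X, y):
    """Net count of observations whose sign is matched by the candidate beta."""
    return int(np.sum((2 * y - 1) * (X @ beta >= 0)))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # toy design matrix
beta_true = np.array([1.0, -0.5, 0.25])
y = (X @ beta_true + rng.logistic(size=500) >= 0).astype(int)

beta0 = np.array([1.0, 0.0, 0.0])        # candidate with a scale normalization on the first coefficient
print("score at candidate:", score(beta0, X, y))
```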
{"title":"HITS: Hyperplanes intersection tabu search for maximum score estimation","authors":"Kostas Florios ,&nbsp;Alexandros Louka ,&nbsp;Yannis Bilias","doi":"10.1016/j.softx.2025.102164","DOIUrl":"10.1016/j.softx.2025.102164","url":null,"abstract":"<div><div>A tabu search algorithm is proposed for the maximum score estimator computation, where the focus is on large sample size and large number of estimated parameters. This is a deterministic algorithm rather than a stochastic one. The tabu search is much faster than the Simulated Annealing, while providing solutions of about the same quality. The software is provided as a Fortran console program and a C++ Graphical User Interface application. It is demonstrated using an empirical study concerning labor force participation of married women. Comparison with Mixed Integer Programming is also provided.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102164"},"PeriodicalIF":2.4,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WSim4ABM: Agent-based Modelling simulation Web service with Message-broker middleware and Annotation processing library
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-23 DOI: 10.1016/j.softx.2025.102173
Duguma Yeshitla Habtemariam, Youngjin Kim, Minsoo Kim, Jihwan Lee
Agent-based modelling is a widely used paradigm for simulating complex systems that represent real-world phenomena. High-Performance Computing (HPC) resources are essential for modelling such systems at large scale. However, many existing Agent-based Modelling Simulation (ABMS) tools do not optimize simultaneous multi-user access to HPC resources because they are often built as monolithic software. To address this issue, an ABMS web service deployable on HPC resources is proposed, using MASON as its simulation core. The outcomes of this research include Gradle- and annotation-processing-based workflows that assist users' modelling experience, the integration of a message broker for scalability and robustness, and a web interface for managing user accounts, running simulations, and obtaining visualizations.
{"title":"WSim4ABM: Agent-based Modelling simulation Web service with Message-broker middleware and Annotation processing library","authors":"Duguma Yeshitla Habtemariam,&nbsp;Youngjin Kim,&nbsp;Minsoo Kim,&nbsp;Jihwan Lee","doi":"10.1016/j.softx.2025.102173","DOIUrl":"10.1016/j.softx.2025.102173","url":null,"abstract":"<div><div>Agent-based modelling is a widely used paradigm for simulating Complex Systems representing real-world phenomena. High-Performance Computing (HPC) resources are essential to model such systems on a large scale. However, many existing Agent-based Modelling Simulation (ABMS) tools do not optimize simultaneous multi-user access to HPC resources because they are often built as monolithic software. An ABMS web service that is deployable on HPC resources is proposed to address this issue using MASON as its simulation core. The outcomes of this research include workflows that include Gradle and Annotation processing which assist the modelling experience of users, integration of message broker for scalability and robustness, and a web interface for managing user accounts, running simulations, and obtaining visualizations.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102173"},"PeriodicalIF":2.4,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tampa Bay red tide tweet dashboard: Using Twitter/X to inform understanding of harmful algal blooms in the Tampa Bay region
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-23 DOI: 10.1016/j.softx.2025.102160
Fehmi Neffati , Andrey Skripnikov , Seamus Jackson , Tania Roy , Marcus Beck
Harmful algal blooms (HABs) of Karenia brevis, more commonly known as “red tide”, have been increasing in frequency and severity, presenting recurring environmental issues for Florida’s Gulf coast. While local resource managers typically use field-based measurements to assess the direct effects of red tide, e.g., dead fish counts and beach reports of respiratory irritation, alternative data sources that leverage social media to assess public perception and awareness have received less attention. With the exponential growth of social media over the past 15 years, these alternative data sources present potentially valuable opportunities to fill knowledge gaps in regard to public discourse around red tide. Using Twitter/X as the social media platform, we created a dashboard that summarizes text data and posting activity on red tide, focusing on the Tampa Bay area, which experienced substantial bloom events over the past few years. The dashboard provides multiple analytical summaries of the text data, including word clouds of most frequent terms, a heatmap of the most mentioned counties, and time series of posting frequency by term. This paper describes the dashboard architecture, deployment, functionality, and use cases. The dashboard was co-developed with regional stakeholders and researchers and is expected to have utility for local resource management organizations, along with a broader research community interested in studying HAB events. The final product is a novel source of information that produces additional insights into public knowledge and sentiment on red tide that can complement more conventional forms of in situ monitoring.
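One of the summaries mentioned above, posting frequency over time for a given term, can be pictured with a small pandas sketch; the column names and toy posts below are illustrative only and do not reflect the dashboard's actual data model.

```python
# Hedged sketch: weekly posting frequency for a search term, using pandas.
import pandas as pd

tweets = pd.DataFrame({
    "created_at": pd.to_datetime(["2021-07-01", "2021-07-01", "2021-07-08"]),
    "text": ["red tide near the pier", "red tide and dead fish", "beach looks fine today"],
})

term = "red tide"
mask = tweets["text"].str.contains(term, case=False)

# Count matching posts per week.
weekly_counts = (tweets.loc[mask]
                 .set_index("created_at")
                 .resample("W")["text"]
                 .count())
print(weekly_counts)
```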
{"title":"Tampa Bay red tide tweet dashboard: Using Twitter/X to inform understanding of harmful algal blooms in the Tampa Bay region","authors":"Fehmi Neffati ,&nbsp;Andrey Skripnikov ,&nbsp;Seamus Jackson ,&nbsp;Tania Roy ,&nbsp;Marcus Beck","doi":"10.1016/j.softx.2025.102160","DOIUrl":"10.1016/j.softx.2025.102160","url":null,"abstract":"<div><div>Harmful algal blooms (HABs) of Karenia brevis, more commonly known as “red tide”, have been increasing in frequency and severity, presenting recurring environmental issues for Florida’s Gulf coast. While local resource managers typically use field-based measurements to assess the direct effects of red tide, e.g., dead fish counts and beach reports of respiratory irritation, alternative data sources that leverage social media to assess public perception and awareness have received less attention. With the exponential growth of social media over the past 15 years, these alternative data sources present potentially valuable opportunities to fill knowledge gaps in regard to public discourse around red tide. Using Twitter/X as the social media platform, we created a dashboard that summarizes text data and posting activity on red tide, focusing on the Tampa Bay area, which experienced substantial bloom events over the past few years. The dashboard provides multiple analytical summaries of the text data, including word clouds of most frequent terms, a heatmap of the most mentioned counties, and time series of posting frequency by term. This paper describes the dashboard architecture, deployment, functionality, and use cases. The dashboard was co-developed with regional stakeholders and researchers and is expected to have utility for local resource management organizations, along with a broader research community interested in studying HAB events. The final product is a novel source of information that produces additional insights into public knowledge and sentiment on red tide that can complement more conventional forms of in situ monitoring.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102160"},"PeriodicalIF":2.4,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
India Policy Insights: A geospatial and temporal data science and visualization platform and architecture
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-22 DOI: 10.1016/j.softx.2025.102149
Devika Jain , Joaquin Kachinovsky , Gonzalo Rodriguez , Junyi Chen , Rockli Kim , S V Subramanian
The Geographic Insights Lab at Harvard University developed India Policy Insights (IPI), a spatio-temporal visualization platform for policymakers. IPI provides insights from 122 indicators across population, health, and socioeconomic metrics spanning 720 districts, 543 parliamentary constituencies, and 600,000 villages in India. Its applications include breastfeeding campaigns, policy development, and government reporting. It is fully deployed on Microsoft Azure using Docker, which ensures scalability and reproducibility. Built on an open-source stack with React, .NET, and PostGIS, it processes, stores, visualizes, and queries geospatial big data. This paper highlights IPI's architecture and methodologies for tackling public policy challenges.
{"title":"India Policy Insights: A geospatial and temporal data science and visualization platform and architecture","authors":"Devika Jain ,&nbsp;Joaquin Kachinovsky ,&nbsp;Gonzalo Rodriguez ,&nbsp;Junyi Chen ,&nbsp;Rockli Kim ,&nbsp;S V Subramanian","doi":"10.1016/j.softx.2025.102149","DOIUrl":"10.1016/j.softx.2025.102149","url":null,"abstract":"<div><div>The Geographic Insights Lab at Harvard University developed India Policy Insights (IPI), a spatio-temporal visualization platform for policymakers. IPI provides insights from 122 indicators across population, health, and socioeconomic metrics spanning 720 districts, 543 parliamentary constituencies, and 600,000 villages in India. Its applications include breastfeeding campaigns,policy development, and government reporting. It is fully deployed on Microsoft Azure using Docker, which ensures scalability and reproducibility. Built on an open-source stack with React,.NET, and PostGIS, it processes, stores, visualizes, and queries geospatial big data. This paper highlights IPI's architecture and methodologies for tackling public policy challenges.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102149"},"PeriodicalIF":2.4,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
bs-scheduler: A Batch Size Scheduler library compatible with PyTorch DataLoaders
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-22 DOI: 10.1016/j.softx.2025.102162
George Stoica, Mihaela Elena Breabăn
Deep learning models involve computationally intensive training experiments. Increasing the batch size improves the training speed and hardware efficiency by enabling deep neural networks to ingest and process more data in parallel. Inspired by learning rate adaptation policies that yield good results, methods that gradually adjust the batch size have been developed. These methods enhance hardware efficiency without compromising generalization performance. Despite their potential, such methods have not gained widespread popularity or adoption: unlike widely used learning rate policies, for which there is built-in support in most of the deep learning frameworks, the use of batch size adaptation policies requires custom implementations. We introduce an open-source package that implements batch size adaptation policies, which can be seamlessly integrated into deep learning training pipelines. This facilitates more efficient experimentation and accelerates research workflows.
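The package's own API is not shown in this listing; the sketch below only illustrates the underlying policy it automates, growing the batch size between epochs by rebuilding a standard PyTorch DataLoader. The schedule used here (doubling every two epochs, capped at 512) is an arbitrary example, not bs-scheduler's interface.

```python
# Hedged sketch of a batch size adaptation policy: rebuild the DataLoader with a
# larger batch size every few epochs. Not bs-scheduler's API, only the idea.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))

def make_loader(batch_size):
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)

batch_size = 32
for epoch in range(6):
    loader = make_loader(batch_size)
    for xb, yb in loader:
        pass                      # forward/backward pass would go here
    if (epoch + 1) % 2 == 0:      # double the batch size every 2 epochs
        batch_size = min(batch_size * 2, 512)
    print(f"epoch {epoch}: next batch size = {batch_size}")
```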
{"title":"bs-scheduler: A Batch Size Scheduler library compatible with PyTorch DataLoaders","authors":"George Stoica,&nbsp;Mihaela Elena Breabăn","doi":"10.1016/j.softx.2025.102162","DOIUrl":"10.1016/j.softx.2025.102162","url":null,"abstract":"<div><div>Deep learning models involve computationally intensive training experiments. Increasing the batch size improves the training speed and hardware efficiency by enabling deep neural networks to ingest and process more data in parallel. Inspired by learning rate adaptation policies that yield good results, methods that gradually adjust the batch size have been developed. These methods enhance hardware efficiency without compromising generalization performance. Despite their potential, such methods have not gained widespread popularity or adoption: unlike widely used learning rate policies, for which there is built-in support in most of the deep learning frameworks, the use of batch size adaptation policies requires custom implementations. We introduce an open-source package that implements batch size adaptation policies, which can be seamlessly integrated into deep learning training pipelines. This facilitates more efficient experimentation and accelerates research workflows.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102162"},"PeriodicalIF":2.4,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Depex: A software for analysing and reasoning about vulnerabilities in software projects dependencies
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-22 DOI: 10.1016/j.softx.2025.102152
Antonio Germán Márquez, Ángel Jesús Varela-Vaca, María Teresa Gómez López, José A. Galindo, David Benavides
This paper presents Depex, a tool that allows developers to reason over the entire configuration space of the dependencies of an open-source software repository. The dependency information is extracted from the repository's requirements files and the dependencies' package managers, generating a graph that includes information on the security vulnerabilities affecting each dependency. The dependency graph enables automatic reasoning through the creation of a Boolean satisfiability model based on Satisfiability Modulo Theories (SMT). Automatic reasoning enables operations such as identifying the safest dependency configuration or validating whether a particular configuration is secure. To demonstrate the impact of the proposal, it has been evaluated on more than 300 real open-source repositories from the Python Package Index (PyPI), Node Package Manager (NPM), and Maven Central (Maven), and compared with current commercial tools on the market.
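As a rough illustration of the SMT-based reasoning described above, the snippet below uses the z3 Python bindings to ask for a dependency configuration that picks exactly one version per package while excluding a vulnerable one. The package names, versions, and constraints are invented and do not reflect Depex's actual encoding.

```python
# Hedged sketch: encode version choices as Booleans, forbid vulnerable versions,
# and let the solver find a safe configuration. Not Depex's model.
from z3 import Bool, Solver, Or, Not, And, is_true, sat

# One Boolean per candidate version (hypothetical packages).
libA_1, libA_2 = Bool("libA==1.0"), Bool("libA==2.0")
libB_3 = Bool("libB==3.1")

s = Solver()
s.add(Or(libA_1, libA_2), Not(And(libA_1, libA_2)))   # exactly one version of libA
s.add(libB_3)                                          # libB is pinned
s.add(Not(libA_1))                                     # libA 1.0 has a known CVE -> exclude it

if s.check() == sat:
    m = s.model()
    print({d.name(): is_true(m[d]) for d in m.decls()})
```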
{"title":"Depex: A software for analysing and reasoning about vulnerabilities in software projects dependencies","authors":"Antonio Germán Márquez,&nbsp;Ángel Jesús Varela-Vaca,&nbsp;María Teresa Gómez López,&nbsp;José A. Galindo,&nbsp;David Benavides","doi":"10.1016/j.softx.2025.102152","DOIUrl":"10.1016/j.softx.2025.102152","url":null,"abstract":"<div><div>This paper presents Depex, a tool that allows developers to reason over the entire configuration space of the dependencies of an open-source software repository. The dependency information is extracted from the repository requirements files and the package managers of the dependencies, generating a graph that includes information regarding security vulnerabilities affecting the dependencies. The dependency graph allows automatic reasoning through the creation of a Boolean satisfiability model based on Satisfiability Modulo Theories (SMT). Automatic reasoning lets operations such as identifying the safest dependency configuration or validating if a particular configuration is secure. To demonstrate the impact of the proposal, it has been evaluated on more than 300 real open-source repositories of Python Package Index (PyPI), Node Package Manager (NPM) and Maven Central (Maven), as well as compared with current commercial tools on the market.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102152"},"PeriodicalIF":2.4,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Chopin: An open source R-language tool to support spatial analysis on parallelizable infrastructure
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-21 DOI: 10.1016/j.softx.2025.102167
Insang Song , Kyle P. Messier
This study introduces chopin, an R package that lowers the technical barriers to parallelizing geocomputation. Supporting popular R spatial-analysis libraries, chopin exploits parallel computing by partitioning data involved in each task. Partitioning can occur with regular grids, hierarchical units, or multiple file inputs, accommodating diverse input types and ensuring interoperability. This approach scales geospatial covariate calculations to match available processing power, from laptop computers to high-performance computers, reducing execution times proportional to the number of processing units. chopin is expected to benefit a broad range of research communities working with large-scale geospatial data, providing an efficient, flexible, and accessible tool for scaling geospatial computations.
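chopin itself is an R package, so the following Python sketch only illustrates the general partition-then-parallelize strategy it applies to geocomputation: split the study area into regular grid tiles and map an expensive per-tile computation over worker processes. The tile bounds and the per-tile summary are hypothetical.

```python
# Hedged sketch of partition-then-parallelize over grid tiles (not chopin's R API).
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def summarize_tile(bounds):
    """Stand-in for an expensive per-tile spatial computation (e.g., a zonal mean)."""
    xmin, ymin, xmax, ymax = bounds
    rng = np.random.default_rng(int(xmin * 100 + ymin))
    return bounds, float(rng.random(1000).mean())

# Partition the study area into regular grid tiles, then map over them in parallel.
tiles = [(x, y, x + 1.0, y + 1.0) for x in range(4) for y in range(4)]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(summarize_tile, tiles))
    print(len(results), "tiles processed")
```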
{"title":"Chopin: An open source R-language tool to support spatial analysis on parallelizable infrastructure","authors":"Insang Song ,&nbsp;Kyle P. Messier","doi":"10.1016/j.softx.2025.102167","DOIUrl":"10.1016/j.softx.2025.102167","url":null,"abstract":"<div><div>This study introduces <span>chopin</span>, an R package that lowers the technical barriers to parallelizing geocomputation. Supporting popular R spatial-analysis libraries, <span>chopin</span> exploits parallel computing by partitioning data involved in each task. Partitioning can occur with regular grids, hierarchical units, or multiple file inputs, accommodating diverse input types and ensuring interoperability. This approach scales geospatial covariate calculations to match available processing power, from laptop computers to high-performance computers, reducing execution times proportional to the number of processing units. <span>chopin</span> is expected to benefit a broad range of research communities working with large-scale geospatial data, providing an efficient, flexible, and accessible tool for scaling geospatial computations.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102167"},"PeriodicalIF":2.4,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SEPO-IR: Software-based evaluation process for calculating infection rate
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-17 DOI: 10.1016/j.softx.2025.102159
Hongseok Oh , Hanseul Oh , Jaemin Jeong , Soochong Kim , Kyungchang Jeong , Sang-Hwan Hyun , Ji-Hoon Jeong , Young-Duk Seo , Euijong Lee
Pathology is crucial for understanding and treating diseases, and is heavily based on objective and quantitative criteria. While advances in immunohistochemistry (IHC) and digital pathology (DP) have significantly improved methods for quantitative disease detection, existing research has primarily focused on the detection of abnormal biomarkers. As a result, the quantitative assessment of infection extent has frequently been overlooked owing to technical difficulties, particularly in feature extraction. To address these issues, we propose an automated image-based system for calculating tissue infection rates. This system accurately determines the proportion of infected areas, reducing human bias and increasing efficiency, resulting in more reliable diagnostics and treatment planning. Validation of the proposed method shows a very high correlation with pathologists’ assessments. Furthermore, this software is an easy-to-use application that can significantly improve DP research.
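The quantity at the core of the abstract, an infection rate, can be read as the fraction of tissue pixels flagged as infected. The snippet below computes such a ratio from a plain RGB array with a crude intensity threshold; the threshold, the grayscale conversion, and the infection_rate() helper are illustrative assumptions, not SEPO-IR's actual pipeline.

```python
# Hedged sketch: infection rate = infected tissue pixels / all tissue pixels.
import numpy as np

def infection_rate(image, tissue_mask, infected_threshold=0.6):
    """Fraction of tissue pixels whose normalized intensity exceeds a threshold."""
    intensity = image.mean(axis=-1) / 255.0        # crude grayscale from an RGB array
    infected = (intensity > infected_threshold) & tissue_mask
    return infected.sum() / max(int(tissue_mask.sum()), 1)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # toy image
mask = np.ones((64, 64), dtype=bool)                           # toy tissue mask
print(f"infection rate: {infection_rate(img, mask):.2%}")
```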
{"title":"SEPO-IR: Software-based evaluation process for calculating infection rate","authors":"Hongseok Oh ,&nbsp;Hanseul Oh ,&nbsp;Jaemin Jeong ,&nbsp;Soochong Kim ,&nbsp;Kyungchang Jeong ,&nbsp;Sang-Hwan Hyun ,&nbsp;Ji-Hoon Jeong ,&nbsp;Young-Duk Seo ,&nbsp;Euijong Lee","doi":"10.1016/j.softx.2025.102159","DOIUrl":"10.1016/j.softx.2025.102159","url":null,"abstract":"<div><div>Pathology is crucial for understanding and treating diseases, and is heavily based on objective and quantitative criteria. While advances in immunohistochemistry (IHC) and digital pathology (DP) have significantly improved methods for quantitative disease detection, existing research has primarily focused on the detection of abnormal biomarkers. As a result, the quantitative assessment of infection extent has frequently been overlooked owing to technical difficulties, particularly in feature extraction. To address these issues, we propose an automated image-based system for calculating tissue infection rates. This system accurately determines the proportion of infected areas, reducing human bias and increasing efficiency, resulting in more reliable diagnostics and treatment planning. Validation of the proposed method shows a very high correlation with pathologists’ assessments. Furthermore, this software is an easy-to-use application that can significantly improve DP research.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102159"},"PeriodicalIF":2.4,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143839737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
QuaDS: A qualitative/quantitative descriptive statistics Python module
IF 2.4 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-04-16 DOI: 10.1016/j.softx.2025.102158
A. Bouanich , A. El Ghaziri , P. Santagostini , A. Pernet , C. Landès , J. Bourbeillon
In this research, we introduce a new Python bioinformatics tool. QuaDS (Quantitative/Qualitative Description Statistics) is a pipeline tailored to describe a factor (a qualitative variable of interest) in heterogeneous datasets consisting of qualitative and quantitative variables. The pipeline separately analyzes the variables related to the factor using appropriate statistical tests, and it offers an interactive visualization that describes the factor. Several parameters can be defined by the user to ensure the most personalized results based on their data.
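As an example of the kind of per-variable testing the abstract describes, the snippet below relates a qualitative variable to the factor with a chi-square test and a quantitative variable with a one-way ANOVA, using SciPy; the column names and data are invented, and this is not QuaDS's API.

```python
# Hedged sketch: describe a factor by testing each variable with an appropriate test.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "factor": ["A", "A", "B", "B", "A", "B"],
    "color":  ["red", "red", "blue", "blue", "blue", "red"],   # qualitative variable
    "height": [10.2, 11.1, 14.3, 13.8, 10.9, 14.0],            # quantitative variable
})

# Qualitative variable: chi-square test of independence against the factor.
chi2, p_qual, _, _ = stats.chi2_contingency(pd.crosstab(df["factor"], df["color"]))

# Quantitative variable: one-way ANOVA across factor levels.
groups = [g["height"].values for _, g in df.groupby("factor")]
f_stat, p_quant = stats.f_oneway(*groups)

print(f"color  vs factor: p = {p_qual:.3f}")
print(f"height vs factor: p = {p_quant:.3f}")
```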
{"title":"QuaDS: A qualitative/quantitative descriptive statistics Python module","authors":"A. Bouanich ,&nbsp;A. El Ghaziri ,&nbsp;P. Santagostini ,&nbsp;A. Pernet ,&nbsp;C. Landès ,&nbsp;J. Bourbeillon","doi":"10.1016/j.softx.2025.102158","DOIUrl":"10.1016/j.softx.2025.102158","url":null,"abstract":"<div><div>In this research, we introduce a new Python bioinformatics tool. QuaDS (Quantitative/Qualitative Description Statistics) is a pipeline tailored to describe a factor (a qualitative variable of interest) in heterogeneous datasets consisting of qualitative and quantitative variables. This pipeline separately analyze s the variables related to the factor using appropriate statistical tests. The QuaDS pipeline offers an interactive visualization that describes the factor. Several parameters can be defined by the user to ensure the most personalized results based on their data.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102158"},"PeriodicalIF":2.4,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143833342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0