Pub Date: 2025-10-01 | DOI: 10.1016/j.simpa.2025.100791
Adam McArthur, Stephanie Wichuk, Stephen Burnside, Andrew Kirby, Alexander Scammon, Damian Sol, Abhilash Hareendranathan, Jacob L. Jaremko
Developmental dysplasia of the hip (DDH) poses significant diagnostic challenges, hindering timely intervention. Current screening methodologies lack standardization, and AI-driven studies suffer from reproducibility issues due to limited data and code availability. To address these limitations, we introduce Retuve, an open-source framework for multi-modality DDH analysis encompassing both ultrasound (US) and X-ray imaging. Retuve provides a complete and reproducible workflow: open datasets of expert-annotated US and X-ray images, pre-trained models with training code and weights, and a user-friendly Python Application Programming Interface (API). The framework integrates segmentation and landmark detection models, enabling automated measurement of key diagnostic parameters such as the alpha angle and acetabular index. By adhering to open-source principles, Retuve promotes transparency, collaboration, and accessibility in DDH research, and by enabling widespread screening and early intervention it can democratize DDH diagnosis and improve patient outcomes. The code is available at: https://github.com/radoss-org/retuve
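As a point of reference for the alpha-angle measurement mentioned above, here is a minimal geometric sketch in Python. It is not Retuve's implementation; the landmark lines and coordinates are hypothetical, standing in for what a segmentation model would output.

    import math

    def angle_between(p, q, r, s):
        """Return the acute angle in degrees between line p->q and line r->s."""
        a1 = math.atan2(q[1] - p[1], q[0] - p[0])
        a2 = math.atan2(s[1] - r[1], s[0] - r[0])
        deg = abs(math.degrees(a1 - a2)) % 180.0
        return min(deg, 180.0 - deg)

    # Hypothetical landmark coordinates (pixels) from a segmented ultrasound frame.
    baseline = ((0.0, 100.0), (200.0, 100.0))    # iliac wing baseline
    bony_roof = ((60.0, 100.0), (110.0, -2.5))   # bony acetabular roof line

    alpha = angle_between(*baseline, *bony_roof)
    print(f"alpha angle ~ {alpha:.1f} degrees (Graf: >= 60 is typically normal)")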
{"title":"Retuve: Automated multi-modality analysis of hip dysplasia with open source AI","authors":"Adam McArthur , Stephanie Wichuk , Stephen Burnside , Andrew Kirby , Alexander Scammon , Damian Sol , Abhilash Hareendranathan , Jacob L. Jaremko","doi":"10.1016/j.simpa.2025.100791","DOIUrl":"10.1016/j.simpa.2025.100791","url":null,"abstract":"<div><div>Developmental dysplasia of the hip (<strong>DDH</strong>) poses significant diagnostic challenges, hindering timely intervention. Current screening methodologies lack standardization, and AI-driven studies suffer from reproducibility issues due to limited data and code availability. To address these limitations, we introduce Retuve, an open-source framework for multi-modality <strong>DDH</strong> analysis, encompassing both ultrasound (<strong>US</strong>) and X-ray imaging. Retuve provides a complete and reproducible workflow, offering open datasets comprising expert-annotated <strong>US</strong> and X-ray images, pre-trained models with training code and weights, and a user-friendly Python Application Programming Interface (<strong>API</strong>). The framework integrates segmentation and landmark detection models, enabling automated measurement of key diagnostic parameters such as the alpha angle and acetabular index. By adhering to open-source principles, Retuve promotes transparency, collaboration, and accessibility in <strong>DDH</strong> research. This framework can democratize <strong>DDH</strong> screening, facilitate early diagnosis, and improve patient outcomes by enabling widespread screening and early intervention. The GitHub repository/code can be found here: <span><span>https://github.com/radoss-org/retuve</span><svg><path></path></svg></span></div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"26 ","pages":"Article 100791"},"PeriodicalIF":1.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | DOI: 10.1016/j.simpa.2025.100785
Zhe Wang, Zunfan Chen, Zhigang Wang, Sheng Yang, Xiaolin Yang, Heinrich Herre, Yan Zhu
This paper presents Py2ONTO-Edit, an ontology editing tool that builds on the low-level functionality of Owlready2 to simplify the extraction and translation of ontology terms. It offers two extraction methods: global extraction and selective-depth extraction. Another key feature is the translation of ontology terms using multiple translation packages to add non-English labels (e.g., Chinese, French, German) to the ontology. The paper's two main contributions are flexible features for term extraction and multilingual translation of ontology terms. Py2ONTO-Edit is an easy-to-use Python tool for developers focused on ontology term reuse and translation.
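To make the two features concrete, here is a sketch using Owlready2 directly, which Py2ONTO-Edit builds on; the tool's own API may differ. The ontology IRI is a placeholder, and translate() is a stand-in for a real translation package.

    from owlready2 import get_ontology, locstr

    onto = get_ontology("https://example.org/demo.owl").load()  # placeholder IRI

    def translate(text, target_lang):
        # Stand-in for a translation package; hypothetical lookup only.
        return {"Heart": "心脏"}.get(text, text)

    for cls in onto.classes():                   # "global extraction": every class
        for label in list(cls.label):
            zh = translate(str(label), "zh")
            cls.label.append(locstr(zh, lang="zh"))  # attach a Chinese label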
{"title":"Py2ONTO-Edit: A python-based tool for ontology term extraction and translation","authors":"Zhe Wang , Zunfan Chen , Zhigang Wang , Sheng Yang , Xiaolin Yang , Heinrich Herre , Yan Zhu","doi":"10.1016/j.simpa.2025.100785","DOIUrl":"10.1016/j.simpa.2025.100785","url":null,"abstract":"<div><div>This paper presents Py2ONTO-Edit, an ontology editing tool that integrates the low-level functionality of Owlready2 to simplify the extraction and translation of ontology terms. It offers two extraction methods: 1. Global extraction method. 2. Selective-depth extraction method. Another key feature is the translation of ontology terms using multiple translation packages to add non-English labels (e.g., Chinese, French, German) to the ontology. This paper presents two main contributions: 1. Implementation of flexible features for term extraction. 2. Enabling of multilingual translation of ontology terms. Py2ONTO-Edit is an easy-to-use Python tool for developers focused on ontology term reuse and translation.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"26 ","pages":"Article 100785"},"PeriodicalIF":1.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01 | DOI: 10.1016/j.simpa.2025.100793
Ali Seyfi, Asgarali Bouyer, Amin Golzari Oskouei, Bahman Arasteh, Leila Hassani
In this paper, an innovative architecture based on deep neural networks is presented. Initially, node and layer features are extracted as feature vectors. Each vector is then passed through a deep multilayer perceptron (MLP) network for enrichment. Using the Hadamard product, these vectors are multiplied element-wise to form a matrix. To analyze feature interactions, this matrix is then fed into a stack of Transformer encoders. Finally, an MLP network is used as a regression model to predict the influence power of the nodes.
Pub Date: 2025-09-25 | DOI: 10.1016/j.simpa.2025.100788
Sergio R. Geninatti, Manuel Ortiz-Lopez, José Luis Ávila-Jiménez, José M. Flores, Francisco J. Rodriguez-Lozano
Evaluating hives is arduous and time-consuming for beekeepers. One recurring task is assessing the honey in the combs to determine the surface area it covers, as this is directly related to the health of the hive. Very few software tools designed specifically for beekeeping currently exist to ease this work. This paper therefore presents HoneySeg, a Python-based application for segmenting honey zones in honeycomb images and calculating the honey-covered area. It is an open-source tool designed specifically for beekeeping and requires no prior training on the beekeeper's part.
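For illustration, a generic HSV color-threshold sketch for estimating honey coverage in a comb photo; this is not HoneySeg's algorithm, and the file name and HSV bounds are assumptions.

    import cv2
    import numpy as np

    img = cv2.imread("honeycomb.jpg")              # hypothetical input image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Rough amber/gold range; real thresholds depend on lighting and camera.
    lower, upper = np.array([10, 80, 80]), np.array([30, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    honey_ratio = cv2.countNonZero(mask) / mask.size
    print(f"estimated honey coverage: {100 * honey_ratio:.1f}% of the frame")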
{"title":"HoneySeg: A segmentation tool for detecting honey areas in honeycombs","authors":"Sergio R. Geninatti , Manuel Ortiz-Lopez , José Luis Ávila-Jiménez , José M. Flores , Francisco J. Rodriguez-Lozano","doi":"10.1016/j.simpa.2025.100788","DOIUrl":"10.1016/j.simpa.2025.100788","url":null,"abstract":"<div><div>The task of evaluating hives is arduous and time-consuming for beekeepers. One of the tasks involves evaluating the honey in the combs to determine the available surface area, as this is directly related to the health of the hive. Currently, there are very few software tools specifically designed for beekeeping that help alleviate the work of beekeepers. Therefore, this paper presents <em>HoneySeg</em>, a Python-based application for calculating honey area and segmenting honey zones in images of honeycombs. It is an open-source tool designed specifically for beekeeping that does not require prior training for use by the beekeeper.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"26 ","pages":"Article 100788"},"PeriodicalIF":1.2,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145159127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-16 | DOI: 10.1016/j.simpa.2025.100784
Baijian Wu, Gang Yu
STFATool is a professional signal-processing application implemented in Python. It integrates several state-of-the-art sparse time–frequency analysis algorithms, including the Synchroextracting Transform, Transient-Extracting Transform, Multisynchrosqueezing Transform, and Time-Reassigned Multisynchrosqueezing Transform. Through its user-friendly interface, users can import signals for detailed time–frequency feature visualization and processing, enabling efficient extraction of critical signal characteristics.
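As background, the sparse methods listed above all sharpen the blurred energy distribution of a plain short-time Fourier transform. A baseline STFT of a chirp with SciPy, with signal parameters chosen arbitrarily:

    import numpy as np
    from scipy.signal import stft

    fs = 1000.0
    t = np.arange(0, 2.0, 1 / fs)
    x = np.cos(2 * np.pi * (50 * t + 40 * t**2))  # linear chirp: 50 Hz -> 210 Hz

    f, tau, Zxx = stft(x, fs=fs, nperseg=256)
    ridge = f[np.abs(Zxx).argmax(axis=0)]         # crude ridge estimate per frame
    print(ridge[:5], ridge[-5:])                  # rises with the chirp's frequency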
{"title":"STFATool: A Sparse Time–Frequency Analysis Toolkit for non-stationary signals","authors":"Baijian Wu, Gang Yu","doi":"10.1016/j.simpa.2025.100784","DOIUrl":"10.1016/j.simpa.2025.100784","url":null,"abstract":"<div><div>STFATool is a professional signal-processing application implemented in Python. It integrates several state-of-the-art sparse time–frequency analysis algorithms, including Synchroextracting Transform, Transient-Extracting Transform, Multisynchrosqueezing Transform, and Time-Reassigned Multisynchrosqueezing Transform. It provides a user-friendly interface, users can import signals for detailed time–frequency feature visualization and processing, enabling efficient extraction of critical signal characteristics.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"26 ","pages":"Article 100784"},"PeriodicalIF":1.2,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145107849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-01 | DOI: 10.1016/j.simpa.2025.100783
Hardik Ruparel, Tatsat Patel
Retrieval-Augmented Generation (RAG) systems enhance large language models (LLMs) with external knowledge retrieval but incur significant compute and latency costs. In distributed RAG deployments, semantically similar queries routed to different nodes — each with its own cache — can lead to redundant processing. We present RAGCacheSim, a discrete-event simulator for evaluating caching strategies such as Centralized Exact-match Cache (CEC), Independent Semantic Caches (IC), and Distributed Semantic Cache Coordination (DSC). It reports metrics like cache hit rate, average query latency, and coordination overhead. Built using SimPy, FastEmbed, and pybloom_live, it helps researchers optimize distributed RAG architectures.
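A toy sketch of the semantic-cache idea in SimPy: a query counts as a hit when a cached embedding is cosine-similar enough. The clustered random embeddings stand in for FastEmbed vectors, and the threshold and latencies are assumptions, not the simulator's defaults.

    import numpy as np
    import simpy

    rng = np.random.default_rng(0)
    centers = rng.standard_normal((3, 8))        # stand-in topic clusters
    cache, hits, total = [], 0, 0

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def client(env):
        global hits, total
        while True:
            q = centers[rng.integers(3)] + 0.1 * rng.standard_normal(8)
            total += 1
            if any(cosine(q, c) > 0.9 for c in cache):
                hits += 1
                yield env.timeout(0.01)          # hit: answer served from cache
            else:
                cache.append(q)
                yield env.timeout(1.0)           # miss: full RAG pipeline runs

    env = simpy.Environment()
    env.process(client(env))
    env.run(until=100)
    print(f"hit rate: {hits / total:.2f}")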
{"title":"RAGCacheSim: A discrete-event simulator for evaluating caching strategies in Retrieval-Augmented Generation systems","authors":"Hardik Ruparel, Tatsat Patel","doi":"10.1016/j.simpa.2025.100783","DOIUrl":"10.1016/j.simpa.2025.100783","url":null,"abstract":"<div><div>Retrieval-Augmented Generation (RAG) systems enhance large language models (LLMs) with external knowledge retrieval but incur significant compute and latency costs. In distributed RAG deployments, semantically similar queries routed to different nodes — each with its own cache — can lead to redundant processing. We present <em>RAGCacheSim</em>, a discrete-event simulator for evaluating caching strategies such as Centralized Exact-match Cache (CEC), Independent Semantic Caches (IC), and Distributed Semantic Cache Coordination (DSC). It reports metrics like cache hit rate, average query latency, and coordination overhead. Built using <span>SimPy</span>, <span>FastEmbed</span>, and <span>pybloom_live</span>, it helps researchers optimize distributed RAG architectures.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"26 ","pages":"Article 100783"},"PeriodicalIF":1.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145048507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-01 | DOI: 10.1016/j.simpa.2025.100776
Santana Yuda Pradata, Muhammad Alfian Amrizal, Ahmad Ridwan Tresna Nugraha, Reza Pulungan
Wireless sensor networks (WSNs) are crucial for various real-life applications, from environmental and health monitoring systems to home and industrial automation. However, these networks face challenges in failure-prone environments, where sensor nodes must conserve energy while ensuring data reliability. We introduce FaultNet-Sim, a multithreaded simulator that facilitates the development of optimization strategies for balancing energy consumption and data reliability by tuning data transfer intervals in WSNs. The simulator can model different failure conditions and various time-division multiple access (TDMA)-based scheduling techniques, allowing users to analyze the trade-offs between data loss and energy consumption. With customizable parameters, FaultNet-Sim is a valuable tool for researchers looking to improve the resilience and efficiency of WSNs in real-world applications.
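The core trade-off can be illustrated with a toy model (not FaultNet-Sim's code; all constants are made up): longer transfer intervals save transmission energy but lose more buffered readings when a node fails.

    import random

    random.seed(1)

    def run(interval, slots=10_000, p_fail=0.001, tx_cost=5.0):
        energy, lost, buffered = 0.0, 0, 0
        for slot in range(slots):
            buffered += 1                        # one reading per slot
            if random.random() < p_fail:
                lost += buffered                 # node fails, buffer is gone
                buffered = 0
            elif slot % interval == 0:
                energy += tx_cost                # TDMA slot used for transfer
                buffered = 0
        return energy, lost

    for interval in (1, 10, 100):
        energy, lost = run(interval)
        print(f"interval={interval:>3}: energy={energy:8.0f}, lost readings={lost}")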
{"title":"FaultNet-Sim: A C++ simulator for failure-prone wireless sensor networks","authors":"Santana Yuda Pradata , Muhammad Alfian Amrizal , Ahmad Ridwan Tresna Nugraha , Reza Pulungan","doi":"10.1016/j.simpa.2025.100776","DOIUrl":"10.1016/j.simpa.2025.100776","url":null,"abstract":"<div><div>Wireless sensor networks (WSNs) are crucial for various real-life applications, from environmental and health monitoring systems to home and industrial automation. However, these networks face challenges in failure-prone environments, where sensor nodes must conserve energy while ensuring data reliability. We introduce FaultNet-Sim, a multithreaded simulator that facilitates the development of optimization strategies for balancing energy consumption and data reliability by tuning data transfer intervals in WSNs. The simulator can model different failure conditions and various time-division multiple access (TDMA)-based scheduling techniques, allowing users to analyze the trade-offs between data loss and energy consumption. With customizable parameters, FaultNet-Sim is a valuable tool for researchers looking to improve the resilience and efficiency of WSNs in real-world applications.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"25 ","pages":"Article 100776"},"PeriodicalIF":1.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-01 | DOI: 10.1016/j.simpa.2025.100782
Md. Masudur Rahman, Zenun Chowdhury, Raqeebir Rab
Program comprehensibility plays a significant role in software maintenance by enhancing code readability. Although comprehensibility is inherently subjective, various methods to assess it have emerged in recent years. Most of these approaches focus on structural characteristics of source code, such as lines of code, number of identifiers, and cyclomatic complexity. However, textual elements are equally vital, as they directly influence how humans interpret and understand code. In this paper, we present an approach that evaluates program comprehensibility based on the textual readability of source code, reflecting how it is perceived by human readers. We developed a tool implementing this approach and validated its effectiveness by comparing its output with manual evaluations of code comprehensibility. The results showed complete agreement, indicating that the tool produces reliable comprehensibility scores. The tool can support developers by identifying segments of code that are harder to comprehend, enabling targeted refactoring to improve overall readability.
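As an illustration of scoring comprehension from textual rather than structural features, a small probe that asks how many identifier fragments are dictionary-like English words; this is not the paper's metric, and the word list is a tiny stand-in.

    import re

    WORDS = {"total", "count", "sum", "value", "index", "name", "user", "get"}

    def identifier_word_ratio(source: str) -> float:
        # Split identifiers on underscores and camelCase boundaries,
        # then measure the share of fragments that are known words.
        idents = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source)
        parts = [p.lower() for i in idents for p in re.split(r"_|(?=[A-Z])", i) if p]
        return sum(p in WORDS for p in parts) / max(len(parts), 1)

    readable = "def get_user_name(user): return user.name"
    cryptic  = "def gun(u): return u.n"
    print(identifier_word_ratio(readable), identifier_word_ratio(cryptic))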
{"title":"A tool for measuring program comprehensibility using readability-driven metrics","authors":"Md. Masudur Rahman, Zenun Chowdhury, Raqeebir Rab","doi":"10.1016/j.simpa.2025.100782","DOIUrl":"10.1016/j.simpa.2025.100782","url":null,"abstract":"<div><div>Program comprehensibility plays a significant role in software maintenance by enhancing code readability. Although inherently subjective, various methods to assess comprehensibility have emerged in recent years. Most of these approaches focus on structural characteristics of source code, such as lines of code, number of identifiers, cyclomatic complexity, etc. However, textual elements are equally vital, as these directly influence how humans interpret and understand code. In this paper, we present an approach that evaluates program comprehensibility based on the textual readability of source code — reflecting how it is perceived by human readers. We developed a tool to implement this proposed approach and validated its effectiveness by comparing its output with manual evaluations of code comprehensibility. The results showed complete agreement, indicating that the tool produces comprehensibility scores. This tool can support developers by identifying segments of code that are harder to comprehend, enabling targeted refactoring efforts to improve overall readability.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"25 ","pages":"Article 100782"},"PeriodicalIF":1.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144917254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-01 | DOI: 10.1016/j.simpa.2025.100780
Shiveswarran Ratneswaran, Uthayasanker Thayasivam, Sivakumar Thillaiambalam
The ‘gps2gtfs’ package addresses a critical need for converting raw Global Positioning System (GPS) trajectory data from public transit vehicles into the widely used GTFS (General Transit Feed Specification) format. This transformation enables various software applications to efficiently utilize real-time transit data for purposes such as tracking, scheduling, and arrival time prediction. Developed in Python, ‘gps2gtfs’ employs techniques like geo-buffer mapping, parallel processing, and data filtering to manage challenges associated with raw GPS data, including high volume, discontinuities, and localization errors. This open-source package, available on GitHub and PyPI, enhances the development of intelligent transportation solutions and fosters improved public transit systems globally.
Pub Date: 2025-07-01 | DOI: 10.1016/j.simpa.2025.100781
Christian Ghiaus
ELECTRE Tri-B is a sorting and classification method for multiple-criteria decision-making (MCDM) in which alternatives are assigned to categories. The categories are completely ordered and defined by base (or reference) profiles. The pELECTRE Tri software implements a probabilistic extension of the ELECTRE Tri-B method designed to handle uncertainty in both the decision matrix values and the base profiles delimiting the categories. Its modular architecture enables step-by-step workflows from data input to results output, ensuring flexibility and transparency in the decision-making process. Implemented as a Python module, pELECTRE Tri requires no installation and can be executed locally or online. The software is supported by comprehensive documentation, including tutorials, how-to guides, theoretical explanations, and a user reference manual.
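A Monte Carlo sketch of the probabilistic idea: sample noisy performances and profiles, run a simplified concordance-only pessimistic assignment, and count category frequencies. The weights, profiles, and noise levels are assumptions, and real ELECTRE Tri-B also uses discordance and a credibility threshold.

    import numpy as np

    rng = np.random.default_rng(42)
    weights = np.array([0.5, 0.3, 0.2])
    alt = np.array([0.62, 0.55, 0.70])          # one alternative, three criteria
    profiles = [np.array([0.4, 0.4, 0.4]),      # b1: floor of "medium"
                np.array([0.7, 0.7, 0.7])]      # b2: floor of "good"

    def assign(perf, profs, lam=0.6):
        # Simplified pessimistic rule: highest profile whose weighted
        # concordance (criteria where perf >= profile) reaches lambda.
        cat = 0
        for k, b in enumerate(profs):
            if weights[perf >= b].sum() >= lam:
                cat = k + 1
        return cat

    counts = np.zeros(len(profiles) + 1)
    for _ in range(10_000):
        noisy = alt + rng.normal(0, 0.05, 3)    # uncertainty in the decision matrix
        noisy_profs = [b + rng.normal(0, 0.02, 3) for b in profiles]
        counts[assign(noisy, noisy_profs)] += 1
    print("category probabilities:", counts / counts.sum())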
{"title":"pELECTRE Tri: A computational framework and Python module for probabilistic ELECTRE Tri-B multiple-criteria decision-making","authors":"Christian Ghiaus","doi":"10.1016/j.simpa.2025.100781","DOIUrl":"10.1016/j.simpa.2025.100781","url":null,"abstract":"<div><div>ELECTRE Tri-B is a sorting and classification method for multiple-criteria decision-making (MCDM) in which alternatives are assigned to categories. The categories are completely ordered and defined by base (or reference) profiles. The <em>pELECTRE Tri</em> software implements a probabilistic extension of the ELECTRE Tri-B method designed to handle uncertainty in both the decision matrix values and the base profiles delimiting the categories. Its modular architecture enables step-by-step workflows from data input to results output, ensuring flexibility and transparency in the decision-making process. Implemented as a Python module, <em>pELECTRE Tri</em> requires no installation and can be executed locally or online. The software is supported by comprehensive documentation, including tutorials, how-to guides, theoretical explanations, and a user reference manual.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"25 ","pages":"Article 100781"},"PeriodicalIF":1.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}