Pub Date: 2024-10-30 | DOI: 10.1016/j.softx.2024.101948
Diana Borrego, Irene Barba, Carmelo Del Valle, Miguel Toro
This paper introduces the DPGraphJ package, a collection of reusable Java functions for solving optimisation problems with a dynamic programming algorithm. The algorithm is based on a recursive schema that follows a top-down approach and uses memoisation. It is a generic, efficient, and reusable software component, developed with special attention to good software-design practices. To use DPGraphJ, the problem to be solved must be modelled as an AND/OR graph. The package includes five academic case studies with detailed comments. We believe our proposal can be helpful to several kinds of users, such as students, researchers, and practitioners.
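The abstract does not show DPGraphJ's actual API, but the technique it names — a top-down recursive schema with memoisation over an implicit AND/OR graph of states — can be sketched generically. The following is an illustrative Python sketch (DPGraphJ itself is Java) for the 0/1 knapsack problem; all names are hypothetical, not DPGraphJ's interface:

```python
from functools import lru_cache

# Hypothetical illustration (not DPGraphJ's API): top-down dynamic
# programming with memoisation for 0/1 knapsack. Each state (i, cap) is a
# node of an implicit AND/OR-style graph; the take/skip alternatives are
# the OR branches, and lru_cache provides the memoisation table.
def knapsack(weights, values, capacity):
    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == len(weights):           # base case: no items left
            return 0
        skip = best(i + 1, cap)         # branch 1: skip item i
        if weights[i] <= cap:           # branch 2: take item i if it fits
            take = values[i] + best(i + 1, cap - weights[i])
            return max(skip, take)
        return skip
    return best(0, capacity)

print(knapsack((2, 3, 4), (3, 4, 5), 5))  # → 7 (items 0 and 1)
```

Because every state is computed at most once, the memoised recursion runs in time proportional to the number of states rather than the number of recursive paths.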
DPGraphJ: A Java package for the implementation of dynamic programming algorithms. SoftwareX, vol. 28, Article 101948.
We propose GView (Generic View), a tool tailored to assist the investigation of possible attack vectors by providing guided analysis for a broad range of file types, using automatic artifact identification, extraction, inference and coherent correlation, and meaningful, intuitive views at different levels of granularity with respect to the revealed information. GView simplifies the analysis of every payload in a complex attack, streamlining the workflow for security researchers and increasing the accuracy of the analysis. The 'generic' aspect derives from the fact that it accommodates various file types and features multiple visualization modes (which can be configured automatically for each file type). Our results show that GView significantly reduces the analysis time of an attack compared to conventional forensics tools.
GView: A versatile assistant for security researchers — Raul Zaharia, Dragoş Gavriluţ, Gheorghiţă Mutu, Dorel Lucanu. SoftwareX, vol. 28, Article 101940. Pub Date: 2024-10-30 | DOI: 10.1016/j.softx.2024.101940
Pub Date: 2024-10-30 | DOI: 10.1016/j.softx.2024.101946
Piotr Lechowicz , Aleksandra Knapińska , Adam Włodarczyk , Krzysztof Walkowiak
Traffic Weaver is a Python package developed to generate a semi-synthetic signal (time series) with finer granularity from an averaged time series, such that, upon averaging, the generated signal closely matches the original. The key steps used to generate the signal include oversampling, recreating values from the averages with a given strategy, stretching to match the integral of the original time series, interpolating, smoothing, repeating, applying a trend, and adding noise. The primary motivation behind Traffic Weaver is to furnish semi-synthetic time-varying traffic in telecommunication networks, facilitating the development and validation of traffic prediction models and aiding the deployment of network optimization algorithms tailored to time-varying traffic.
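The core idea — expanding an averaged series to finer granularity while preserving each window's average — can be illustrated with a minimal sketch. This is not the Traffic Weaver API; it is a simplified stand-in that oversamples each averaging window with a linear ramp and then rescales the window so its mean matches the original value (assuming non-negative traffic values):

```python
# Illustrative sketch (not Traffic Weaver's actual API): recreate a
# finer-granularity series from window averages so that re-averaging
# each window reproduces the input exactly.
def recreate(avg_series, samples_per_window):
    fine = []
    for i, a in enumerate(avg_series):
        nxt = avg_series[i + 1] if i + 1 < len(avg_series) else a
        # linear ramp from this window's average towards the next one
        window = [a + (nxt - a) * k / samples_per_window
                  for k in range(samples_per_window)]
        # stretch so the window mean matches the original average
        mean = sum(window) / samples_per_window
        scale = a / mean if mean else 1.0
        fine.extend(v * scale for v in window)
    return fine

fine = recreate([10.0, 20.0, 15.0], 4)
# averaging each block of 4 samples recovers 10.0, 20.0, 15.0
```

Traffic Weaver's recreation strategies, smoothing, and noise injection are more sophisticated, but they preserve the same invariant: the averaged output matches the averaged input.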
Traffic Weaver: Semi-synthetic time-varying traffic generator based on averaged time series. SoftwareX, vol. 28, Article 101946.
Digitalisation is crucial for industries since it improves the efficiency of operations. Small and Medium-sized Enterprises (SMEs) face additional hurdles in their operations due to limited resources and weaker networks, which makes digitalisation all the more necessary for them to stay competitive. This study aids the digital transformation of SMEs through A matchmakIng tool for pairing SMEs with suitabLE digital solutions and their providers (AISLE). AISLE conducts a systematic mapping and matching of the non-technical and technical requirements/functionalities an SME needs from a digital solution. It relies on a list of non-technical characteristics desired for each digital solution, identified through semi-structured interviews with industry experts, and on matching rules adopted from the literature. The AISLE tool was tested by an SME, which demonstrated its effectiveness in identifying digital solutions in a practical and easy-to-use manner.
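The abstract does not specify AISLE's matching rules, which come from expert interviews and the literature. As a loose, hypothetical illustration of requirement/solution matchmaking in general, one could score each candidate solution by the fraction of the SME's required functionalities it covers; every name below is invented for illustration:

```python
# Hypothetical sketch of requirement/solution matchmaking (not AISLE's
# actual rules): rank candidate digital solutions by the fraction of the
# SME's required functionalities each one covers.
def rank_solutions(required, solutions):
    scored = [(len(required & features) / len(required), name)
              for name, features in solutions.items()]
    return sorted(scored, reverse=True)

required = {"inventory tracking", "barcode scanning", "cloud hosting"}
solutions = {
    "Solution A": {"inventory tracking", "cloud hosting", "invoicing"},
    "Solution B": {"barcode scanning"},
}
print(rank_solutions(required, solutions))
# Solution A covers 2/3 of the requirements, Solution B covers 1/3
```

A real matchmaking tool would additionally weight non-technical characteristics (cost, support, provider maturity), which is where AISLE's interview-derived criteria come in.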
AISLE: A matchmaking tool for pairing SMEs with digital solutions — Gokcen Yilmaz, Francisco Raziel Treviño Almaguer, Gregory Hawkridge, Duncan McFarlane. SoftwareX, vol. 28, Article 101941. Pub Date: 2024-10-24 | DOI: 10.1016/j.softx.2024.101941
Pub Date: 2024-10-21 | DOI: 10.1016/j.softx.2024.101934
Parham Dehghani, Matthew J. DiDomizio
HFITS is a software tool that supports experimental measurements of heat flux over planar surfaces using infrared thermography. This technique enables spatially and temporally resolved heat flux measurements at a higher resolution than arrays of traditional point sensors. The target audience is researchers and engineers in thermal engineering disciplines. Developed in Python with a graphical front end, the software is accessible both to advanced users and to users with only a fundamental knowledge of complex thermogram manipulation and heat transfer analysis methods. HFITS consists of two main components: pre-processing of infrared thermograms (obtained from heat transfer experiments) and inverse heat transfer analysis (to deduce heat flux over the planar surface in those experiments). The software offers comprehensive functionality, including metadata handling, a graphical interface for selecting regions of interest, the ability to import additional temperature measurements to enhance convective heat transfer estimates, and the export of both computed field data and contour videos. This open-source software broadens access to advanced experimental and analytical techniques to support thermal analyses in a wide range of engineering and research applications.
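To make the inverse-analysis step concrete: for a thin plate, a lumped energy balance lets the incident heat flux at each pixel be estimated from its temperature history. This is a simplified sketch under stated assumptions (thin plate, known material properties, known convective coefficient and emissivity); HFITS's actual formulation may differ:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Simplified lumped inverse heat-transfer sketch (not necessarily the HFITS
# formulation): for a thin plate of density rho (kg/m^3), specific heat c
# (J/(kg K)), and thickness delta (m), estimate the net incident heat flux
# (W/m^2) from a per-pixel temperature history T (K) sampled every dt
# seconds, balancing storage, convective, and radiative terms.
def incident_flux(T, dt, rho, c, delta, h, eps, T_inf):
    q = []
    for k in range(1, len(T)):
        dTdt = (T[k] - T[k - 1]) / dt                # finite-difference storage term
        storage = rho * c * delta * dTdt
        convection = h * (T[k] - T_inf)
        radiation = eps * SIGMA * (T[k] ** 4 - T_inf ** 4)
        q.append(storage + convection + radiation)
    return q
```

Applying this per pixel of a thermogram sequence yields the spatially and temporally resolved heat-flux field the abstract describes; a plate at ambient temperature with no heating correctly yields zero flux.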
HFITS: An analysis tool for calculating heat flux to planar surfaces using infrared thermography. SoftwareX, vol. 28, Article 101934.
Pub Date: 2024-10-21 | DOI: 10.1016/j.softx.2024.101933
Darlan Noetzold , Anubis Graciela de Moraes Rossetto , Luis Augusto Silva , Paul Crocker , Valderi Reis Quietinho Leithardt
This research presents software for empirically analyzing Java Virtual Machine (JVM) parameter configurations to enhance web application performance. Using tools such as JMeter and cAdvisor in a controlled hardware environment, it collects and analyzes performance metrics. JVM settings tailored for high request loads improved CPU efficiency by 20% and reduced memory usage by 15% compared to standard configurations. For I/O-intensive operations with large files, optimized JVM configurations decreased response times by 30% and CPU usage by 25%. These findings highlight the impact of tailored JVM settings on application responsiveness and resource management, providing valuable guidance for developers and engineers.
JVM optimization: An empirical analysis of JVM configurations for enhanced web application performance. SoftwareX, vol. 28, Article 101933.
Pub Date: 2024-10-21 | DOI: 10.1016/j.softx.2024.101923
Ciro Benito Raggio , Paolo Zaffino , Maria Francesca Spadea
Limited medical image data hinders the training of deep learning (DL) models in the biomedical field. Image augmentation can reduce the data-scarcity problem by generating variations of existing images. However, currently implemented methods require coding, excluding non-programmer users from this opportunity.
We therefore present ImageAugmenter, an easy-to-use and open-source module for the 3D Slicer image computing platform. It offers a simple and intuitive interface for applying over 20 simultaneous MONAI Transforms (spatial, intensity, etc.) to medical image datasets, all without programming.
ImageAugmenter makes medical image augmentation accessible, enabling a wider range of users to improve the performance of DL models in medical image analysis by increasing the number of samples available for training.
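The idea behind such augmentation — generating label-preserving variants of each volume — can be shown with a minimal stdlib-only sketch. ImageAugmenter itself delegates to MONAI Transforms; the function below is an invented stand-in using random axis flips and additive Gaussian noise on a nested-list 3D volume:

```python
import random

# Minimal illustration of image augmentation (not ImageAugmenter's or
# MONAI's API): produce a label-preserving variant of a 3D volume via
# random axis flips and voxel-wise zero-mean Gaussian noise.
def augment(volume, flip_prob=0.5, noise_std=0.01, rng=random):
    if rng.random() < flip_prob:                   # flip along the first axis
        volume = volume[::-1]
    if rng.random() < flip_prob:                   # flip along the second axis
        volume = [plane[::-1] for plane in volume]
    return [[[v + rng.gauss(0.0, noise_std) for v in row]
             for row in plane] for plane in volume]

vol = [[[0.0, 1.0], [2.0, 3.0]], [[4.0, 5.0], [6.0, 7.0]]]
variants = [augment(vol) for _ in range(5)]  # five augmented training samples
```

Each call yields a slightly different volume of the same shape, which is exactly how augmentation multiplies the effective size of a scarce training set.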
ImageAugmenter: A user-friendly 3D Slicer tool for medical image augmentation. SoftwareX, vol. 28, Article 101923.
Pub Date: 2024-10-21 | DOI: 10.1016/j.softx.2024.101932
Nibras Abo Alzahab , Giulia Rafaiani , Massimo Battaglioni , Ana Cavalli , Franco Chiaraluce , Marco Baldi
As biometric authentication is increasingly integrated into cutting-edge technology, it is interesting to study how its trust and interoperability across multiple devices can be increased. Both can be enhanced through decentralization, particularly by using blockchain technology. Since transaction data on the blockchain are open and readable by all parties, a high level of user trust is achieved, enhancing transparency and interoperability across the network. The software we propose bridges the gap between the security of biometric information and the transparency of blockchain and decentralized technologies. Specifically, the software is a decentralized application (dApp), based on the Ethereum blockchain, which relies on a smart contract to manage its logic. The smart contract employs the fuzzy commitment scheme (FCS) to securely protect biometric templates while maintaining fault tolerance thanks to error-correcting codes (ECC). This mechanism ensures data integrity within a transparent, decentralized framework. The proposed dApp supports both the enrollment and authentication processes, and its smart contract manages access control within this decentralized infrastructure. In practical applications, the proposed system can demonstrate its potential as a secure and decentralized alternative to traditional centralized systems.
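The fuzzy commitment scheme the abstract names works roughly as follows: at enrollment, a random key is ECC-encoded into a codeword, XORed with the biometric template to form public helper data, and only a hash of the key is stored (here, on-chain); at authentication, a fresh, slightly noisy template recovers the codeword up to a few bit errors, which the ECC corrects. The sketch below is a toy version with a 3x repetition code in place of a real ECC; all names are illustrative, not the smart contract's interface:

```python
import hashlib
import secrets

# Toy fuzzy commitment scheme (FCS) sketch with a 3x repetition code
# standing in for a real error-correcting code. Not the dApp's actual
# contract interface; bit-lists stand in for real biometric templates.
def ecc_encode(bits):                  # repetition code: each key bit 3 times
    return [b for b in bits for _ in range(3)]

def ecc_decode(bits):                  # majority vote per 3-bit group
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def enroll(template):
    key = [secrets.randbelow(2) for _ in range(len(template) // 3)]
    codeword = ecc_encode(key)
    helper = [c ^ t for c, t in zip(codeword, template)]   # public helper data
    commitment = hashlib.sha256(bytes(key)).hexdigest()    # stored on-chain
    return helper, commitment

def authenticate(noisy_template, helper, commitment):
    codeword = [h ^ t for h, t in zip(helper, noisy_template)]
    key = ecc_decode(codeword)         # ECC absorbs small biometric noise
    return hashlib.sha256(bytes(key)).hexdigest() == commitment

template = [1, 0, 1, 1, 0, 1, 0, 0, 1]          # 9-bit toy "biometric"
helper, commit = enroll(template)
noisy = template.copy(); noisy[0] ^= 1          # one flipped bit: still accepted
print(authenticate(noisy, helper, commit))      # → True
```

Neither the template nor the key appears in the public data: the helper reveals the codeword only XOR-masked, and the commitment is a one-way hash, which is what makes storing it on a transparent blockchain tenable.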
BiometricIdentity dApp: Decentralized biometric authentication based on fuzzy commitment and blockchain. SoftwareX, vol. 28, Article 101932.
Pub Date: 2024-10-19 | DOI: 10.1016/j.softx.2024.101920
Piotr Jurkiewicz
This article introduces the latest version of the flow-models framework for IP network flow analysis. Key improvements include support for Dask to enable parallel computing, dataset reduction techniques for efficient training, and new modules for entropy analysis and granular flow table simulations. The codebase has been refined, with improved documentation and the incorporation of automated linting via ruff. The framework is compatible with forthcoming releases of Python and NumPy, making it a useful resource for researchers and professionals involved in network flow analysis and machine-learning-driven traffic classification.
flow-models 2.2: Efficient and parallel elephant flow modeling with machine learning. SoftwareX, vol. 28, Article 101920.
Pub Date: 2024-10-18 | DOI: 10.1016/j.softx.2024.101926
José Gerardo Tamez-Peña
Multicollinearity among observed variables may have a large impact on statistical modeling and on the discovery of associations between the observed variables and clinical outcomes. A viable method to address multicollinearity is to find a suitable linear transform that mitigates the degree of collinearity. The Iterative Linear Association Analysis (ILAA) method was developed to explore the associations among observed variables and to return a suitable linear transformation matrix, based on variable residualization, that effectively mitigates multicollinearity by controlling the maximum correlation present in the transformed dataset. This paper presents the software implementation of the ILAA method as an R function inside the FRESA.CAD 3.4.7 R package, providing researchers with a simple tool to explore tabular data in a new interpretable latent space.
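The core mechanism — iteratively residualizing variables until the maximum pairwise correlation drops below a threshold — can be sketched compactly. This is a conceptual Python illustration of the idea, not the ILAA algorithm as implemented in FRESA.CAD (which is R and returns a full transformation matrix):

```python
import math

def corr(x, y):
    # Pearson correlation of two equal-length sequences
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Conceptual residualization sketch (not the ILAA implementation): while
# any pair of columns is correlated above max_corr, replace one column
# with its residual after a simple linear regression on the other.
def residualize(cols, max_corr=0.3):
    cols = [list(c) for c in cols]
    for _ in range(100):                       # safety bound on iterations
        worst, wi, wj = 0.0, -1, -1
        for i in range(len(cols)):
            for j in range(i + 1, len(cols)):
                r = abs(corr(cols[i], cols[j]))
                if r > worst:
                    worst, wi, wj = r, i, j
        if worst <= max_corr:
            break
        x, y = cols[wi], cols[wj]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
                / sum((a - mx) ** 2 for a in x))
        # replace y with its residual after regressing on x
        cols[wj] = [b - my - beta * (a - mx) for a, b in zip(x, y)]
    return cols
```

Because each residualization step is a linear operation on the columns, the whole procedure composes into a single linear transformation matrix, which is what ILAA returns so the transform stays interpretable.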
FRESA.CAD::ILAA: Estimating the exploratory residualization transform. SoftwareX, vol. 28, Article 101926.