alPCA: An automatic software for the selection and combination of forecasts in monthly series
Pub Date: 2024-04-09 | DOI: 10.1016/j.simpa.2024.100644 | Software Impacts, Vol. 20, Article 100644
Carlos García-Aroca, Ma. Asunción Martínez-Mayoral, Javier Morales-Socuéllamos, José Vicente Segura-Heras
alPCA is a software package written in R that automatically combines the predictions produced by the collection of individual forecasting methods it integrates. It uses three categories of weights derived from PCA scores, together with decision rules, to determine the optimal combination of these methods. alPCA serves as an automated component within an artificial intelligence toolkit for monthly time series processing, with the objective of obtaining the best forecast.
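The abstract does not spell out how PCA scores become combination weights. The minimal numpy sketch below shows one plausible scheme under stated assumptions (weights proportional to the absolute loadings of the first principal component of an in-sample forecast matrix); it is not alPCA's actual decision rules, and all data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-sample forecasts from 4 individual methods for 36 months.
actual = 100 + np.cumsum(rng.normal(0, 2, 36))
forecasts = actual[:, None] + rng.normal(0, [1.0, 1.5, 2.0, 3.0], (36, 4))

# PCA of the centred forecast matrix via SVD.
centred = forecasts - forecasts.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)

# One way to turn PCA output into combination weights:
# absolute loadings of the first principal component, normalised to sum to one.
loadings = np.abs(vt[0])
weights = loadings / loadings.sum()

combined = forecasts @ weights            # weighted combination of the methods
rmse = np.sqrt(np.mean((combined - actual) ** 2))
print("weights:", np.round(weights, 3), "in-sample RMSE:", round(rmse, 3))
```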
{"title":"alPCA: An automatic software for the selection and combination of forecasts in monthly series","authors":"Carlos García-Aroca, Ma. Asunción Martínez-Mayoral, Javier Morales-Socuéllamos, José Vicente Segura-Heras","doi":"10.1016/j.simpa.2024.100644","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100644","url":null,"abstract":"<div><p>alPCA is a software coded in R and designed to automatically combine predictions from a collection of individual forecasting methods that integrate it. It employs three categories of weights derived from the PCA scores, and decision rules to determine the optimal combination of these methods. alPCA serves as an automated component within the artificial intelligence toolkit for monthly time series processing with the objective of obtaining the best forecast.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100644"},"PeriodicalIF":2.1,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000320/pdfft?md5=7b465d8975048ac2c3a64c040483d585&pid=1-s2.0-S2665963824000320-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140543273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RAW-HF framework to monitor and allocate resources in real time for database management systems
Pub Date: 2024-04-08 | DOI: 10.1016/j.simpa.2024.100643 | Software Impacts, Vol. 20, Article 100643
Mayank Patel, Minal Bhise
Most websites and applications are hosted on a public or private cloud, and in-house deployments also require careful management of system resources. Researchers have started considering the resources utilized by application workloads to estimate and reduce application running costs. The RAW-HF (Resource Availability and Workload aware Hybrid Framework) analyzes two types of resource utilization: (1) system resource utilization and (2) the resources utilized by each query task. The RAW-HF code provides a lightweight solution to monitor and analyze system and DBMS process resource utilization. It filters the required data in real time to find available resources and allocates query-specific resources based on query complexity, while itself using less than 2% of CPU resources.
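RAW-HF itself is not shown here; as a rough illustration of the kind of lightweight, real-time monitoring the abstract describes, the sketch below samples system-wide and per-process (e.g., a DBMS server process) utilization with the psutil library. The process name "postgres" and the sampling interval are assumptions for illustration only.

```python
import time
import psutil

DB_PROCESS_NAME = "postgres"   # assumed DBMS process name; adjust as needed

def db_processes(name=DB_PROCESS_NAME):
    """Return all running processes whose name matches the DBMS server."""
    return [p for p in psutil.process_iter(["name"])
            if p.info["name"] and name in p.info["name"]]

def sample(interval=1.0):
    """Take one system-wide and per-DBMS-process utilization sample."""
    procs = db_processes()
    for p in procs:              # prime per-process CPU counters
        p.cpu_percent(None)
    sys_cpu = psutil.cpu_percent(interval=interval)   # blocks for `interval`
    sys_mem = psutil.virtual_memory().percent
    db_cpu = sum(p.cpu_percent(None) for p in procs)
    db_mem = sum(p.memory_percent() for p in procs)
    return {"system_cpu": sys_cpu, "system_mem": sys_mem,
            "db_cpu": db_cpu, "db_mem": db_mem}

if __name__ == "__main__":
    for _ in range(3):           # a short monitoring loop
        print(sample())
        time.sleep(1.0)
```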
{"title":"RAW-HF framework to monitor and allocate resources in real time for database management systems","authors":"Mayank Patel , Minal Bhise","doi":"10.1016/j.simpa.2024.100643","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100643","url":null,"abstract":"<div><p>Most websites and applications are hosted on a public or private cloud. In-house deployments also require dealing with system resources. Researchers have started considering resources utilized by application workloads to estimate and reduce application running costs. RAW-HF (Resource Availability & Workload aware Hybrid Framework) framework tries to analyze two types of resource utilization; (1) System Resource Utilization and (2) Resource Utilized by each Query task. The RAW-HF code tries to provide a lightweight solution to monitor & analyze the system and DBMS process resource utilization. It filters the required data in real time to find available resources and allocate query-specific resources based on their complexity by utilizing less than 2% CPU resources.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100643"},"PeriodicalIF":2.1,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000319/pdfft?md5=cf5181eea96c0c7d3f6ac66866a6077c&pid=1-s2.0-S2665963824000319-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140605813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OptDNN: Automatic deep neural networks optimizer for edge computing
Pub Date: 2024-04-05 | DOI: 10.1016/j.simpa.2024.100641 | Software Impacts, Vol. 20, Article 100641
Luca Giovannesi, Gabriele Proietti Mattia, Roberto Beraldi
Deep neural networks (DNNs) are widely used for complex tasks such as image and signal processing, and there is increasing demand for deploying them on Internet of Things (IoT) devices. For these devices, optimizing DNN models is a necessary task. Standard optimization approaches generally require specialists to manually fine-tune hyper-parameters to find a good trade-off between efficiency and accuracy. In this paper, we propose OptDNN, a software tool that employs innovative and automatic approaches to determine optimal hyper-parameters for pruning, clustering, and quantization. The models optimized by OptDNN have a smaller memory footprint, faster inference time, and accuracy similar to the original models.
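OptDNN's actual search procedure is not reproduced here; the sketch below only illustrates the general idea of automatically choosing a pruning hyper-parameter, using PyTorch's built-in L1 magnitude pruning and a simple output-deviation budget on random probe inputs. The model, tolerance, and candidate sparsity levels are hypothetical.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Hypothetical small model standing in for a DNN to be deployed on an IoT device.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(256, 32)                      # random probe inputs
with torch.no_grad():
    reference = model(x)

TOLERANCE = 0.05                              # allowed relative output deviation

def prune_copy(amount):
    """Return a copy of the model with L1 magnitude pruning applied to all Linear layers."""
    pruned = copy.deepcopy(model)
    for module in pruned.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return pruned

best = 0.0
for amount in [0.2, 0.4, 0.6, 0.8]:           # candidate sparsity levels
    with torch.no_grad():
        deviation = (prune_copy(amount)(x) - reference).norm() / reference.norm()
    if deviation.item() <= TOLERANCE:
        best = amount                          # keep the most aggressive level within budget

print(f"selected pruning amount: {best}")
```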
{"title":"OptDNN: Automatic deep neural networks optimizer for edge computing","authors":"Luca Giovannesi, Gabriele Proietti Mattia, Roberto Beraldi","doi":"10.1016/j.simpa.2024.100641","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100641","url":null,"abstract":"<div><p>DNNs are widely used for complex tasks like image and signal processing, and they are in increasing demand for implementation on Internet of Things (IoT) devices. For these devices, optimizing DNN models is a necessary task. Generally, standard optimization approaches require specialists to manually fine-tune hyper-parameters to find a good trade-off between efficiency and accuracy. In this paper, we propose OptDNN, a software that employs innovative and automatic approaches to determine optimal hyper-parameters for pruning, clustering, and quantization. The models optimized by OptDNN have a smaller memory footprint, faster inference time, and a similar accuracy to the original models.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100641"},"PeriodicalIF":2.1,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000290/pdfft?md5=9408edc33cd6715a12afa1a8f06365fc&pid=1-s2.0-S2665963824000290-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140543982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
gtrendsAPI: An R wrapper for the Google Trends API
Pub Date: 2024-04-01 | DOI: 10.1016/j.simpa.2024.100634 | Software Impacts, Vol. 20, Article 100634
Ricardo A. Correia
Search engine data is a prime source of insights into information-seeking behaviour, and such information is instrumental for the scientific study of human culture and behaviour. The gtrendsAPI R software package aims to facilitate programmatic access to data available from the Google Trends API. Here, I introduce the functions available through this software package and provide worked examples of how to use it. I also discuss some of the potential research applications and caveats of this software and the data available through it.
{"title":"gtrendsAPI: An R wrapper for the Google Trends API","authors":"Ricardo A. Correia","doi":"10.1016/j.simpa.2024.100634","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100634","url":null,"abstract":"<div><p>Search engine data is a prime source of insights on information-seeking behaviour and such information is instrumental for the scientific study of human culture and behaviour. The gtrendsAPI R software package aims to facilitate programmatic access to data available from the Google Trends API. Here, I introduce the functions available through this software package and provide worked examples of how to use it. I also discuss some the potential research applications and caveats of this software and the data available through it.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100634"},"PeriodicalIF":2.1,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000228/pdfft?md5=38103b321777d6eb8d5b2d3503237b9b&pid=1-s2.0-S2665963824000228-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140350987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NGPCA: Clustering of high-dimensional and non-stationary data streams
Pub Date: 2024-03-29 | DOI: 10.1016/j.simpa.2024.100635 | Software Impacts, Vol. 20, Article 100635
Nico Migenda, Ralf Möller, Wolfram Schenck
Neural Gas Principal Component Analysis (NGPCA) is an online clustering algorithm. An NGPCA model is a mixture of local PCA units and combines dimensionality reduction with vector quantization. Recently, NGPCA has been extended with an adaptive learning rate and an adaptive potential function for accurate and efficient clustering of high-dimensional and non-stationary data streams. The algorithm achieved highly competitive results on clustering benchmark datasets compared to the state of the art. Our implementation of the algorithm was developed in MATLAB and is available as open source. This code can be easily applied to the clustering of stationary and non-stationary data.
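The NGPCA code itself (including its adaptive learning rate and potential function) is in MATLAB; the Python sketch below only illustrates the basic ingredients the abstract names, namely neural-gas-style rank-based updates of unit centres plus a local PCA per unit, under simplified, fixed learning parameters and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stream: three Gaussian blobs in 5 dimensions.
centres_true = rng.normal(0, 5, (3, 5))
stream = np.concatenate([c + rng.normal(0, 0.5, (400, 5)) for c in centres_true])
rng.shuffle(stream)

K, LAM, ETA = 3, 1.0, 0.05            # units, neighbourhood range, learning rate (fixed here)
units = stream[rng.choice(len(stream), K, replace=False)].copy()
cov = np.stack([np.eye(5) for _ in range(K)])   # running local covariance per unit

for x in stream:                       # online (single-pass) processing
    dists = np.linalg.norm(units - x, axis=1)
    ranks = np.argsort(np.argsort(dists))        # neural-gas rank of each unit
    h = np.exp(-ranks / LAM)                     # rank-based neighbourhood factor
    units += ETA * h[:, None] * (x - units)      # move every unit toward the sample
    w = int(np.argmin(dists))                    # winner keeps a local covariance estimate
    d = (x - units[w])[:, None]
    cov[w] = (1 - ETA) * cov[w] + ETA * (d @ d.T)

# Local PCA: principal directions of each unit's covariance estimate.
for k in range(K):
    eigvals, eigvecs = np.linalg.eigh(cov[k])
    print(f"unit {k}: centre ~ {np.round(units[k], 2)}, "
          f"top local PC variance ~ {eigvals[-1]:.3f}")
```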
{"title":"NGPCA: Clustering of high-dimensional and non-stationary data streams","authors":"Nico Migenda , Ralf Möller , Wolfram Schenck","doi":"10.1016/j.simpa.2024.100635","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100635","url":null,"abstract":"<div><p>Neural Gas Principal Component Analysis (NGPCA) is an online clustering algorithm. An NGPCA model is a mixture of local PCA units and combines dimensionality reduction with vector quantization. Recently, NGPCA has been extended with an adaptive learning rate and an adaptive potential function for accurate and efficient clustering of high-dimensional and non-stationary data streams. The algorithm achieved highly competitive results on clustering benchmark datasets compared to the state of the art. Our implementation of the algorithm was developed in MATLAB and is available as open source. This code can be easily applied to the clustering of stationary and non-stationary data.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100635"},"PeriodicalIF":2.1,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266596382400023X/pdfft?md5=6784f267af3874ee2a02d381441cd5f4&pid=1-s2.0-S266596382400023X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140344005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RouteRecoverer: A tool to create routes and recover noisy license plate number data
Pub Date: 2024-03-28 | DOI: 10.1016/j.simpa.2024.100636 | Software Impacts, Vol. 20, Article 100636
Alberto Durán-López, Daniel Bolaños-Martinez, Luisa Delgado-Márquez, Maria Bermudez-Edo
License Plate Recognition (LPR) sensors often fail to detect vehicles or to identify all plate numbers correctly. This noise results in missing digits or an incomplete vehicle route, for example, a route missing one node (LPR camera). To address these issues, RouteRecoverer creates the route followed by a vehicle while efficiently recovering absent LPR plate digits and filling gaps in routes. For example, when a vehicle is detected by LPR cameras A and C and the only route between them passes through B, our tool seamlessly retrieves the missing information, improving the data output.
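As a toy illustration of the gap-filling idea in the example above (not the RouteRecoverer implementation), the sketch below fills in a missing camera between two consecutive detections whenever the camera graph offers exactly one shortest path between them. The camera graph and detections are hypothetical.

```python
from collections import deque

# Hypothetical LPR camera graph: which cameras are directly connected by road.
GRAPH = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}

def shortest_paths(graph, start, goal):
    """Breadth-first enumeration of all shortest paths from start to goal."""
    queue, found, best = deque([[start]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break
        node = path[-1]
        if node == goal:
            best = len(path)
            found.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return found

def recover_route(graph, detections):
    """Insert missing cameras between consecutive detections when unambiguous."""
    route = [detections[0]]
    for a, b in zip(detections, detections[1:]):
        paths = shortest_paths(graph, a, b)
        if len(paths) == 1:
            route.extend(paths[0][1:])      # unique path: fill the gap
        else:
            route.append(b)                 # ambiguous: keep the raw detection
    return route

print(recover_route(GRAPH, ["A", "C"]))     # -> ['A', 'B', 'C']
```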
{"title":"RouteRecoverer: A tool to create routes and recover noisy license plate number data","authors":"Alberto Durán-López , Daniel Bolaños-Martinez , Luisa Delgado-Márquez , Maria Bermudez-Edo","doi":"10.1016/j.simpa.2024.100636","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100636","url":null,"abstract":"<div><p>License Plate Recognition (LPR) sensors often fail to detect vehicles or to identify all plate numbers correctly. This noise results in missing digits or an incomplete route of a vehicle, for example, missing one node (LPR camera) in the route. Addressing these issues, RouteRecoverer creates the route followed by a vehicle while efficiently recovering absent LPR plate digits, and filling gaps in routes. For example, when a vehicle is detected by LPR A and C, with the only route between them being B, our tool seamlessly retrieves the missing information, improving the data output.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100636"},"PeriodicalIF":2.1,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000241/pdfft?md5=e306abe4e84fdb1089a0d95afe7b85ce&pid=1-s2.0-S2665963824000241-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140344003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SPHMPS 1.0: A Smoothed-Particle-Hydrodynamics Multi-Physics Solver
Pub Date: 2024-03-28 | DOI: 10.1016/j.simpa.2024.100640 | Software Impacts, Vol. 20, Article 100640
Iman Farahbakhsh, Benyamin Barani Nia, Erkan Oterkus
SPHMPS 1.0, developed within a Lagrangian framework, offers a robust solution for modeling multi-structure collision problems involving large plastic deformation and inherent thermal effects. Utilizing its innovative algorithm, SPHMPS 1.0 emerges as a versatile tool for researchers in the field of fluid-rigid-elastic structure interactions. By providing a comprehensive framework tailored to address these complex phenomena, SPHMPS 1.0 facilitates reproducible, extendable, and efficient research endeavors. Implemented in Fortran, its flexible algorithm ensures adaptability to a wide range of applications requiring solutions for fluid-rigid-elastic structure interaction problems.
{"title":"SPHMPS 1.0: A Smoothed-Particle-Hydrodynamics Multi-Physics Solver","authors":"Iman Farahbakhsh , Benyamin Barani Nia , Erkan Oterkus","doi":"10.1016/j.simpa.2024.100640","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100640","url":null,"abstract":"<div><p><span>SPHMPS 1.0</span>, developed within a Lagrangian framework, offers a robust solution for modeling multi-structure collision problems involving large plastic deformation and inherent thermal effects. Utilizing its innovative algorithm, <span>SPHMPS 1.0</span> emerges as a versatile tool for researchers in the field of fluid-rigid-elastic structure interactions. By providing a comprehensive framework tailored to address these complex phenomena, <span>SPHMPS 1.0</span> facilitates reproducible, extendable, and efficient research endeavors. Implemented in Fortran, its flexible algorithm ensures adaptability to a wide range of applications requiring solutions for fluid-rigid-elastic structure interaction problems.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100640"},"PeriodicalIF":2.1,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000289/pdfft?md5=ac912c805094a33ebeadeadf162d6861&pid=1-s2.0-S2665963824000289-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140344004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LATTIN: A Python-based tool for Lagrangian atmospheric moisture and heat tracking
Pub Date: 2024-03-28 | DOI: 10.1016/j.simpa.2024.100638 | Software Impacts, Vol. 20, Article 100638
Albenis Pérez-Alarcón, José C. Fernández-Alvarez, Raquel Nieto, Luis Gimeno
LATTIN is a Python-based tool for Lagrangian atmospheric moisture and heat tracking. It can read input data from the Lagrangian FLEXPART and FLEXPART-WRF models. Features include parallel reading of atmospheric parcel trajectories and user-defined threshold criteria. It complements and improves existing tools by including several tracking approaches and by not depending on the horizontal resolution of the input or output grid. LATTIN provides a compact tool for Lagrangian atmospheric moisture and heat tracking that will support a wide range of research to understand future changes in the hydrological cycle and extreme temperature events.
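LATTIN's tracking approaches are not reproduced here; the numpy sketch below only illustrates the basic Lagrangian moisture-tracking idea of flagging uptake locations where a parcel's specific humidity increases by more than a user-defined threshold between trajectory time steps. The trajectory values and the threshold are hypothetical.

```python
import numpy as np

# Hypothetical trajectory of one parcel: specific humidity (kg/kg) every 6 hours,
# plus the parcel's position at each step.
q   = np.array([0.0062, 0.0061, 0.0065, 0.0071, 0.0070, 0.0078])
lat = np.array([43.1, 42.0, 40.8, 39.5, 38.9, 37.6])
lon = np.array([-8.2, -9.5, -11.0, -12.4, -13.1, -14.8])

DQ_THRESHOLD = 0.0002   # user-defined uptake threshold (kg/kg per 6 h), illustrative

dq = np.diff(q)                         # humidity change along the trajectory
uptake_steps = np.where(dq > DQ_THRESHOLD)[0] + 1

for i in uptake_steps:
    print(f"moisture uptake of {dq[i - 1]:.4f} kg/kg near "
          f"({lat[i]:.1f}, {lon[i]:.1f})")
```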
{"title":"LATTIN: A Python-based tool for Lagrangian atmospheric moisture and heat tracking","authors":"Albenis Pérez-Alarcón , José C. Fernández-Alvarez , Raquel Nieto , Luis Gimeno","doi":"10.1016/j.simpa.2024.100638","DOIUrl":"10.1016/j.simpa.2024.100638","url":null,"abstract":"<div><p>LATTIN is a Python-based tool for Lagrangian atmospheric moisture and heat tracking. It can read input data from the Lagrangian FLEXPART and FLEXPART-WRF models. Features include parallel reading of atmospheric parcel trajectories and user custom threshold criteria. It complements and improves existing tools by including several tracking approaches and also by its non-dependence on the horizontal resolution of the input or output grid. LATTIN provides a compact tool for Lagrangian atmospheric moisture and heat tracking, which will support a wide range of research to understand future changes in the hydrological cycle and extreme temperature events.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100638"},"PeriodicalIF":2.1,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000265/pdfft?md5=ae93a9af43608c43d7657b057f462988&pid=1-s2.0-S2665963824000265-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140404766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FinTDA: Python package for estimating market change through persistent homology diagrams
Pub Date: 2024-03-28 | DOI: 10.1016/j.simpa.2024.100637 | Software Impacts, Vol. 20, Article 100637
Hugo Gobato Souto, Ismail Baris, Storm Koert Heuvel, Amir Moradi
This paper presents user-friendly Persistent Homology (PH) code for modeling financial market structures and changes. By leveraging Topological Data Analysis (TDA), the code offers an effective approach for analyzing high-dimensional stock data, enabling the identification of persistent topological features indicative of market changes. The code's potential applications in financial stability prediction, investment strategy development, and educational advancement are discussed. This contribution aims to facilitate the adoption of PH techniques in finance, promising significant implications for academic research and practical market analysis.
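FinTDA's own API is not shown here; the sketch below illustrates the general persistent-homology workflow the abstract refers to, assuming the third-party ripser and persim packages are installed: sliding windows of returns are turned into point clouds, H1 persistence diagrams are computed, and the Wasserstein distance between consecutive diagrams serves as a market-change indicator. The window sizes and synthetic data are assumptions.

```python
import numpy as np
from ripser import ripser           # persistent homology (third-party package)
from persim import wasserstein      # distance between persistence diagrams

rng = np.random.default_rng(2)

# Hypothetical daily log-returns of 4 assets, with a volatility jump halfway through.
returns = rng.normal(0, 0.01, (500, 4))
returns[250:] *= 3.0

WINDOW, STEP = 60, 20
diagrams = []
for start in range(0, len(returns) - WINDOW + 1, STEP):
    cloud = returns[start:start + WINDOW]                 # point cloud of one window
    diagrams.append(ripser(cloud, maxdim=1)["dgms"][1])   # H1 persistence diagram

# Market-change signal: distance between consecutive persistence diagrams.
for i in range(1, len(diagrams)):
    if len(diagrams[i - 1]) == 0 or len(diagrams[i]) == 0:
        continue                                          # skip empty diagrams
    d = wasserstein(diagrams[i - 1], diagrams[i])
    print(f"window {i}: Wasserstein distance to previous diagram = {d:.4f}")
```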
{"title":"FinTDA: Python package for estimating market change through persistent homology diagrams","authors":"Hugo Gobato Souto , Ismail Baris , Storm Koert Heuvel , Amir Moradi","doi":"10.1016/j.simpa.2024.100637","DOIUrl":"https://doi.org/10.1016/j.simpa.2024.100637","url":null,"abstract":"<div><p>This paper presents a user-friendly version of Persistent Homology (PH) graph code to model financial market structures and changes. By leveraging Topological Data Analysis (TDA), the code offers an effective approach for analyzing high-dimensional stock data, enabling the identification of persistent topological features indicative of market changes. The code’s potential applications in financial stability prediction, investment strategy development, and educational advancement are discussed. This contribution aims to facilitate the adoption of PH techniques in finance, promising significant implications for academic research and practical market analysis.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100637"},"PeriodicalIF":2.1,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000253/pdfft?md5=b5e5b1e3f98db2a5510af1445206fc0c&pid=1-s2.0-S2665963824000253-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140346819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wasserstein distance loss function for financial time series deep learning
Pub Date: 2024-03-27 | DOI: 10.1016/j.simpa.2024.100639 | Software Impacts, Vol. 20, Article 100639
Hugo Gobato Souto, Amir Moradi
This paper presents user-friendly code implementing a loss function for neural network time series models that exploits the topological structure of financial data. By leveraging the recently discovered topological features present in financial time series data, the code offers a more effective approach for building forecasting models for such data, because it allows neural network models to learn not only temporal patterns but also topological patterns. This paper aims to facilitate the adoption of the loss function proposed by Souto and Moradi (2024a) for financial time series by practitioners and researchers.
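The exact loss of Souto and Moradi (2024a) is not reproduced here; the PyTorch sketch below shows one simple, differentiable surrogate in the same spirit as the title's Wasserstein distance: for one-dimensional samples of equal size, the empirical 1-Wasserstein distance between predictions and targets equals the mean absolute difference of their sorted values. Blending it with an MSE term via the weight `alpha` is an assumption for illustration.

```python
import torch

def wasserstein_1d(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Empirical 1-Wasserstein distance between two equal-sized 1-D samples.

    For 1-D distributions with equal sample sizes this reduces to the mean
    absolute difference between the sorted samples, and it stays differentiable.
    """
    pred_sorted, _ = torch.sort(pred.flatten())
    target_sorted, _ = torch.sort(target.flatten())
    return (pred_sorted - target_sorted).abs().mean()

def combined_loss(pred, target, alpha=0.5):
    """Hypothetical blend of a pointwise MSE term and the distributional term."""
    mse = torch.mean((pred - target) ** 2)
    return alpha * mse + (1.0 - alpha) * wasserstein_1d(pred, target)

if __name__ == "__main__":
    torch.manual_seed(0)
    target = torch.randn(64)                       # e.g. a batch of realised returns
    pred = torch.randn(64, requires_grad=True)     # model outputs (stand-in)
    loss = combined_loss(pred, target)
    loss.backward()                                # gradients flow through torch.sort
    print(float(loss), pred.grad.shape)
```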
{"title":"Wasserstein distance loss function for financial time series deep learning","authors":"Hugo Gobato Souto, Amir Moradi","doi":"10.1016/j.simpa.2024.100639","DOIUrl":"10.1016/j.simpa.2024.100639","url":null,"abstract":"<div><p>This paper presents user-friendly code for the implementation of a loss function for neural network time series models that exploits the topological structures of financial data. By leveraging the recently-discovered presence of topological features present in financial time series data, the code offers a more effective approach for creating forecasting models for such data given the fact that it allows neural network models to not only learn temporal patterns of the data, but also topological patterns. This paper aims to facilitate the adoption of the loss function proposed by Souto and Moradi (2024a) in financial time series by practitioners and researchers.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"20 ","pages":"Article 100639"},"PeriodicalIF":2.1,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000277/pdfft?md5=eb86aef4201b8909e13d426764a1aa6a&pid=1-s2.0-S2665963824000277-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140405601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}