Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems
Konstantinos I. Roumeliotis, Nikolaos D. Tselikas, Dimitrios K. Nasiopoulos
Software, 2024-02-29. DOI: 10.3390/software3010004
This paper presents a pioneering methodology for refining product recommender systems, introducing a synergistic integration of unsupervised models—K-means clustering, content-based filtering (CBF), and hierarchical clustering—with the cutting-edge GPT-4 large language model (LLM). Its innovation lies in using GPT-4 for model evaluation, harnessing its advanced natural language understanding to enhance the precision and relevance of product recommendations. A Flask-based API simplifies implementation for e-commerce owners, allowing seamless training and evaluation of the models using CSV-formatted product data. The approach is distinctive in equipping e-commerce businesses with sophisticated unsupervised recommender algorithms, while the GPT model refines the semantic context of product features, resulting in a more personalized and effective product recommendation system. The experimental results underscore the superiority of this integrated framework, marking a significant advancement in the field of recommender systems and providing businesses with an efficient, scalable solution for optimizing their product recommendations.
Deep-SDM: A Unified Computational Framework for Sequential Data Modeling Using Deep Learning Models
Nawa Raj Pokhrel, K. Dahal, R. Rimal, H. Bhandari, Binod Rimal
Software, 2024-02-28. DOI: 10.3390/software3010003
Deep-SDM is a unified layer framework built on TensorFlow/Keras and written in Python 3.12. Its design and development strategy follows modular engineering principles, with transparency, reproducibility, and recombinability as the primary design criteria. The platform can extract valuable insights from numerical and text data and use them to predict future values by implementing long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN) models. Its end-to-end machine learning pipeline involves a sequence of tasks, including data exploration, input preparation, model construction, hyperparameter tuning, performance evaluation, visualization of results, and statistical analysis. The complete process, from data import to model selection, is systematic, carefully organized, and encapsulated into a unified whole. The subroutines work together to provide a user-friendly, easy-to-use pipeline. To validate the framework's reproducibility and robustness, we used Deep-SDM to predict the Nepal Stock Exchange (NEPSE) index and observed impressive results.
Automating Structured Query Language Injection and Cross-Site Scripting Vulnerability Remediation in Code
Kedar Sambhus, Yi Liu
Software, 2024-01-12. DOI: 10.3390/software3010002
Internet-based distributed systems dominate contemporary software applications. To enable these applications to operate securely, software developers must mitigate the threats posed by malicious actors; for instance, they must identify vulnerabilities in the software and eliminate them. Doing so manually, however, is a costly and time-consuming process. To reduce these costs, we designed and implemented Code Auto-Remediation for Enhanced Security (CARES), a web application that automatically identifies and remediates the two most common types of vulnerabilities in Java-based web applications: SQL injection (SQLi) and Cross-Site Scripting (XSS). As shown by a case study presented in this paper, CARES mitigates these vulnerabilities by refactoring the Java code using the Intercepting Filter design pattern. The flexible, microservice-based CARES design can be readily extended to support other injection vulnerabilities, remediation design patterns, and programming languages.
A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach
George Murazvu, Siôn Parkinson, Saad Khan, Na Liu, G. Allen
Software, 2024-01-02. DOI: 10.3390/software3010001
Automated software testing is a crucial yet resource-intensive aspect of software development. This resource burden limits widespread adoption, with expertise and cost being the primary obstacles. This paper focuses on automated testing driven by manually created test cases, acknowledging its advantages while critically analysing the implications across development stages that affect its adoption. It also analyses the differences in perception between nontechnical and technical roles, where nontechnical roles (e.g., management) predominantly strive to reduce costs and delivery time, whereas technical roles are often driven by quality and completeness. This study investigates the differences in attitudes toward automated testing (AtAT), specifically focusing on why it is not adopted. The article presents a survey conducted among software industry professionals spanning various roles to determine common trends and draw conclusions. A two-stage approach is presented, comprising a comprehensive descriptive analysis and the use of Principal Component Analysis. In total, 81 participants answered a series of 22 questions, and their responses were compared across job role types and experience levels. In summary, six key findings are presented, covering expertise, time, cost, tools and techniques, utilisation, organisation, and capacity.