Thanh-Binh Trinh, H. Nguyen, Dinh-Hai Nguyen, Van-Khanh To, Ninh-Thuan Truong
As a kind of software system, Event-Based Systems (EBS) respond to events rather than executing a predefined sequence of instructions. Events usually occur in real time, so it is crucial that they are processed in the correct order and within temporal constraints. The objective of this work is to propose an approach for checking whether the events of an EBS at runtime preserve the specification of temporal constraints. To ground the approach in a rigorous logical process, we have formalized the EBS model and, through this formalization, proved that the complexity of the checking algorithms is only polynomial. The approach has been implemented as a tool (VER) that checks EBS at runtime automatically. The results of the proposed method are illustrated by checking a real-world Event-Driven Architecture (EDA) application, an intelligent transportation system.
{"title":"Checking Temporal Constraints of Events in EBS at Runtime","authors":"Thanh-Binh Trinh, H. Nguyen, Dinh-Hai Nguyen, Van-Khanh To, Ninh-Thuan Truong","doi":"10.2478/cait-2024-0005","DOIUrl":"https://doi.org/10.2478/cait-2024-0005","url":null,"abstract":"As a kind of software system, Event-Based Systems (EBS) respond to events rather than executing a predefined sequence of instructions. Events usually occur in real time, so it is crucial that they are processed in the correct order and within temporal constraints. The objective of this work is to propose an approach for checking whether the events of an EBS at runtime preserve the specification of temporal constraints. To ground the approach in a rigorous logical process, we have formalized the EBS model and, through this formalization, proved that the complexity of the checking algorithms is only polynomial. The approach has been implemented as a tool (VER) that checks EBS at runtime automatically. The results of the proposed method are illustrated by checking a real-world Event-Driven Architecture (EDA) application, an intelligent transportation system.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140281015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
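The kind of runtime check this abstract describes can be sketched in a few lines: record a timestamped event trace and verify ordering and deadline constraints with a single linear pass per constraint, which is consistent with the polynomial-complexity claim. The event names and constraint format below are hypothetical illustrations, not the paper's VER tool or its formal EBS model:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    timestamp: float  # seconds since system start

def check_constraints(trace, order_pairs, deadlines):
    """Check a timestamp-ordered trace against ordering and deadline constraints.

    order_pairs: [(a, b)] - the first occurrence of a must precede the first b.
    deadlines: {(a, b): d} - each b following an a must arrive within d seconds.
    Each constraint is checked in one pass over the trace (linear time).
    """
    violations = []
    first = {}
    for e in trace:
        first.setdefault(e.name, e.timestamp)
    for a, b in order_pairs:
        if a in first and b in first and first[a] > first[b]:
            violations.append(f"order: {a} must precede {b}")
    for (a, b), limit in deadlines.items():
        t_a = None
        for e in trace:
            if e.name == a:
                t_a = e.timestamp
            elif e.name == b and t_a is not None:
                if e.timestamp - t_a > limit:
                    violations.append(
                        f"deadline: {a}->{b} took {e.timestamp - t_a:.1f}s "
                        f"(limit {limit}s)")
                t_a = None
    return violations

trace = [Event("sensor_read", 0.0), Event("process", 0.2), Event("actuate", 1.5)]
violations = check_constraints(trace,
                               order_pairs=[("sensor_read", "process")],
                               deadlines={("process", "actuate"): 1.0})
print(violations)  # the 1.3 s process->actuate gap exceeds the 1.0 s limit
```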
Facial Expression Recognition (FER) is a fundamental component of human communication with numerous potential applications. Convolutional neural networks, particularly those employing advanced architectures like Densely connected Networks (DenseNets), have demonstrated remarkable success in FER. Additionally, attention mechanisms have been harnessed to enhance feature extraction by focusing on critical image regions. This can induce more efficient models for image classification. This study introduces an efficient DenseNet model that utilizes a fusion of channel and spatial attention for FER, which capitalizes on the respective strengths to enhance feature extraction while also reducing model complexity in terms of parameters. The model is evaluated across five popular datasets: JAFFE, CK+, OuluCASIA, KDEF, and RAF-DB. The results indicate an accuracy of at least 99.94% for four lab-controlled datasets, which surpasses the accuracy of all other compared methods. Furthermore, the model demonstrates an accuracy of 83.18% with training from scratch on the real-world RAF-DB dataset.
{"title":"Efficient DenseNet Model with Fusion of Channel and Spatial Attention for Facial Expression Recognition","authors":"Dương Thăng Long","doi":"10.2478/cait-2024-0010","DOIUrl":"https://doi.org/10.2478/cait-2024-0010","url":null,"abstract":"\u0000 Facial Expression Recognition (FER) is a fundamental component of human communication with numerous potential applications. Convolutional neural networks, particularly those employing advanced architectures like Densely connected Networks (DenseNets), have demonstrated remarkable success in FER. Additionally, attention mechanisms have been harnessed to enhance feature extraction by focusing on critical image regions. This can induce more efficient models for image classification. This study introduces an efficient DenseNet model that utilizes a fusion of channel and spatial attention for FER, which capitalizes on the respective strengths to enhance feature extraction while also reducing model complexity in terms of parameters. The model is evaluated across five popular datasets: JAFFE, CK+, OuluCASIA, KDEF, and RAF-DB. The results indicate an accuracy of at least 99.94% for four lab-controlled datasets, which surpasses the accuracy of all other compared methods. Furthermore, the model demonstrates an accuracy of 83.18% with training from scratch on the real-world RAF-DB dataset.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140273357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
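The channel/spatial attention fusion at the heart of this model can be illustrated with a small numpy sketch: channel attention squeezes the feature map to one weight per channel via global average pooling and a tiny MLP, and spatial attention pools across channels to gate each pixel. The weights below are random placeholders; the paper's actual DenseNet layers, sizes, and trained parameters are not reproduced here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: (C, H, W) feature map. Squeeze via global average pooling,
    # excite via a two-layer MLP, producing one gate per channel in (0, 1).
    s = x.mean(axis=(1, 2))                      # (C,)
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))    # (C,)
    return x * a[:, None, None]

def spatial_attention(x):
    # Pool across channels (mean and max), mix them, and gate each pixel.
    m = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    a = sigmoid(m.mean(axis=0))                    # (H, W); stand-in for a conv
    return x * a[None, :, :]

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C)) * 0.1   # placeholder MLP weights
w2 = rng.standard_normal((C, C // 2)) * 0.1
y = spatial_attention(channel_attention(x, w1, w2))
print(y.shape)  # (8, 4, 4): attention rescales features, shape is unchanged
```

Because both gates lie in (0, 1), the fused output only rescales each feature downward, focusing the network on the channels and regions it deems informative.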
The Internet of Things (IoT) will soon penetrate every aspect of human life. Several threats and vulnerabilities are present due to the different devices and protocols used in an IoT system. Conventional cryptographic primitives or algorithms cannot run efficiently and are unsuitable for resource-constrained devices in IoT. Hence, a recently developed area of cryptography, known as lightweight cryptography, has been introduced, and over the years, numerous lightweight algorithms have been suggested. This paper gives a comprehensive overview of the lightweight cryptography field and considers various popular lightweight cryptographic algorithms proposed and evaluated over the past years for analysis. Different taxonomies of the algorithms and other associated concepts were also provided, which helps new researchers gain a quick overview of the field. Finally, a set of 11 selected ultra-lightweight algorithms are analyzed based on the software implementations, and their evaluation is carried out using different metrics.
{"title":"A Survey on Lightweight Cryptographic Algorithms in IoT","authors":"Suryateja Satya Pericherla, K. Venkata Rao","doi":"10.2478/cait-2024-0002","DOIUrl":"https://doi.org/10.2478/cait-2024-0002","url":null,"abstract":"\u0000 The Internet of Things (IoT) will soon penetrate every aspect of human life. Several threats and vulnerabilities are present due to the different devices and protocols used in an IoT system. Conventional cryptographic primitives or algorithms cannot run efficiently and are unsuitable for resource-constrained devices in IoT. Hence, a recently developed area of cryptography, known as lightweight cryptography, has been introduced, and over the years, numerous lightweight algorithms have been suggested. This paper gives a comprehensive overview of the lightweight cryptography field and considers various popular lightweight cryptographic algorithms proposed and evaluated over the past years for analysis. Different taxonomies of the algorithms and other associated concepts were also provided, which helps new researchers gain a quick overview of the field. Finally, a set of 11 selected ultra-lightweight algorithms are analyzed based on the software implementations, and their evaluation is carried out using different metrics.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140280396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
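As a flavor of what the surveyed software implementations look like, here is XTEA, a classic compact block cipher frequently used as a lightweight baseline; it is offered only as an illustration and is not necessarily among the 11 ultra-lightweight algorithms the paper selects:

```python
MASK = 0xFFFFFFFF          # arithmetic is modulo 2**32
DELTA = 0x9E3779B9         # key-schedule constant derived from the golden ratio

def xtea_encrypt(block, key, rounds=32):
    """Encrypt a 64-bit block (two 32-bit words) with a 128-bit key (four words)."""
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, rounds=32):
    """Invert xtea_encrypt by running the rounds in reverse."""
    v0, v1 = block
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

key = (0x00010203, 0x04050607, 0x08090A0B, 0x0C0D0E0F)  # sample key material
pt = (0x01234567, 0x89ABCDEF)
ct = xtea_encrypt(pt, key)
assert xtea_decrypt(ct, key) == pt  # round trip recovers the plaintext
```

The entire cipher is shifts, XORs, and modular additions over two registers, which is precisely the property that makes such designs attractive for resource-constrained IoT devices.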
Fake social media profiles are responsible for various cyber-attacks, spreading fake news, identity theft, business and payment fraud, abuse, and more. This paper aims to explore the potential of Machine Learning in detecting fake social media profiles by employing various Machine Learning algorithms, including the Dummy Classifier, Support Vector Classifier (SVC) and its kernel variants, Random Forest Classifier, Random Forest Regressor, Decision Tree Classifier, Decision Tree Regressor, MultiLayer Perceptron (MLP) Classifier, MultiLayer Perceptron (MLP) Regressor, Naïve Bayes classifier, and Logistic Regression. For a comprehensive evaluation of the performance and accuracy of different models in detecting fake social media profiles, it is essential to consider confusion matrices, sampling techniques, and various metric calculations. Additionally, incorporating extended computations such as root mean squared error, mean absolute error, mean squared error, and cross-validation accuracy can further strengthen the overall assessment of the models.
{"title":"Leveraging Machine Learning for Fraudulent Social Media Profile Detection","authors":"Soorya Ramdas, Neenu N. T. Agnes","doi":"10.2478/cait-2024-0007","DOIUrl":"https://doi.org/10.2478/cait-2024-0007","url":null,"abstract":"Fake social media profiles are responsible for various cyber-attacks, spreading fake news, identity theft, business and payment fraud, abuse, and more. This paper aims to explore the potential of Machine Learning in detecting fake social media profiles by employing various Machine Learning algorithms, including the Dummy Classifier, Support Vector Classifier (SVC) and its kernel variants, Random Forest Classifier, Random Forest Regressor, Decision Tree Classifier, Decision Tree Regressor, MultiLayer Perceptron (MLP) Classifier, MultiLayer Perceptron (MLP) Regressor, Naïve Bayes classifier, and Logistic Regression. For a comprehensive evaluation of the performance and accuracy of different models in detecting fake social media profiles, it is essential to consider confusion matrices, sampling techniques, and various metric calculations. Additionally, incorporating extended computations such as root mean squared error, mean absolute error, mean squared error, and cross-validation accuracy can further strengthen the overall assessment of the models.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140279658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
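The evaluation machinery this abstract lists — confusion matrices plus error metrics — is straightforward to compute directly; the sketch below uses toy labels (1 = fake profile, 0 = genuine), not real profile data:

```python
import math

def confusion_matrix(y_true, y_pred):
    """Binary confusion counts (tp, fp, fn, tn); label 1 means a fake profile."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def accuracy(y_true, y_pred):
    tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
    return (tp + tn) / len(y_true)

def mse(y_true, y_pred):
    # Mean squared error; on 0/1 labels this equals the misclassification rate.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(mse(y_true, y_pred))

# Toy ground truth and predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))  # (3, 1, 1, 3)
print(accuracy(y_true, y_pred))          # 0.75
```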
In today’s world, Electronic Health Records (EHR) are highly segregated and available only within the organization with which the patient is associated. If a patient has to visit another hospital, there is no secure way for hospitals to communicate and share medical records. Hence, people are always asked to redo tests that have already been done in other hospitals, which leads to monetary, time, and resource loss. Even if organizations are ready to share data, there are no secure methods for sharing without disturbing data privacy, integrity, and confidentiality. When health data are stored or transferred via unsecured means, there are always possibilities for adversaries to initiate an attack and modify them. To overcome these hurdles and secure the storage and sharing of health records, blockchain, a highly disruptive technology, can be integrated with the healthcare system for EHR management. This paper surveys recent works on distributed, decentralized systems for EHR storage in healthcare organizations.
{"title":"A Review on State-of-Art Blockchain Schemes for Electronic Health Records Management","authors":"Jayapriya Jayabalan, N. Jeyanthi","doi":"10.2478/cait-2024-0003","DOIUrl":"https://doi.org/10.2478/cait-2024-0003","url":null,"abstract":"\u0000 In today’s world, Electronic Health Records (EHR) are highly segregated and available only within the organization with which the patient is associated. If a patient has to visit another hospital there is no secure way for hospitals to communicate and share medical records. Hence, people are always asked to redo tests that have been done earlier in different hospitals. This leads to monetary, time, and resource loss. Even if the organizations are ready to share data, there are no secure methods for sharing without disturbing data privacy, integrity, and confidentiality. When health data are stored or transferred via unsecured means there are always possibilities for adversaries to initiate an attack and modify them. To overcome these hurdles and secure the storage and sharing of health records, blockchain, a very disruptive technology can be integrated with the healthcare system for EHR management. This paper surveys recent works on the distributed, decentralized systems for EHR storage in healthcare organizations.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140270818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
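The tamper-evidence property that blockchain brings to EHR storage reduces to hash chaining: each block commits to its record and to the hash of the previous block, so any later modification breaks verification. This is a minimal single-node sketch with toy records; consensus, networking, and access control — which the surveyed schemes address — are omitted:

```python
import hashlib
import json

def make_block(record, prev_hash):
    # Hash covers both the record and the previous block's hash,
    # so blocks are linked into a tamper-evident chain.
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"body": body, "hash": digest}

def verify_chain(chain):
    prev = "0" * 64  # genesis predecessor
    for block in chain:
        if block["body"]["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(block["body"], sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64
for rec in ({"patient": "P1", "test": "CBC"}, {"patient": "P1", "test": "X-ray"}):
    block = make_block(rec, prev)
    chain.append(block)
    prev = block["hash"]

assert verify_chain(chain)
chain[0]["body"]["record"]["test"] = "MRI"  # tampering with any record...
assert not verify_chain(chain)              # ...is detected on verification
```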
W. Alma’aitah, Addy Quraan, Fatima N. AL-Aswadi, Rami Suleiman Alkhawaldeh, M. Alazab, A. Awajan
Modern organizations are currently wrestling with strenuous challenges relating to the management of heterogeneous big data, which combines data from various sources and varies in type, format, and content. This heterogeneity makes the data difficult to analyze and integrate. This paper presents big data warehousing and federation as viable approaches for handling big data complexity and discusses their respective advantages and disadvantages as strategies for integrating, managing, and analyzing heterogeneous big data. Data integration is crucial for organizations that need to manipulate organizational data, and organizations have to weigh the benefits and drawbacks of both data integration approaches to identify the one that responds to their organizational needs and objectives. This paper also presents a detailed analysis of these two data integration approaches and identifies the challenges associated with selecting either approach. A thorough understanding and awareness of the merits and demerits of these two approaches is crucial for practitioners, researchers, and decision-makers seeking to select the approach that enables them to handle complex data, boost their decision-making process, and best align with their needs and expectations.
{"title":"Integration Approaches for Heterogeneous Big Data: A Survey","authors":"W. Alma’aitah, Addy Quraan, Fatima N. AL-Aswadi, Rami Suleiman Alkhawaldeh, M. Alazab, A. Awajan","doi":"10.2478/cait-2024-0001","DOIUrl":"https://doi.org/10.2478/cait-2024-0001","url":null,"abstract":"Modern organizations are currently wrestling with strenuous challenges relating to the management of heterogeneous big data, which combines data from various sources and varies in type, format, and content. This heterogeneity makes the data difficult to analyze and integrate. This paper presents big data warehousing and federation as viable approaches for handling big data complexity and discusses their respective advantages and disadvantages as strategies for integrating, managing, and analyzing heterogeneous big data. Data integration is crucial for organizations that need to manipulate organizational data, and organizations have to weigh the benefits and drawbacks of both data integration approaches to identify the one that responds to their organizational needs and objectives. This paper also presents a detailed analysis of these two data integration approaches and identifies the challenges associated with selecting either approach. A thorough understanding and awareness of the merits and demerits of these two approaches is crucial for practitioners, researchers, and decision-makers seeking to select the approach that enables them to handle complex data, boost their decision-making process, and best align with their needs and expectations.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140273302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Z. Milivojevic, B. Prlincevic, Milan Cekić, Dijana Kostić
People with Color Vision Deficiency (CVD), which arises from a deformation of the M cones in the eye, cannot detect the color green in an image (deutan anomaly). In the first part of the paper, the deutan anomaly is described. After that, the image recoloring algorithm, which enables people with deutan CVD to see a wider spectrum in images, is described. Then, the effect of the recoloring algorithm on images with an inserted watermark is analyzed. An experiment has been carried out in which the effect of the recoloring algorithm on the quality of the extracted watermark and the recolored image is studied. In addition, the robustness of the inserted watermark with respect to spatial transformations (rotation, scaling) and compression algorithms has been tested. By applying objective measures and visual inspection of the quality of the extracted watermark and the recolored image, the optimal insertion factor α is determined. All results are presented in the form of pictures, tables, and graphs.
{"title":"Degradation Recoloring Deutan CVD Image from Block SVD Watermark","authors":"Z. Milivojevic, B. Prlincevic, Milan Cekić, Dijana Kostić","doi":"10.2478/cait-2024-0008","DOIUrl":"https://doi.org/10.2478/cait-2024-0008","url":null,"abstract":"People with Color Vision Deficiency (CVD), which arises from a deformation of the M cones in the eye, cannot detect the color green in an image (deutan anomaly). In the first part of the paper, the deutan anomaly is described. After that, the image recoloring algorithm, which enables people with deutan CVD to see a wider spectrum in images, is described. Then, the effect of the recoloring algorithm on images with an inserted watermark is analyzed. An experiment has been carried out in which the effect of the recoloring algorithm on the quality of the extracted watermark and the recolored image is studied. In addition, the robustness of the inserted watermark with respect to spatial transformations (rotation, scaling) and compression algorithms has been tested. By applying objective measures and visual inspection of the quality of the extracted watermark and the recolored image, the optimal insertion factor α is determined. All results are presented in the form of pictures, tables, and graphs.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140271669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this research, we propose two new image steganography techniques focusing on increasing image-embedding capacity. The methods encrypt and hide secret information in the edge areas of an image. We utilized two hybrid methods for edge detection: the first combines the Laplacian of Gaussian (LoG) with a wavelet transform algorithm, and the second mixes LoG and Canny. The combination was performed using the addWeighted function. The text message is encrypted using the GIFT cipher for additional security at low computational cost. For the evaluation of the proposed method's effectiveness, various metrics were used, such as embedding capacity, PSNR, MSE, and SSIM. The obtained results indicate that the proposed method has a greater embedding capacity in comparison with other methods, while still maintaining high levels of imperceptibility in the cover image.
{"title":"Hybrid Edge Detection Methods in Image Steganography for High Embedding Capacity","authors":"Marwah Habiban, Fatima R. Hamade, Nadia A. Mohsin","doi":"10.2478/cait-2024-0009","DOIUrl":"https://doi.org/10.2478/cait-2024-0009","url":null,"abstract":"In this research, we propose two new image steganography techniques focusing on increasing image-embedding capacity. The methods encrypt and hide secret information in the edge areas of an image. We utilized two hybrid methods for edge detection: the first combines the Laplacian of Gaussian (LoG) with a wavelet transform algorithm, and the second mixes LoG and Canny. The combination was performed using the addWeighted function. The text message is encrypted using the GIFT cipher for additional security at low computational cost. For the evaluation of the proposed method's effectiveness, various metrics were used, such as embedding capacity, PSNR, MSE, and SSIM. The obtained results indicate that the proposed method has a greater embedding capacity in comparison with other methods, while still maintaining high levels of imperceptibility in the cover image.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140273808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
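To make the edge-based embedding pipeline concrete, here is a numpy sketch: two crude gradient edge maps stand in for the paper's LoG/wavelet and LoG/Canny detectors, an addWeighted-style blend (dst = α·src1 + β·src2 + γ, as in OpenCV) combines them, and message bits go into the LSBs of edge pixels. Edges are computed on LSB-masked pixels so the map is identical before and after embedding; the GIFT encryption step is omitted:

```python
import numpy as np

def add_weighted(a, alpha, b, beta, gamma=0.0):
    # Saturated weighted sum of two images, mirroring cv2.addWeighted.
    return np.clip(alpha * a.astype(float) + beta * b.astype(float) + gamma,
                   0, 255).astype(np.uint8)

def gradient_edges(img, axis, threshold=50):
    # Crude gradient-magnitude edge map (0/255), a stand-in for LoG/Canny.
    g = np.abs(img.astype(int) - np.roll(img.astype(int), 1, axis=axis))
    return ((g > threshold) * 255).astype(np.uint8)

def edge_pixels(img):
    # Mask out LSBs first so the edge map is stable under LSB embedding.
    base = img & 0xFE
    combined = add_weighted(gradient_edges(base, 0), 0.5,
                            gradient_edges(base, 1), 0.5)
    return np.argwhere(combined > 100)  # row-major, deterministic order

def embed(img, bits):
    out = img.copy()
    for (r, c), bit in zip(edge_pixels(img), bits):
        out[r, c] = (out[r, c] & 0xFE) | bit   # overwrite the LSB
    return out

def extract(img, n):
    return [int(img[r, c] & 1) for r, c in edge_pixels(img)[:n]]

cover = np.zeros((16, 16), dtype=np.uint8)
cover[4:12, 4:12] = 200                      # sharp block -> strong edges
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits)
assert extract(stego, len(bits)) == bits     # lossless round trip
```

Hiding in edge regions exploits the fact that the human eye tolerates changes near strong gradients far better than in smooth areas, which is why edge-adaptive schemes can raise capacity without visible distortion.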
Dena Kadhim Muhsen, Ahmed T. Sadiq, Firas Abdulrazzaq Raheem
With the advancement of the robotics world, many path-planning algorithms have been proposed. One of the important algorithms is the Rapidly Exploring Random Tree (RRT), which has the drawback of not guaranteeing an optimal path. This paper addresses this problem by proposing a Memorized RRT Optimization Algorithm (MRRTO Algorithm) that uses memory as an optimization step. The algorithm grows one path from the start point and another from the target point, storing only the last visited new node. The method for computing the nearest node depends on the position: when a new node is added, the RRT function checks whether there is another node that is closer to the new node than to the goal point. Simulation results in different environments show that MRRTO outperforms the original RRT algorithm, graph algorithms, and metaheuristic algorithms in terms of reduced time consumption, path length, and number of nodes used.
{"title":"Memorized Rapidly Exploring Random Tree Optimization (MRRTO): An Enhanced Algorithm for Robot Path Planning","authors":"Dena Kadhim Muhsen, Ahmed T. Sadiq, Firas Abdulrazzaq Raheem","doi":"10.2478/cait-2024-0011","DOIUrl":"https://doi.org/10.2478/cait-2024-0011","url":null,"abstract":"With the advancement of the robotics world, many path-planning algorithms have been proposed. One of the important algorithms is the Rapidly Exploring Random Tree (RRT), which has the drawback of not guaranteeing an optimal path. This paper addresses this problem by proposing a Memorized RRT Optimization Algorithm (MRRTO Algorithm) that uses memory as an optimization step. The algorithm grows one path from the start point and another from the target point, storing only the last visited new node. The method for computing the nearest node depends on the position: when a new node is added, the RRT function checks whether there is another node that is closer to the new node than to the goal point. Simulation results in different environments show that MRRTO outperforms the original RRT algorithm, graph algorithms, and metaheuristic algorithms in terms of reduced time consumption, path length, and number of nodes used.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140279160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
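For orientation, here is the baseline loop that MRRTO builds on — sample, find the nearest tree node, extend by a fixed step — as a minimal 2D RRT in plain Python, in an obstacle-free toy world. The memorization and bidirectional details of MRRTO are not reproduced, and the goal bias and step size are arbitrary choices:

```python
import math
import random

def nearest(nodes, q):
    # Index of the tree node closest to sample q (linear scan).
    return min(range(len(nodes)),
               key=lambda i: (nodes[i][0] - q[0]) ** 2 + (nodes[i][1] - q[1]) ** 2)

def rrt(start, goal, step=0.5, goal_radius=0.5, max_iter=5000, seed=0):
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        # Sample the goal occasionally (goal bias), else a random point.
        q = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        i = nearest(nodes, q)
        x, y = nodes[i]
        d = math.hypot(q[0] - x, q[1] - y)
        if d == 0:
            continue
        # Extend one fixed step from the nearest node toward the sample.
        new = (x + step * (q[0] - x) / d, y + step * (q[1] - y) / d)
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) <= goal_radius:
            path = [len(nodes) - 1]          # walk parents back to the root
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return [nodes[j] for j in reversed(path)]
    return None  # no path found within the iteration budget

path = rrt((0.0, 0.0), (9.0, 9.0))
assert path is not None and path[0] == (0.0, 0.0)
```

The `nearest` scan is the step the paper's memory optimization targets: remembering the last added node avoids redundant distance computations as the tree grows.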
Continuous Software Process Improvement (SPI) is essential for achieving and maintaining high-quality software products. Web-based software enterprises, comprising a substantial proportion of global businesses and forming a cornerstone of the world’s industrial economy, are actively pursuing SPI initiatives. While these companies recognize the critical role of process enhancement in achieving success, they face challenges in implementing SPI due to the distinctive characteristics of Web-based software projects. This study aims to identify, validate, and prioritize the sustainability success factors that positively influence SPI implementation efforts in Web-based software projects. Data have been meticulously gathered through a systematic literature review and quantitatively through a survey questionnaire. The findings of this research empower Web-based software enterprises to refine their management strategies for evaluating and bolstering SPI practices within the Web-based software projects domain.
{"title":"Success Factors for Conducting Software-Process Improvement in Web-Based Software Projects","authors":"Thamer Al-Rousan","doi":"10.2478/cait-2024-0004","DOIUrl":"https://doi.org/10.2478/cait-2024-0004","url":null,"abstract":"\u0000 Continuous Software Process Improvement (SPI) is essential for achieving and maintaining high-quality software products. Web-based software enterprises, comprising a substantial proportion of global businesses and forming a cornerstone of the world’s industrial economy, are actively pursuing SPI initiatives. While these companies recognize the critical role of process enhancement in achieving success, they face challenges in implementing SPI due to the distinctive characteristics of Web-based software projects. This study aims to identify, validate, and prioritize the sustainability success factors that positively influence SPI implementation efforts in Web-based software projects. Data have been meticulously gathered through a systematic literature review and quantitatively through a survey questionnaire. The findings of this research empower Web-based software enterprises to refine their management strategies for evaluating and bolstering SPI practices within the Web-based software projects domain.","PeriodicalId":45562,"journal":{"name":"Cybernetics and Information Technologies","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140280552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}