A Computational Approach for Evaluating Steady-State Probabilities and Virtual Waiting Time of a Multiprocessor Queuing System
V. Sahakyan, A. Vardanyan
Pub Date: 2024-01-26. DOI: 10.1134/s0361768823090098
Abstract
This paper explores the operation of a multiprocessor task servicing system. Tasks arrive at random intervals and are characterized by several stochastic parameters: the number of processors required for execution, the maximum allowable busy time for those processors, and the permissible waiting time in the queue. Task servicing follows a first-in, first-out (FIFO) discipline. The servicing process periodically selects the first task in the queue and checks whether it can be executed immediately; if so, it is dispatched for processing. This continues iteratively until a task is found whose parameters prevent immediate servicing. Tasks in the queue have a limited window within which they can be serviced; otherwise, they leave the system without service.
The paper focuses on systems in which the random variables governing task arrivals, service times, and waiting restrictions are exponentially distributed. A system of equations describing the steady-state behavior is derived, from which the probabilities of the system's various states can be computed. The paper also derives the probability distribution of the virtual waiting time for a task arriving at any given moment.
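The service discipline described above can be sketched as a small discrete-event simulation. This is an illustrative sketch only: the rates, processor counts, and uniform processor demand below are invented, not taken from the paper, and patience is checked only at event epochs.

```python
import heapq
import random

def simulate(n_procs=4, n_tasks=2000, lam=1.0, mu=0.5, nu=0.2, seed=1):
    """FIFO multiprocessor queue with impatient tasks (illustrative parameters).

    lam: arrival rate, mu: service rate, nu: impatience rate (all exponential,
    matching the paper's distributional assumptions).  Each task needs a random
    number of processors in 1..n_procs.  The head of the queue blocks every
    task behind it (strict FIFO)."""
    rng = random.Random(seed)
    free, served, lost, seq = n_procs, 0, 0, 0
    queue = []                                  # waiting: (deadline, needed, service_time)
    events = [(rng.expovariate(lam), seq, "arrival", 0)]  # (time, tiebreak, kind, procs)
    arrivals_left = n_tasks
    while events:
        t, _, kind, k = heapq.heappop(events)
        if kind == "arrival":
            queue.append((t + rng.expovariate(nu),      # deadline (patience runs out)
                          rng.randint(1, n_procs),      # processors needed
                          rng.expovariate(mu)))         # service duration
            arrivals_left -= 1
            if arrivals_left > 0:
                seq += 1
                heapq.heappush(events, (t + rng.expovariate(lam), seq, "arrival", 0))
        else:                                           # a service completed
            free += k
        lost += sum(1 for d, _, _ in queue if d <= t)   # drop expired tasks
        queue = [task for task in queue if task[0] > t]
        while queue and queue[0][1] <= free:            # dispatch head while it fits
            _, needed, svc = queue.pop(0)
            free -= needed
            served += 1
            seq += 1
            heapq.heappush(events, (t + svc, seq, "finish", needed))
    return served, lost

served, lost = simulate()
print(served, lost)
```

Every arriving task is eventually either served or lost, so the two counters partition the arrivals; estimating loss probabilities this way is a useful cross-check on the analytic steady-state equations.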
Self-Organizing Multi-User UAV Swarm Simulation Platform
V. Poghosyan, S. Poghosyan, A. Lazyan, A. Atashyan, D. Hayrapetyan, Y. Alaverdyan, H. Astsatryan
Pub Date: 2024-01-26. DOI: 10.1134/s0361768823090086
Abstract
Unmanned aerial vehicle (UAV) swarms offer a cost-effective, time-efficient solution for data collection and analysis across various applications. This study presents a self-organizing UAV swarm simulation platform, powered by collective artificial intelligence, designed to facilitate terrain monitoring and optimize task performance using a fleet of UAVs. The cloud-based multi-user platform provides interactive features for user collaboration and real-time video viewing for collective exploration of dynamic terrain imagery, allowing users to issue requests directly from the Qt interface. The UAV map configurator supports the creation and modification of swarm navigation maps, optimizing swarm behavior and performance. A parameter gossip system handles communication and coordination among swarm members, while the Qt service layer ensures secure data transfer to cloud servers. The integrated data drives the formation of swarm and target tasks, determining key parameters such as the number of swarm participants, their initial relative coordinates, and their statuses (imager and/or strike). The server implements these functions with two core algorithms: an exploration road graph based on the rotor-router model and an information exchange graph based on the gossip/broadcast model. These algorithms work together within the server environment, enabling efficient task planning and coordination among the swarm. The platform also transmits the formed target tasks to the memory of individual swarm participants, enhancing their decision-making capabilities and overall swarm performance.
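The rotor-router model the server's road graph relies on is simple to illustrate: each node cycles deterministically through its neighbor list, the walker follows the current rotor, and the rotor advances. On any finite connected graph this walk provably visits every node, which is why it suits exhaustive terrain exploration. The grid graph and sizes below are illustrative, not the platform's actual map.

```python
def rotor_router_coverage(neighbors, start=0, max_steps=10000):
    """Rotor-router (Propp machine) walk until all nodes are visited.

    neighbors: dict mapping node -> list of adjacent nodes.
    Each node keeps a rotor index into its neighbor list; the walker moves
    where the rotor points, then the rotor advances cyclically."""
    rotor = {v: 0 for v in neighbors}
    visited = {start}
    pos = start
    for _ in range(max_steps):
        nbrs = neighbors[pos]
        nxt = nbrs[rotor[pos]]
        rotor[pos] = (rotor[pos] + 1) % len(nbrs)
        pos = nxt
        visited.add(pos)
        if len(visited) == len(neighbors):
            break                      # full coverage reached
    return visited

# 4x4 grid "terrain" as adjacency lists
W = H = 4
def node(x, y):
    return y * W + x
grid = {node(x, y): [node(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < W and 0 <= y + dy < H]
        for x in range(W) for y in range(H)}
covered = rotor_router_coverage(grid)
print(len(covered))  # all 16 cells
```

Unlike a random walk, the rotor walk is fully deterministic and its cover time is bounded, a useful property when several UAVs must guarantee complete terrain sweeps.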
Hybrid Binarization Method for Historical Handwritten Documents
D. G. Asatryan, M. E. Haroutunian, G. S. Sazhumyan, A. V. Kupriyanov, R. A. Paringer, D. V. Kirsh
Pub Date: 2024-01-26. DOI: 10.1134/s0361768823090037
Abstract
Binarization of historical documents is a complex task that is being studied intensively by researchers worldwide. Many approaches, procedures, and binarization algorithms have been proposed, but no method yet works equally well in all cases. The literature offers various criteria for assessing the quality of a binarization result; for ancient handwritten texts, the usual criterion is the readability of the text, judged visually or by technical means. One common way to improve binarization quality is to pre-process the original image using filtering, morphological analysis, spectral analysis, and similar methods. This article proposes a hybrid binarization method that combines an arbitrary global or adaptive binarization algorithm with a special segmentation procedure for selecting segments of certain sizes. The procedure makes it possible to identify objects of particular sizes in an image, in particular the artifacts present in a binarized image. The work experimentally explores how applying this procedure can improve the quality of a binary image.
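A minimal sketch of the hybrid idea: binarize with a global threshold, then remove connected foreground components below a size cutoff. The fixed threshold, 4-connectivity, and toy image below are assumptions for illustration; the paper's segmentation procedure is more general.

```python
from collections import deque

def binarize_global(img, thresh=128):
    """Global binarization: foreground (ink) = pixel value below threshold."""
    return [[1 if p < thresh else 0 for p in row] for row in img]

def filter_small_segments(bw, min_size):
    """Keep only 4-connected foreground components of at least min_size pixels,
    discarding smaller ones as artifacts (speckle)."""
    h, w = len(bw), len(bw[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if bw[y][x] and not seen[y][x]:
                comp, q = [], deque([(y, x)])      # flood-fill one component
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and bw[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:          # small components are dropped
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

# a 6-pixel "stroke" plus a 1-pixel speckle artifact in the corner
img = [[200, 200, 200, 200, 200],
       [200,  50,  50,  50, 200],
       [200,  50,  50,  50, 200],
       [200, 200, 200, 200,  40]]
bw = binarize_global(img)
clean = filter_small_segments(bw, min_size=2)
print(sum(map(sum, clean)))  # 6: the stroke survives, the speckle is gone
```

Size-based filtering is deliberately algorithm-agnostic: it post-processes the output of whatever global or adaptive binarizer produced `bw`.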
Enhanced S2E for Analysis of Multi-Thread Software
F. V. Niskov, E. A. Kutovoy, Sh. F. Kurmangaleev
Pub Date: 2024-01-26. DOI: 10.1134/s0361768823090074
Abstract
Code analysis for defect detection is essential in the modern world, especially for complex multi-threaded applications. One tool suited to software of high complexity is the well-known S2E, which combines full-system emulation with symbolic execution. This paper presents two major enhancements to S2E: first, support for multiple virtual cores, which enables parallel speed-up; second, built on that support, a race checker plugin that detects data races in multi-threaded programs. The work addresses research questions such as scheduling in multi-core emulation and race detection under symbolic execution.
Requirements Validation in the Information System Software Development Lifecycle: A Software Quality in Use Evaluation
L. Canchari, P. Angeleri, A. Dávila
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080054
Abstract
The Peruvian government adopted the ISO/IEC 12207 standard and made its implementation mandatory in public entities to improve software product quality. In this context, software requirements validation tasks were introduced to improve the quality of the software product. This study explores and analyzes the relationship between improved software requirement quality and software product quality in use. The analysis is based on a quality-in-use design and on metrics from the ISO/IEC 25010 standard, measured on two software products. The results show that the validation activities introduced at the requirements stage are positively related to the quality in use of the analyzed products: in the software studied, improving the quality of the requirements contributed to improving quality in use. In this case, time efficiency in completing tasks increased by 45%, errors per task fell by 40%, the number of tasks with errors fell by 47%, the time cost of performing tasks fell by 29%, and unnecessary actions fell by 53%. In addition, overall satisfaction, user pleasure, information quality, and interface quality improved significantly.
Architecture for Groupware Oriented to Collaborative Medical Activities in the Rehabilitation of Strokes
Sofía Isabel Fernández Gregorio, Luis G. Montané-Jiménez, Carmen Mezura Godoy, Viviana Yarel Rosales-Morales
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080078
Abstract
When a person suffers a stroke, they require rehabilitation to recover from its consequences. Rehabilitation involves a multidisciplinary team of specialists providing care across areas such as neurology, nutrition, psychology, and physiotherapy. During rehabilitation, physicians interact with medical software and devices, and these interactions represent the medical activities that rehabilitation comprises. However, there is no clear picture of how specialists coordinate these activities, and no dedicated communication channels support multidisciplinary collaboration throughout rehabilitation. This paper presents a systematic review of the state of the art addressing this problem and proposes a collaborative software architecture that supports monitoring of medical activities through multimodal human-computer interactions. The architecture comprises three layers: (1) perceiving interactions and monitoring activities, (2) controlling multidisciplinary access and sharing information, and (3) analyzing and evaluating the execution of multidisciplinary activities. Evaluating how activities are carried out helps physicians make decisions about the execution of the treatment plan; to facilitate this evaluation, we propose an activity representation diagram. Finally, we developed a prototype with a user-centered design that perceives human-computer interactions, supported by the architecture.
Elements for Automatic Identification of Fallacies in Mexican Election Campaign Political Speeches
Kenia Nieto-Benitez, Noe Alejandro Castro-Sanchez, Hector Jimenez Salazar, Gemma Bel-Enguix, Dante Mújica Vargas, Juan Gabriel González Serna, Nimrod González Franco
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080170
Abstract
Political speeches frequently use fallacies to sway voters during electoral campaigns. This study presents an approach to building machine learning models that automatically identify a specific type of fallacy, the "appeal to emotion." The objective is to establish a set of elements that enable fallacy mining: in the existing literature, fallacies are typically identified manually, and there is no established framework for applying mining techniques. Our method uses features derived from an emotion lexicon to distinguish valid arguments from fallacies, employing Support Vector Machine and Multilayer Perceptron models. The Multilayer Perceptron achieved an F1-score of 0.60 in identifying fallacies. Based on our analysis, we recommend lexical dictionaries as an effective means of identifying "appeal to emotion" fallacies.
Symbolic Computation of an Arbitrary-Order Resonance Condition in a Hamiltonian System
A. B. Batkhin, Z. Kh. Khaidarov
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080030
Abstract
The formal stability of the equilibrium positions of a multiparametric Hamiltonian system in the generic case is traditionally studied using its normal form, under the condition that no small-order resonances are present. This paper proposes a method for the symbolic computation of the condition for the existence of a resonance of arbitrary order in a system with three degrees of freedom. It is shown that, for each resonant vector, this condition can be represented as a rational algebraic curve. Using computer algebra methods, a rational parametrization of this curve is obtained for the case of an arbitrary resonance. A model example of a two-parameter pendulum-type system is considered.
A Metrics Suite for Measuring Indirect Coupling Complexity
J. Navas-Su, A. Gonzalez-Torres, M. Hernandez-Vasquez, J. Solano-Cordero, F. Hernandez-Castro, A. Bener
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080157
Abstract
Software development can be a time-consuming and costly process requiring significant effort. Developers are often tasked with completing programming tasks or modifying existing code without increasing overall complexity, and they must understand the dependencies between program components before implementing any changes. As code evolves, however, it becomes increasingly challenging for project managers to detect indirect coupling links between components. These hidden links can complicate the system, cause inaccurate effort estimates, and compromise code quality. To address these challenges, this study provides a set of measures that leverage measurement theory and hidden links between software components to expand the scope, effectiveness, and utility of accepted software metrics. The research focuses on two primary topics: (1) how indirect coupling measurements can aid developers with maintenance tasks and (2) how indirect coupling metrics can quantify software complexity and size, leveraging weighted differences across techniques. The study presents a comprehensive set of measures designed to assist developers and project managers with project management and maintenance activities. By exploiting indirect coupling measurements, these measures can enhance the quality and efficiency of software development and maintenance processes.
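The notion of a hidden (indirect) coupling link can be made concrete with a reachability sketch: a component is indirectly coupled to everything it reaches through the dependency graph but does not depend on directly. The component graph below is invented for illustration; the paper's metrics build weighted measures on top of such links.

```python
from collections import deque

def indirect_coupling(deps):
    """Map each component to the components it depends on only transitively.

    deps: dict mapping component -> list of direct dependencies.
    For each source, BFS computes full reachability; subtracting the direct
    dependencies (and the source itself) leaves the hidden links."""
    hidden_links = {}
    for src in deps:
        reach, q = set(), deque(deps.get(src, ()))
        while q:
            v = q.popleft()
            if v not in reach:
                reach.add(v)
                q.extend(deps.get(v, ()))
        hidden_links[src] = reach - set(deps.get(src, ())) - {src}
    return hidden_links

# A -> B -> C -> D, and A also depends on C directly
deps = {"A": ["B", "C"], "B": ["C"], "C": ["D"], "D": []}
hidden = indirect_coupling(deps)
print(hidden)
```

Here a change to D can ripple back to A and B even though neither lists D as a dependency; counting and weighting such links is the starting point for indirect coupling metrics.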
CGWO: An Improved Grey Wolf Optimization Technique for Test Case Prioritization
Gayatri Nayak, Swadhin Kumar Barisal, Mitrabinda Ray
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080169
Abstract
The convergence rate is widely accepted as a performance measure for choosing among metaheuristic algorithms. We therefore propose a novel technique that improves the convergence rate of the existing Grey Wolf Optimization (GWO) algorithm. The proposed approach also prioritizes the test cases obtained by executing the input benchmark programs. This paper makes three technical contributions: first, we generate test cases for the input benchmark programs; second, we prioritize the test cases using an improved version of GWO (CGWO); third, we analyze the results and compare them with state-of-the-art metaheuristic techniques. The work is validated by running the proposed model on six benchmark programs. The results show that the prioritized order of test cases achieves a 48% better APFD score than the non-prioritized order. We also achieved a better convergence rate, requiring around 4000 fewer iterations than existing methods on the same platform.
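The APFD score used for evaluation follows directly from its standard definition: APFD = 1 − (ΣTFᵢ)/(n·m) + 1/(2n), where n is the number of tests, m the number of faults, and TFᵢ the 1-based position of the first test that reveals fault i. The toy fault matrix below is invented for illustration and assumes every fault is revealed by some test.

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test-case ordering.

    order: list of test names in execution order.
    fault_matrix: dict test name -> set of faults that test reveals.
    Assumes every fault is revealed by at least one test in `order`."""
    n = len(order)
    m = len(set().union(*(fault_matrix[t] for t in order)))
    first_reveal = {}                              # fault -> 1-based position TF_i
    for pos, t in enumerate(order, start=1):
        for fault in fault_matrix[t]:
            first_reveal.setdefault(fault, pos)
    return 1 - sum(first_reveal.values()) / (n * m) + 1 / (2 * n)

faults = {"t1": {1}, "t2": {1, 2, 3}, "t3": set(), "t4": {4}}
good = apfd(["t2", "t4", "t1", "t3"], faults)   # fault-revealing tests first
poor = apfd(["t3", "t1", "t2", "t4"], faults)   # fault-revealing tests late
print(good, poor)  # 0.8125 0.375
```

Running the strong order first more than doubles the score here, which is exactly the effect a prioritization technique such as CGWO is scored on.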