Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080169
Gayatri Nayak, Swadhin Kumar Barisal, Mitrabinda Ray
Abstract
The convergence rate is widely accepted as a performance measure for choosing a metaheuristic algorithm. We therefore propose a novel technique that improves the convergence rate of the existing Grey Wolf Optimization (GWO) algorithm. The proposed approach also prioritizes the test cases obtained by executing the input benchmark programs. This paper makes three technical contributions. First, we generate test cases for the input benchmark programs. Second, we prioritize the test cases using an improved version of the existing GWO algorithm (CGWO). Third, we analyze the obtained results and compare them with state-of-the-art metaheuristic techniques. The work is validated by running the proposed model on six benchmark programs. The results show that the prioritized order of test cases achieves a 48% better APFD score than the non-prioritized order. We also achieve a better convergence rate, requiring around 4000 fewer iterations than existing methods on the same platform.
Title: CGWO: An Improved Grey Wolf Optimization Technique for Test Case Prioritization
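APFD (Average Percentage of Faults Detected) is the standard metric behind the 48% figure above. As a minimal sketch (ours, not the authors' code), assuming a hypothetical fault matrix that maps each test to the faults it reveals:

```python
def apfd(test_order, fault_matrix):
    """Average Percentage of Faults Detected for a given test ordering.

    test_order: list of test ids in execution order.
    fault_matrix: dict mapping a test id to the set of faults it detects.
    """
    n = len(test_order)
    faults = set().union(*fault_matrix.values())
    m = len(faults)
    # TF_i: 1-based position of the first test that reveals fault i
    tf = {}
    for pos, test in enumerate(test_order, start=1):
        for fault in fault_matrix.get(test, ()):
            tf.setdefault(fault, pos)
    return 1 - sum(tf[f] for f in faults) / (n * m) + 1 / (2 * n)
```

Orderings that reveal faults earlier score closer to 1, which is what a prioritization technique such as CGWO optimizes for.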
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080224
Gabor Attila Tibor, Jozsef Katona
Abstract
Today, users can hardly imagine the creative and advanced methods available for disguising and hiding data. However, the free software tools available for such purposes are often outdated or rudimentary in functionality, and sometimes even contain vulnerabilities. The purpose of this article is to design and implement an easy-to-use and secure data hiding application that meets modern expectations and requirements and also reports the detectability level of the hidden data to the user. The study first explores and evaluates currently available free software against a defined set of criteria. We then describe in detail the development of a multi-platform steganographic application with a new function, focusing on the methods and algorithms used. After successful implementation, the finished application is evaluated and compared with the tested, freely available software against the same criteria.
Title: Development of Multi-Platform Steganographic Software Based on Random-LSB
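The abstract does not detail the Random-LSB scheme named in the title; as a rough sketch of the general idea only (hypothetical, not the authors' implementation), a shared seed drives a PRNG that selects which cover bytes carry message bits in their least significant bit:

```python
import random

def embed(cover, message_bits, seed):
    """Hide bits in the LSBs of pseudo-randomly chosen cover bytes."""
    stego = bytearray(cover)
    rng = random.Random(seed)  # the seed acts as the shared secret
    positions = rng.sample(range(len(stego)), len(message_bits))
    for pos, bit in zip(positions, message_bits):
        stego[pos] = (stego[pos] & 0xFE) | bit  # clear LSB, then set it
    return bytes(stego)

def extract(stego, n_bits, seed):
    """Recover bits by regenerating the same position sequence."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]
```

Randomizing the positions, rather than writing the LSBs sequentially, is what makes this variant harder for simple steganalysis to detect.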
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080133
S. E. Martínez García, C. Alberto Fernández-y-Fernández, E. G. Ramos Pérez
Abstract
The requirements phase is the core of software development; if it is not carried out correctly, it can cause the project to fail. To address this problem, analysts use requirements engineering (RE), which produces a list of quality requirements called the requirements specification (RS). The RS involves a requirements classification activity, which consists of identifying the class to which each requirement belongs, so analysts face the challenge of classifying requirements properly. This work focuses on improving the performance of non-functional requirements (NFR) classification with the help of a convolutional neural network. It also seeks to show the importance of preprocessing, the implementation of sampling strategies, and the use of pre-trained embedding matrices such as FastText, GloVe, and Word2vec. The results were obtained by evaluating the Recall, Precision, and F1 metrics, with an average improvement of up to 30% over related work. Finally, the model is evaluated with respect to the pre-trained matrices using an ANOVA analysis.
Title: Classification of Non-functional Requirements Using Convolutional Neural Networks
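For reference, the Recall, Precision, and F1 metrics used in the evaluation reduce to simple counts per class; a minimal per-class sketch (ours, not the paper's pipeline, with hypothetical requirement class labels):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class Precision, Recall, and F1 from parallel label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Averaging these per-class scores (macro or weighted) yields the suite-level numbers that papers in this area typically report.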
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080200
Z. Stojanov, I. Hristoski, J. Stojanov, A. Stojkov
Abstract
The development and adoption of microservices, one of the most promising directions for developing heterogeneous distributed software systems, have been driven by dynamic changes in business and technology. In addition to the development of new applications, a significant aspect of microservices is the migration from legacy monolithic systems to microservice architectures. These development trends are accompanied by a growing number of primary and secondary publications addressing microservices, highlighting the need to systematize research at a higher level. The objective of this study is to comprehensively analyze secondary studies in the field of microservices from five aspects: (1) publishing trends, (2) quality trends of secondary studies, (3) research trends, (4) domains of implementation, and (5) future research directions. The study follows the guidelines for conducting a systematic literature review. The findings were derived from 44 secondary studies published between January 2016 and January 2023. These studies were organized and analyzed to address the five research questions pertaining to the study objectives. The findings suggest that the most promising research directions relate to the development, implementation, and validation of new approaches, methods, and tools that encompass all phases of the life cycle. Additionally, these research directions have applications in a variety of business and everyday-life domains. Recommendations for further literature reviews concern improving the quality assessment of selected studies, reviewing architecture quality attributes in more detail, investigating human-factor issues, and addressing certain maintenance and operation issues. From the methodological perspective, the recommendations include using qualitative methods from the social sciences for a more detailed analysis of selected studies and including gray literature, which brings in the real experience of industry experts.
Title: A Tertiary Study on Microservices: Research Trends and Recommendations
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080182
J. Robles, G. Borrego, R. Palacio, F. E. Castillo-Barrera
Abstract
Agile software development companies classified as very small entities (VSEs) face a new reality of remote development. Remote communication has generated many videos, because video calls are often recorded for later reference. The architectural knowledge (AK) contained in videos derived from virtual meetings is essential for companies facing the knowledge vaporization problem. However, only a few proposals in the literature can potentially manage AK in videos. This article proposes a solution for recovering the architectural knowledge contained in videos, using an ontology as a classification scheme. We base our proposal on the concept of architectural knowledge condensation and derive a condensation cycle from it. Finally, we validate our ontology for managing architectural knowledge following the Methontology guidelines. Implementing an ontology as a classification scheme is a step toward achieving the condensation of architectural knowledge in an agile development environment for VSEs.
Title: Supporting the Architectural Knowledge Condensation in a Co-Localized Agile Environment for Small Entities Using an Ontology
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080212
Qingsong Tang, Zhangyan Jiang, Bolin Pan, Jinting Guo, Wuming Jiang
Abstract
To better extract features from text instances with various shapes, this paper proposes a scene text detector that uses High Resolution Net (HRNet) and a spatial attention mechanism. Specifically, we use HRNetv2-W18 as the backbone network to extract text features from text instances with complex shapes. Considering that scene text instances are usually small, and to avoid overly small feature maps, we optimize HRNet with deformable convolution and the Smooth Maximum Unit (SMU) activation function so that the network retains more detail and location information of the text instance. In addition, a Text Region Attention Module (TRAM) is added after the backbone to make the network pay more attention to text location information, and a loss function is applied to TRAM so that the network learns the features better. The experimental results illustrate that the proposed method can compete with state-of-the-art methods. Code is available at: https://github.com/zhangyan1005/HR-DBNet.
Title: Scene Text Detection Using HRNet and Spatial Attention Mechanism
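The abstract does not give TRAM's internals; as a generic illustration of the spatial attention idea only (a simplified sketch under our own assumptions, not the paper's module), a sigmoid mask computed from per-location scores reweights a feature map so that likely text regions are emphasized:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention(features, scores):
    """Reweight an H x W feature map by a sigmoid attention mask.

    features, scores: H x W nested lists of floats; in a real detector the
    scores would come from a small convolutional head over backbone features.
    """
    return [[f * sigmoid(s) for f, s in zip(frow, srow)]
            for frow, srow in zip(features, scores)]
```

A supervised loss on the mask (as the abstract describes for TRAM) pushes high scores onto annotated text regions, so the gating suppresses background activations.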
Pub Date: 2024-01-24. DOI: 10.1134/s036176882308011x
K. I. Kostenko
Abstract
The concept of a regular memory area for an intelligent system (IS) is considered. The formalized description of the memory of a separate IS component is based on an infinite saturated binary tree. Knowledge is stored in special memory subareas in the form of semantic hierarchies. This knowledge constitutes the memory content, represented by its semantic structure. The structure integrates knowledge generated and transformed by knowledge morphisms and evolutions, which, in turn, are used to implement the IS goals. This system of knowledge morphisms and evolutions is used for IS modeling and allows one to describe the domains of initial data and values for these morphisms and evolutions using regular expressions. The family of these sets generalizes the system of classes of morphism domains developed for the knowledge formalisms and knowledge processing flowcharts used. Applying regular expressions to describe the memory structures of IS components makes it possible to construct high-level mathematical models for large and complex intelligent systems. These models allow one to develop distributed memory control schemes for knowledge processing flows and processes in intelligent systems.
Title: Regular Memory Structures and Operation Domains of Intelligent Systems
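One concrete way to read the combination of an infinite binary tree and regular expressions (our illustrative encoding, not the paper's formalism): address each tree node by a string over {0, 1} from the root, and describe a memory subarea by a regular expression over those addresses:

```python
import re

# A node of the infinite binary tree is addressed by a string over {0, 1}:
# "" is the root, "0" its left child, "01" that node's right child, etc.
def select_subarea(addresses, pattern):
    """Keep the addresses lying inside the subarea described by a regex."""
    area = re.compile(pattern)
    return [a for a in addresses if area.fullmatch(a)]
```

Under this encoding, a pattern such as `0[01]*` denotes the entire left subtree, so regular expressions can carve out whole memory subareas in which semantic hierarchies are stored.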
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080273
D. O. Zmeev, O. A. Zmeev, L. S. Ivanova
Abstract
This paper presents an extension for the Practice Library of the Essence language in the form of a practice for working with antipatterns. To represent antipatterns in a system, the Antipattern subalpha, its states, and checkpoints are proposed. To record data about an antipattern, the Antipattern Report work product and its levels of detail with checkpoints are proposed. To analyze the architecture of a system, the Inspect Architecture activity is proposed. The Fix Architecture activity represents actions for fixing architecture flaws. Code analysis is represented as the Review the Code activity, while the correction of deficiencies found during the analysis is represented as the Refactor the Code activity. The effect of the Antipattern subalpha on the state of the Software System alpha is analyzed. Some recommendations concerning the proposed activities are provided.
Title: Antipattern Practice for Essence Practice Library
Pub Date: 2024-01-24. DOI: 10.1134/s036176882308008x
J. G. Hernández-Calderón, E. Benítez-Guerrero, J. R. Rojano-Cáceres, Carmen Mezura-Godoy
Abstract
This work seeks to contribute to the development of intelligent environments by presenting an approach oriented toward identifying On-Task and Off-Task behaviors in educational settings. This is accomplished by monitoring and analyzing the user-object interactions that users manifest while performing academic activities with a tangible-intangible hybrid system in a university intelligent-environment configuration. Using a proposed framework, the Orange Data Mining tool, and the Neural Network, Random Forest, Naive Bayes, and Tree classification models, training and testing were carried out with the user-object interaction records of 13 students (11 for training and two for testing) to identify representative behavior sequences from the interaction records. The two models with the best results, despite the small amount of data, were the Neural Network and Naive Bayes.
Although a more significant amount of data is necessary to perform the classification adequately, the process exemplified the workflow so that it can later be fully incorporated into an intelligent educational system.
Title: Mining User-Object Interaction Data for Student Modeling in Intelligent Learning Environments
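Naive Bayes, one of the two best-performing models here, is simple enough to sketch directly. A minimal categorical version with Laplace smoothing (illustrative only, not the Orange implementation; the interaction feature values below are hypothetical):

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Train a categorical Naive Bayes model on tuples of feature values."""
    classes = Counter(labels)
    counts = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, y)][v] += 1
    return classes, counts

def predict_nb(model, row):
    """Return the most probable class under the naive independence assumption."""
    classes, counts = model
    total = sum(classes.values())
    best, best_lp = None, -math.inf
    for y, ny in classes.items():
        lp = math.log(ny / total)  # log prior
        for i, v in enumerate(row):
            c = counts[(i, y)]
            # Laplace smoothing keeps unseen values from zeroing the product
            lp += math.log((c[v] + 1) / (ny + len(c) + 1))
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

Its robustness with little training data is consistent with the study's observation that Naive Bayes performed well on only eleven students' records.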
Pub Date: 2024-01-24. DOI: 10.1134/s0361768823080091
Zazil Ibarra-Cuevas, Jose Nunez-Varela, Alberto Nunez-Varela, Francisco E. Martinez-Perez, Sandra E. Nava-Muñoz, Cesar A. Ramirez-Gamez, Hector G. Perez-Gonzalez
Abstract
Breast cancer is a serious threat to women’s health worldwide. Although the exact causes of this disease are still unknown, its incidence is known to be associated with risk factors: any genetic, reproductive, hormonal, physical, biological, or lifestyle-related conditions that increase the likelihood of developing breast cancer. This research aims to identify the most relevant risk factors among patients with breast cancer in a dataset by following the Knowledge Discovery in Databases process. To determine the relevance of the risk factors, this research applies two feature selection methods, the Chi-Squared test and Mutual Information, and uses seven classifiers to validate the results obtained.
Our results show that the most relevant risk factors are related to the patient’s age, her menopausal status, whether she had undergone hormonal therapy, and her type of menopause.
Title: Determination of Relevant Risk Factors for Breast Cancer Using Feature Selection
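The Chi-Squared feature selection named above scores a feature by its statistic of independence from the class over a contingency table; a minimal sketch on categorical data (ours, not the study's code):

```python
from collections import Counter

def chi_squared(feature, labels):
    """Chi-squared statistic between a categorical feature and the class.

    Larger values indicate stronger dependence, i.e. a more relevant feature.
    """
    n = len(labels)
    obs = Counter(zip(feature, labels))      # observed cell counts
    f_tot = Counter(feature)                 # row totals
    c_tot = Counter(labels)                  # column totals
    stat = 0.0
    for fv in f_tot:
        for cv in c_tot:
            expected = f_tot[fv] * c_tot[cv] / n
            stat += (obs[(fv, cv)] - expected) ** 2 / expected
    return stat
```

Ranking features by this statistic (or by mutual information) and keeping the top scorers is the usual filter-style selection step before training the classifiers.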