Leveraging COBIT 2019 to Measure the Accounting Software Implementation in High Schools for Better Transparency
Jennifer Felicia, J. Andry, Fransiskus Adikara, D. Y. Bernanda, Kevin Christianto
Pub Date: 2024-02-01 | DOI: 10.3844/jcssp.2024.218.228
Abstract: Technology is used in nearly every part of modern life, yet it cannot simply be used: it must also be evaluated to determine whether it is playing its role well. This research examines business processes in secondary schools, where the importance of technology is often underestimated even though it plays a significant role. Technology analysis in education usually focuses on applications or facilities tied to the learning process; this study instead analyzes the accounting applications that support the school's business processes and, ultimately, its business continuity. Implementing accounting applications helps schools measure gains in capability and transparency. The analysis uses the COBIT 2019 framework, which adds design-factor analysis so that the audit is conducted according to the school's priorities, focus, and strategy. Data were collected through observation and interviews with foundation administrators and school directors who hold authority in the school and had previously granted permission for the research. The results show a low current capability level against a high expected capability level (level 5) derived from the design factors. Several recommendations are provided to help secondary schools reach the expected level in each domain.
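The capability-gap reading above can be illustrated with a minimal sketch. The domain abbreviations are COBIT 2019's top-level domains, but the current levels here are invented for illustration, since the paper reports only a low current level against an expected level of 5:

```python
# Illustrative COBIT-style capability gap analysis. Domain abbreviations
# follow COBIT 2019 (EDM/APO/BAI/DSS/MEA); the current levels are assumed,
# not taken from the paper.
expected_level = 5
current = {"EDM": 2, "APO": 1, "BAI": 2, "DSS": 1, "MEA": 2}  # assumed levels
gaps = {dom: expected_level - lvl for dom, lvl in current.items()}
priority = max(gaps, key=gaps.get)  # largest capability gap gets attention first
```

With ties broken by insertion order, the largest-gap domain here is APO; in practice the ordering would come from the design-factor weighting, not raw gap size alone.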
Slime Mould Reproduction: A New Optimization Algorithm for Constrained Engineering Problems
Rajalakshmi Sakthivel, Kanmani Selvadurai
Pub Date: 2024-01-01 | DOI: 10.3844/jcssp.2024.96.105
Abstract: The Slime Mould Reproduction (SMR) algorithm is a new meta-heuristic optimization technique inspired by the reproductive dynamics of slime molds, in particular the balance these organisms strike between local and global spore dispersal. By replicating this balance, SMR navigates between exploration and exploitation phases in search of optimal solutions across diverse problem domains. The algorithm was evaluated on three constrained engineering problems: gear train design, three-bar truss design, and welded beam design. A comparative study indicated that SMR outperformed Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Differential Evolution (DE), the Grasshopper Optimization Algorithm (GOA), and the Whale Optimization Algorithm (WOA) on these problems. In line with the No Free Lunch (NFL) theorem, the performance of any optimization algorithm depends on the particular problem it addresses; nevertheless, SMR's consistent results in benchmark tests mark it as a strong contender among optimization algorithms and a useful addition to the toolbox of bio-inspired methods. Its potential scope extends from computational biology to industrial design; future work may refine SMR's core procedures, borrow insights from a broader range of biological behaviors, and develop a binary version of the algorithm to widen its applicability.
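The exploration/exploitation balance described above can be sketched as a toy spore-dispersal optimizer. This is not the published SMR algorithm — just a minimal illustration of alternating global (exploration) and local (exploitation) sampling, applied to a 2-D sphere function:

```python
import random

def smr_style_optimize(f, bounds, n_spores=20, iters=200, p_global=0.3, seed=0):
    """Toy optimizer in the spirit of spore dispersal: each spore lands
    either globally (uniform over the search space, exploration) or
    locally around the current best (exploitation). Illustrative only."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = 2
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_val = f(best)
    for _ in range(iters):
        for _ in range(n_spores):
            if rng.random() < p_global:          # global dispersal
                cand = [rng.uniform(lo, hi) for _ in range(dim)]
            else:                                 # local dispersal near the best
                step = (hi - lo) * 0.05
                cand = [min(hi, max(lo, x + rng.gauss(0, step))) for x in best]
            val = f(cand)
            if val < best_val:
                best, best_val = cand, val
    return best, best_val

sphere = lambda x: sum(v * v for v in x)
best, val = smr_style_optimize(sphere, (-5.0, 5.0))
```

The `p_global` parameter plays the role of the local/global dispersal balance; the real algorithm's update rules and constraint handling are described in the paper itself.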
LT-LBP-Based Spatial Texture Feature Extraction with Deep Learning for X-Ray Images
Pankaja Lakshmi P., Sivagami M.
Pub Date: 2024-01-01 | DOI: 10.3844/jcssp.2024.106.120
Automated Medical Image Captioning with Soft Attention-Based LSTM Model Utilizing YOLOv4 Algorithm
Paspula Ravinder, Saravanan Srinivasan
Pub Date: 2024-01-01 | DOI: 10.3844/jcssp.2024.52.68
Abstract: Medical image captioning is a prominent field today. Interpreting and captioning medical images can be time-consuming and costly, often requiring expert support, and the growing volume of medical images makes it challenging for radiologists to handle the workload alone. Automating medical image captioning addresses the issues of cost and time while assisting radiologists in improving the reliability and accuracy of the generated captions; it also allows new, less experienced radiologists to benefit from automated support. Despite previous efforts, some issues remain unresolved, including overly detailed captions, difficulty identifying abnormal regions in complex images, and low accuracy and reliability of some generated captions. To tackle these challenges, we propose a new deep learning model tailored to captioning medical images, designed to extract features from images and generate meaningful sentences about the identified defects with high accuracy. The approach uses a multi-model neural network that closely mimics the human visual system and automatically learns to describe image content. The method consists of two stages; in the first, the information extraction phase, we employ the YOLOv4 algorithm.
Fuzzy Logic-Based Quantification of Usability Expectation for M-Commerce Mobile Application by Using GQM and ISO 9241-11
Manish Mishra, Reena Dadhich
Pub Date: 2024-01-01 | DOI: 10.3844/jcssp.2024.1.9
Abstract: Fuzzy logic-based quantification of usability expectation measures the usability of an m-commerce mobile application using fuzzy logic principles. Usability analysis reveals the user experience of an application by examining users' expectations and preferences, and fuzzy logic is well suited to quantifying such inherently imprecise judgments. The assessment accounts for the user's needs, preferences, and expectations; the user's ability to understand and interact with the application; the degree to which the application meets expectations; and overall satisfaction. This helps identify areas for improvement, enabling developers to make the changes needed for a better user experience. This study designs a usability metric framework and then quantifies the overall usability of an m-commerce mobile application with fuzzy logic. The framework is based on the Goal-Question-Metric (GQM) approach and provides a comprehensive, systematic way to design metrics for the qualitative aspects of mobile applications. It has been developed and tested in an m-commerce context and provides a set of measurable criteria to quantify m-commerce applications against the standard. The evaluation results can then be used to improve m-commerce applications and ensure the user experience is optimized.
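Fuzzy-logic quantification of this kind can be sketched with triangular membership functions and weighted-centroid defuzzification. The criteria names, membership shapes, and center values below are illustrative assumptions, not the paper's actual GQM metrics:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def usability_score(ratings):
    """Fuzzify each 0-10 rating into low/medium/high, then defuzzify with
    a weighted centroid. Term centers (2.5 / 5 / 9) are illustrative."""
    centers = {"low": 2.5, "medium": 5.0, "high": 9.0}
    scores = []
    for r in ratings.values():
        mu = {
            "low": tri(r, -0.1, 0.0, 5.0),      # shoulders extended past the
            "medium": tri(r, 2.5, 5.0, 7.5),    # 0-10 range so the endpoints
            "high": tri(r, 5.0, 10.0, 10.1),    # still get full membership
        }
        total = sum(mu.values())
        scores.append(sum(mu[k] * centers[k] for k in mu) / total)
    return sum(scores) / len(scores)
```

A real GQM-driven framework would derive the linguistic terms and weights from the questions under each goal rather than averaging criteria uniformly.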
Reconstruction Investigation Model for Database Management Systems
A. Alraddadi
Pub Date: 2024-01-01 | DOI: 10.3844/jcssp.2024.33.43
Abstract: Cybercrime against database systems has increased, harming their confidentiality, integrity, and availability. Most organizations apply several security layers to detect and prevent database crimes, and Database Forensics (DBF) plays a very important role in discovering who the criminal is, when the crime was committed, and in which part of the database it occurred. Several forensic models have been proposed for the DBF field to identify, collect, preserve, examine, analyze, and document database crimes; however, most focus on specific database systems because of the variety of database infrastructures and the multidimensional nature of database systems. The most important part of DBF is the analysis process used to investigate the captured data and uncover the attack. This study therefore proposes an Integrated Reconstruction Investigation Model (IRIM) for database forensics built with a metamodeling method. It consists of two main processes: the examining process, and the discovering and reporting process. A real scenario was used to validate the model's effectiveness; the results show that it can detect database cybercrimes and allows forensic practitioners to capture and analyze database crimes efficiently.
Cybersecurity Mechanism for Automatic Detection of IoT Intrusions Using Machine Learning
Cheikhane Seyed, Mbaye Kebe, Mohamed El Moustapha El Arby, El Benany Mohamed Mahmoud, Cheikhne Mohamed Mahmoud Seyidi
Pub Date: 2024-01-01 | DOI: 10.3844/jcssp.2024.44.51
Abstract: This article proposes an ML-based cybersecurity mechanism to optimize the detection of intrusions against Internet of Things (IoT) objects. The approach brings together several learning methods, namely supervised, unsupervised, and reinforcement learning, within the same canvas, with the objective of choosing the one best suited to classifying and predicting attacks while minimizing the cost of learning them. The proposed model uses a modular design to ease implementation of the intrusion detection engine. A first meta-learning module collects metadata about existing algorithmic parameters and ML learning methods; a second module applies a cost-sensitive learning technique so that the model is informed of the cost of each intrusion detection scenario. Among the ML classification algorithms, the one whose learning of intrusions is least expensive, in terms of speed and quality of prediction, is then selected, making it possible to control the acceptable level of risk for each type of cyber-attack. The solution was simulated with the Weka tool, producing results against which model performance can be evaluated: a classification quality rate of 93.66% and a classification consistency rate of 0.882 (close to 1), demonstrating the accuracy and performance of the model.
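Cost-sensitive model selection of the sort described can be sketched by scoring each candidate classifier's confusion matrix against a misclassification cost matrix. The cost values and confusion counts below are assumptions for illustration, not the paper's Weka results:

```python
# Choosing among candidate classifiers by expected misclassification cost,
# in the spirit of cost-sensitive learning (all numbers are illustrative).
def expected_cost(confusion, cost):
    """confusion[i][j] = count of class-i samples predicted as class j;
    cost[i][j] = penalty for that outcome (0 on the diagonal)."""
    total = sum(sum(row) for row in confusion)
    return sum(confusion[i][j] * cost[i][j]
               for i in range(len(cost)) for j in range(len(cost))) / total

# Classes: 0 = benign, 1 = intrusion. Missing an intrusion is assumed to
# cost 10x more than raising a false alarm.
cost = [[0, 1], [10, 0]]
candidates = {
    "model_a": [[90, 10], [5, 95]],   # noisier, but misses few intrusions
    "model_b": [[98, 2], [15, 85]],   # quieter, but misses more intrusions
}
best = min(candidates, key=lambda m: expected_cost(candidates[m], cost))
```

Under this cost model the noisier detector wins, which is the point of cost-sensitive selection: the "acceptable level of risk" is encoded in the cost matrix rather than in raw accuracy.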
Analysis of Student Mental Health Dataset Using Mining Techniques
Yemima Monica Geasela, D. Y. Bernanda, Johanes Fernandes, J. Andry, Christian Kurniadi Jusuf, Samuel Winata, Shierly Everlin
Pub Date: 2024-01-01 | DOI: 10.3844/jcssp.2024.121.128
Abstract: This study applies a decision tree model in RapidMiner to a Kaggle dataset of 200 student records. Among these, 70 students reported mental health issues while 130 did not; strikingly, 58 of the 70 students with mental health concerns do not seek assistance from professionals. The study underscores the pressing underutilization of mental health services among students and offers practical responses: enhancing awareness and education, improving access to mental health services, providing peer support, and addressing underlying issues. The research design covers data collection methods that maintained ethical standards and the application of the decision tree model for analysis. The study's contribution lies in identifying the prevalence of students with mental health issues who do not seek help and proposing solutions to this critical issue.
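The headline proportions reported above follow directly from the stated counts. This reproduces only the arithmetic from the abstract, not the RapidMiner decision-tree analysis itself:

```python
# Counts as reported in the abstract (200 records, 70 with issues,
# 58 of those not seeking professional help).
total_students = 200
with_issues = 70
not_seeking_help = 58

prevalence = with_issues / total_students        # share reporting issues: 35%
untreated_rate = not_seeking_help / with_issues  # share of those not seeking help: ~82.9%
```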
Pub Date : 2024-01-01DOI: 10.3844/jcssp.2024.10.32
Thomas Nagunwa
: Attackers are increasingly using Name Server IP Flux Networks (NSIFNs) to run the domain name services of their phishing websites in order to extend the duration of their phishing operations. These networks host a name server that manages the Domain Name System (DNS) records of the websites on a network of compromised machines with frequently changing IP addresses. As a result, blacklisting the machines does little to stop the services, lengthening their lifespan and that of the websites they support. Existing solutions for identifying websites hosted in these networks suffer from high detection delays and rely on few, insufficiently varied detection features, making them susceptible to detection evasion. This study proposes a novel set of highly diverse features based on DNS, network, and host behaviors for fast and highly accurate detection of phishing websites hosted in NSIFNs using a Machine Learning (ML) approach. The predictive performance of the features was assessed with a variety of traditional and deep learning ML algorithms on binary and multi-class classification tasks, achieving best accuracy rates of 98.59% and 90.41%, respectively. Our approach is a crucial step toward monitoring NSIFN components to mitigate phishing attacks efficiently.
{"title":"Detection of Phishing Websites Hosted in Name Server Flux Networks Using Machine Learning","authors":"Thomas Nagunwa","doi":"10.3844/jcssp.2024.10.32","DOIUrl":"https://doi.org/10.3844/jcssp.2024.10.32","url":null,"abstract":": Attackers are increasingly using Name Server IP Flux Networks (NSIFNs) to run the domain name services of their phishing websites in order to extend the duration of their phishing operations. These networks host a name server that manages the Domain Name System (DNS) records of the websites on a network of compromised machines with frequently changing IP addresses. As a result, blacklisting the machines has less impact on stopping the services, lengthening their lifespan and that of the websites they support. High detection delays and the use of fewer, lesser varied detection features limit the proposed solutions for identifying the websites hosted in these networks, making them more susceptible to detection evasions. This study suggests a novel set of highly diverse features based on DNS, network, and host behaviors for fast and highly accurate detection of phishing websites hosted in NSIFNs using a Machine Learning (ML) approach. Using a variety of traditional and deep learning ML algorithms, the prediction performance of our features was assessed in the context of binary and multi-class classification tasks. Our approach achieved optimal accuracy rates of 98.59% and 90.41% for the binary and multi-class classification tasks, respectively. Our approach is a crucial step toward monitoring NSIFN components to mitigate phishing attacks efficiently.","PeriodicalId":40005,"journal":{"name":"Journal of Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139125134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
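One family of DNS-behavior features the abstract alludes to can be illustrated by measuring how quickly a domain's name-server IPs change (the "IP flux" that gives NSIFNs their name). The sketch below is a hypothetical example of such a feature; the record format, thresholds, and function name are assumptions for illustration, not the paper's actual feature set.

```python
# Hedged sketch: derive a simple NS IP-flux feature from passive DNS
# observations. Input format and feature names are illustrative only.
from collections import defaultdict

def ns_ip_flux_features(lookups):
    """lookups: list of (timestamp_sec, ns_hostname, ip) seen for one domain."""
    ips_per_ns = defaultdict(set)
    first, last = {}, {}
    for t, ns, ip in lookups:
        ips_per_ns[ns].add(ip)
        first.setdefault(ns, t)
        last[ns] = t
    distinct_ips = sum(len(s) for s in ips_per_ns.values())
    # Longest per-NS monitoring window, in seconds
    window = max((last[ns] - first[ns] for ns in ips_per_ns), default=0)
    # Flux rate: distinct name-server IPs observed per hour of monitoring
    flux_rate = distinct_ips / (window / 3600) if window else float(distinct_ips)
    return {"distinct_ns_ips": distinct_ips, "ns_ip_flux_per_hour": flux_rate}

# Three distinct IPs for one name server within a single hour
obs = [(0, "ns1.example.com", "203.0.113.1"),
       (1800, "ns1.example.com", "203.0.113.7"),
       (3600, "ns1.example.com", "198.51.100.9")]
print(ns_ip_flux_features(obs))
```

A benign name server typically keeps the same small IP set for long periods, so a high flux rate like this is exactly the kind of behavioral signal that, combined with network- and host-level features, feeds the classifiers the study evaluates.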