Background: The round robin (RR) CPU scheduling algorithm is the most popular scheduling technique for time-sharing operating systems, and its efficiency depends heavily on the choice of time slice: a very large time slice makes RR behave like First-Come-First-Served (FCFS) scheduling, while an extremely small one makes it behave like processor sharing. Existing research proposed the Improved Round Robin with Highest Response Ratio Next (IRRHRRN) algorithm, which uses the response ratio with a predefined time quantum of 10 ms with the major aim of avoiding starvation. However, IRRHRRN favors processes with shorter burst times over those with longer burst times and disregards process arrival times, which still leads to starvation. Aim: This study improves on the IRRHRRN algorithm by proposing the Modified Round Robin with Highest Response Ratio Next (MRRHRRN) CPU scheduling algorithm using a dynamic time quantum, in order to reduce the problem of starvation. Method: A dynamic method of determining the time quantum was adopted. Results: The proposed algorithm was compared with four existing algorithms, namely Standard Round Robin (RR), Improved Round Robin (IRR), An Additional Improvement in Round Robin (AAIRR), and Improved Round Robin with Highest Response Ratio Next (IRRHRRN), and it produced promising results for processes with non-zero arrival times: an Average Waiting Time of 35407.6 ms, an Average Turnaround Time of 36117.6 ms, an Average Response Time of 10894.8 ms, and 301 context switches.
{"title":"Modified Round Robin with Highest Response Ratio Next CPU Scheduling Algorithm using Dynamic Time Quantum","authors":"Suleiman Ebaiya Abubakar","doi":"10.56471/slujst.v6i.363","DOIUrl":"https://doi.org/10.56471/slujst.v6i.363","url":null,"abstract":"Background: The most popular time-sharing operating systems scheduling technique, whose efficiency heavily dependent on time slice selection, is the round robin CPU scheduling algorithm. The time slice works similar to the First-Come-First-Serve (FCFS) scheduling or processor sharing algorithm if it is large or extremely too small. Some of the existing research papers have an algorithm called Improved Round Robin with Highest Response Ratio Next (IRRHRRN) which made use of response ratio with a predefined time quantum of 10ms with the major aim of avoiding starvation. However, the IRRHRRN algorithm favors processes with shorter burst time than the ones with longer burst time, and gave no regard to the process arrival time, thus leading to starvation. Aim: This study tries to improve on the IRRHRRN algorithm by proposing the Modified Round Robin with Highest Response Ratio Next (MRRHRRN) CPU Scheduling Algorithm using Dynamic Time Quantum in order to reduce the problem of starvation. Method: Dynamic method of determining the time quantum was adopted. Results: The proposed algorithm was compared with four other existing algorisms such as Standard Round Robin (RR), Improved Round Robin (IRR), An Additional Improvement in Round Robin (AAIRR), and the Improved Round Robin with Highest Response Ratio Next (IRRHRRN) and it provided some promising results in terms of the Average Waiting Time of 35407.6 ms, Average Turnaround Time of 36117.6 ms, Average Response Time of 10894.8 ms and Number of Context Switch of 301for the Non-Zero Arrival Times Processes","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"354 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122763021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rapid advancement in the electronics and telecommunications industries has been observed: communication gadgets are becoming more powerful and more widely adopted, and the demand for high-data-rate transmission keeps increasing. This has led to spectrum scarcity in Radio Frequency (RF) systems. Consequently, other technical means must be explored to meet the growing demand for higher data rates. Free Space Optical Communication (FSOC) is a promising technology that offers one such means. However, the major technical challenge in FSOC systems is that their performance is limited by atmospheric impairments such as absorption, scattering, and turbulence caused by rain, cloud, snow, wind, dust, aerosol, and fog. This research addresses the beam divergence challenge due to fog and raindrops in a tropical climate using a wider Field of View (FoV) technique. The results showed that, for the fog measurement with a mini solar panel at the receiver, the system achieved a higher SNR of 60 dB at a corresponding BER of 10^-4, while with the photodiode the system achieved an SNR of 38 dB at the same BER. For the rain measurement the same procedure was adopted, and the system achieved higher SNRs of 90 dB and 58 dB at the same BER using the solar panel and photodiode respectively. The results also showed that fog attenuates the optical signal more than rain by 34% on average.
{"title":"Beam Divergence Loss Mitigation in Free Space Optical Communication Channel Using Field of View Technique","authors":"Sule Lamido","doi":"10.56471/slujst.v6i.344","DOIUrl":"https://doi.org/10.56471/slujst.v6i.344","url":null,"abstract":"A rapid advancement in the electronics and telecommunications industries has been observed. As a result, Communication gadgets are becoming more powerful and are more widely adopted. The demand for high data rate transmission is increasing. This has led to spectrum scarcity in Radio Frequency (RF) systems. Consequently, it is necessary to explore other technical means to meet the increasing demand for higher data rates. Free Space Optical Communication (FSOC) is a promising technology that offers one of such means. However, the major technical challenge in the FSOC systems is that, their performance is limited by atmospheric impairments such as: absorption, scattering and turbulence caused by rain, cloud, snow, wind, dust, aerosol, and fog. This research is aimed to addressed beam divergence challenge due to fog and raindrop in a tropical climate using wider field of view technique (FoV). The result showed that, for the fog measurement using mini solar panel at the receiver, the system achieved higher SNR of 60 dB at corresponding BER of 10-4. While, with the photodiode, the system achieved an SNR of 38 dB at same BER. For the rain measurement the same procedure was adopted and the system achieved higher SNR of 90 dB and 58 dB at same BER utilizing both solar panel and photodiode respectively. The result showed that, the fog impairment attenuates optical signal more than of rain by 34 % averagely","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132216000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Two-phase optimization of the Virtual Machine Placement (VMP) problem considers both an online incremental VMP (iVMP) phase, in which newly arriving dynamic Virtual Machine (VM) requests are attended to, and an offline VMP reconfiguration (VMPr) phase, which recalculates the placement. In this scheme the iVMP phase is where VMs are created, changed, or destroyed at runtime, while the VMPr phase focuses on raising the quality of the solutions produced by the iVMP. Several studies in the literature have addressed the VMP problem; however, the methods used tend to over-forecast and produce long runs of a linear trend, which degrades the prediction and yields less optimal solutions. Objective: The proposed Extreme Learning Machine prediction-based triggering method for two-phase VMP in cloud computing data centers combines the advantages of the online (dynamic) and offline (static) VMP formulations and optimizes four objective functions: the length of the reconfiguration process, energy consumption, resource utilization, and economic cost. This study suggests a new strategy for deciding when to trigger the VMPr phase. Results: The method improves the accuracy of the predicted requests and reduces the total economic penalties for Service Level Agreement (SLA) violations. An experimental comparison with the existing approach was conducted using 400 cases. Conclusion: The results demonstrated that, compared to the benchmark approach, the proposed work obtained a minimum cost function with a 10.5% improvement.
{"title":"Two-Phase Virtual Machine Placement in Cloud Computing Data Centers Using Extreme Learning Machine Prediction-Based Triggering Method","authors":"Nafiu Musa Muhaammad","doi":"10.56471/slujst.v6i.359","DOIUrl":"https://doi.org/10.56471/slujst.v6i.359","url":null,"abstract":"Background: Two-phase Optimization of Virtual Machine Placement (VMP) Problem considers both the Online Incremental VMP (iVMP) phase in which the new arrival of dynamic requests of Virtual Machines VMs are attended to and the Offline VMP reconfiguration (VMPr) phase that performs placement recalculation. In the two-phase scheme, the first part of the two-phase approach is the iVMP, where virtual machines (VMs) can be built, changed, or destroyed at runtime. While the second phase focuses on raising the standard of solutions produced by the iVMP, several studies have been done in different literature to solve the VMP problem. However, the methods used tend to be over-forecast and have long runs of a linear trend. This affects the prediction and produces a less optimal solution. Objective: The following four objective functions are optimized using the proposed Extreme Learning Machine Prediction-Based Triggering Method for Virtual Machine Placement in Cloud Computing Datacenters in Two-Phases, which combines the advantages of both online (dynamic) and static (offline) VMP formulations: the length of the reconfiguration process, the amount of energy used, the way resources are used, and the financial expenses. This study suggests a brand-new strategy for deciding when to start the VMP reconfiguration phase. Results: The Method provides more accuracy to the predicted requests as well as reduces the total economic penalties for service Level Agreement(SLA) violations. An experimental comparison with the existing approach is conducted utilizing 400 cases. Conclusion: The results demonstrated that, in comparison to the benchmark approach, the proposed work obtained a minimum cost function with a 10.5% improvement","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130248181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study investigated information resource security threats in university libraries in Katsina State. The objectives of the paper were to identify the types of information resource threats present and the measures taken to address them in university libraries in Katsina State. The paper employed a qualitative methodological approach using a narrative-based design. The population of the study comprised nine (9) participants selected from the libraries under study, and a narrative-based analysis was adopted to analyze the data collected. The study established that theft of library collections, mutilation of library collections, disruptive patron behavior, natural and man-made disasters, poorly secured premises, library staff attitudes, and ignorance of security issues were the types of information resource threats in the university libraries in Katsina State. The study also established that window and door protection, marks of ownership, checking in and checking out, recruitment of more library staff and security personnel, and use of telecommunication systems (e.g., electronic access control, CCTV, and RFID) were the measures taken to address these threats. Based on the findings, the paper recommends that university libraries adopt a written collection development policy that covers security measures, so that new staff and the academic community know what is in place and can follow it strictly. The study further concludes that libraries should provide perimeter alarm systems, electromagnetic control systems, firewall installations, radio frequency identification systems, and electronic access control, and that the security entrance of every university library should be managed by well-trained, qualified professionals to protect resources from any act of mismanagement or security threat.
{"title":"Information Resources Security Threat in University Libraries in Katsina State","authors":"Abdulkadir Ahmed Idris","doi":"10.56471/slujst.v6i.389","DOIUrl":"https://doi.org/10.56471/slujst.v6i.389","url":null,"abstract":"This study was carried out to investigate information resource security threat in university libraries in Katsina State. The objectives of the paper were to identify the types of information resources threat available and the measures taken to solve the issue of information resource security threat in university libraries in Katsina state. The paper employed a qualitative methodological approach using a narrative-based design. The population of the study comprises nine (9) participants selected from the libraries under study. A narrative-based analysis was adopted to analyze the data collected. The study established that theft of library collections, mutilations of the library collection, disruptive patrons’ behavior,natural disaster and artificial disaster, poorly secured premises, library staff attitude and ignorance of security issues were types of information resources threat in the university libraries in Katsina State. The study also established that windows and doors protection, a mark of ownership, checking in and checking out, recruitment of more library staff and security personnel, and use of telecommunication system e.g.,electronic access control, CCTV, RFID were the measures taken to solve the issue of information resources threat in the university libraries in Katsina State. Based on the findings of the study, the paper therefore recommends that university libraries should have a written collection development policy that covers security measures that will help other new staff and the academic community to know what is on the ground for them to follow strictly. The study further concludes that, the libraries should provide a perimeter alarm system, electro-magnetic control system, firewall installation, radio frequency identification system, and electronic access control in the university libraries, and security entrance of every university library should be managed by well-trained and qualified professionals to save resources from any act of mismanagement or security threat of information resources","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"438 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127366597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the next generation of wireless networks, small base stations are being adopted to increase system speed and capacity. However, this adoption increases the likelihood of inter-cell interference, which can degrade system performance. This report discusses different interference management approaches that can mitigate these effects. A few techniques are already in use in 4G systems, while others have been suggested but not yet put into practice. The purpose of this study is to conduct an analytical examination of the various interference management methods that could be utilized in Heterogeneous Networks (HetNets), focusing specifically on HetNets with a high density of small cells, as these are expected to be part of the network topology for 5G networks and beyond. The study implemented time-domain enhanced Inter-Cell Interference Coordination (eICIC) by establishing coordination between macro and small base stations. The results showed that robust communication with low latency is essential and that backhaul connections play a critical role in achieving it; consequently, the network's sum-rate was increased.
{"title":"Sum-Rate Systematic Intercell Interference Coordination Techniques for5GHeterogeneous Networks","authors":"H. Bello","doi":"10.56471/slujst.v6i.355","DOIUrl":"https://doi.org/10.56471/slujst.v6i.355","url":null,"abstract":"In the next generation of wireless networks, small base stations are being adopted to increase the system's speed and capacity. However, this adoption increases the likelihood of inter-cell interference, which can degrade performance the system. This report discusses different approaches to managing interference that can mitigate these effects. Currently, a few techniques are already in use in 4G systems, while others have been suggested but not put into practice. The purpose of this study is to conduct an analytical examination of the different and various interference management methods that could be utilized in Heterogeneous Networks (HetNets), focusing specifically on HetNets with high density of small cells as they are expected to be part of the network topology for 5G networks and beyond. The study involved implementing time-based enhanced inter-cell interference coordination (eICIC) by establishing coordination between macro and small base stations. The results showed that robust communication with low latency is essential, and backhaul connections play a critical role in achieving this. Consequently, the network's sum-rate was increased.","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114511861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Free Space Optical Communication (FSOC) is an optical technology with a great chance to complement traditional wireless communication technologies. It offers essential advantages over Radio Frequency (RF), such as low power consumption, no frequency restrictions, no electromagnetic interference, security, and a significant increase in bandwidth, and FSOC systems have also proven a worthy substitute for fiber optics. However, they are usually installed above ground level and are therefore exposed to prevailing weather conditions such as haze, fog, and rain, which negatively affect optical signal transmission, and there is a shortage of research on using FSO systems in tropical regions. Accurate free-space optical channel modeling therefore helps telecommunication operators engineer and appropriately manage their networks. This research developed a free space optical communication channel model to mitigate the effects of atmospheric attenuation by estimating the attenuation induced by haze and by rainfall on an FSOC link in the Zaria geographical area, using two years of measured visibility and rainfall-rate data for the study location obtained from the Nigerian Meteorological Agency (NIMET) Zaria station, located at the Nigerian College of Aviation Technology (NCAT), Zaria, and from the Center for Energy Research and Training (CERT), Ahmadu Bello University (ABU), Zaria. The performance of the FSOC system was analyzed and evaluated through Link Margin (LM) analysis using the design specifications of a commercial optical transceiver (TereScope 5000). The haze-induced attenuation obtained at 850 nm, 950 nm, and 1550 nm was 5.934 dB, 6.402 dB, and 3.152 dB respectively, showing that the 1550 nm wavelength suffers the least haze-induced attenuation at a propagation link distance of 6 km. Furthermore, the LM analysis of the combined effects of Geometrical Attenuation (Gatt), Haze-Induced Attenuation (HIA), and Rain-Induced Attenuation (RIA) shows that transmitting at 1550 nm greatly improves optical transmission compared with 850 nm and 950 nm. Overall, the research concludes that the free space optical communication system is robust enough to sustain wireless communication during the worst weather conditions in Zaria throughout the year for a link range of up to 6 km.
{"title":"Design and Performance Evaluation of Free Space Optical Communication Link in Zaria, Nigeria","authors":"A. Bukar","doi":"10.56471/slujst.v6i.357","DOIUrl":"https://doi.org/10.56471/slujst.v6i.357","url":null,"abstract":"Free Space Optical Communication (FSOC) is an optical technology with a great chance to complement traditional wireless communication technologies. It offers essential advantages compared to Radio Frequency (RF), such as low power consumption, no frequency restrictions, no electromagnetic interference, security, and a significant increase in bandwidth. Similarly, they have proven a worthy substitute for fiber optics with notable advantages. FSOC systems are usually installed above ground level and are therefore exposed to prevailing weather conditions such as haze, fog, rain, etc. which negatively affect the optical signal transmission. There is a shortage of research on using FSO systems in tropical regions. For this reason, accurate free space optical communication channel modeling helps telecommunication operators to engineer and appropriately manage their networks. Therefore, this research work developed Free Space Optical Communication Channel Model to mitigate the effects of atmospheric attenuations by estimating the specifically induced attenuation caused by both haze and rainfall rates on the FSOC link in the Zaria geographical area using two years of measured visibility data and rainfall rates data of the study location obtained from Nigerian Meteorological (NIMET) agency, Zaria station, locatedat the Nigeria College of Aviation Technology (NCAT) Zaria and Center for Energy Research and Training (CERT), Ahmadu Bello University (ABU), Zaria. The performance of the FSOC system is analyzed and evaluated through Link Margin (LM) analysis by using the design specifications of a commercial optical transceiver (TereScope 5000). The haze-induced attenuation obtained at 850nm, 950nm, and 1550nm is 5.934dB, 6.402dB, and 3.152dB respectively. Therefore, the result shows that the 1550 nm wavelength has minimum haze-induced attenuation compared to 850nm and 950nm at a propagation link distance of 6km. Furthermore, from the results of the performance evaluation of the LM analysis for the combined effects of Geometrical Attenuation (Gatt), Haze-Induced Attenuation (HIA), and Rain-Induced Attenuation (RIA), the result shows that operating a 1550nm wavelength in transmission power greatly improves optical transmission when compared with 850nm and 950nm wavelengths. Generally, the overall outcome of the research concludes that the free space optical communication system has the robustness to handle successful wireless communication during the worst weather conditions in Zaria throughout the year for a link range of up to 6km","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125240116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While the Constant Modulus Algorithm (CMA) is the most often used blind signal equalization or adaptation technique, it converges slowly and produces excessive Side Lobe Level (SLL) radiation, which wastes energy and interferes with other equipment; interfering signals may also be picked up by the side lobes, raising the noise level in the receiver. A normalized constant-modulus method with an adjustable step size was devised, which considerably increased the convergence rate, and the Blackman window was applied to the CMA to lower the SLL. The designed beamformer improved the convergence rate by 40%, as observed from lower Mean Square Error (MSE) versus Signal to Interference plus Noise Ratio (SINR) values, making it a viable alternative in environments where channel conditions are constantly changing. Furthermore, the Blackman window minimized the Peak Side Lobe Level (PSLL) and yielded a 5 dB improvement over CMA in SLL rejection gain. Compared to the conventional CMA, the Improved CMA (ICMA) displayed the highest reduction in PSLL with quick convergence, and saved considerable energy by minimizing the side lobes. These characteristics make the ICMA a potential choice for advanced wireless applications.
{"title":"An Improved Constant Modulus Algorithm for Blind Signal Adaptation in Wireless Communications","authors":"Emmanuel Adotse Otsapa","doi":"10.56471/slujst.v6i.364","DOIUrl":"https://doi.org/10.56471/slujst.v6i.364","url":null,"abstract":"While the Constant Modulus Algorithm (CMA) is the most often used blind signal equalization or adaptation technique, it converges slowly and produces excessive Side Lobe Level (SLL) radiation, which wastes energy and interferes with other equipment. Additionally, interfering signals may be picked up by side lobes, increasing the noise level in the receiver. A normalized constant-modulus method with adjustable step size was devised, which considerably increased its convergence rate. The Blackman window was also applied to the CMA to lower the SLL. The designed beamformer improved convergence rate by 40%, as observed by lower Mean Square Error (MSE) to Signal to Interference plus Noise Ratio (SINR) values. As a result, the beam former is a viable alternative in an environment where channel conditions are constantly changing. Furthermore, the use of the Blackman window had the ability to minimize the Peak Side Lobe Level (PSLL) and resulted in a 5 dB improvement over CMA for the SLL rejection gain. As compared to the conventional CMA, the Improved CMA displayed the highest reduction in PSLL with quick convergence capabilities and saved a lot of energy wastage due to side lobes minimization. These characteristics make the ICMA a potential choice for advanced wireless applications","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135949922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diabetes Mellitus is a chronic disease and one of the deadliest. It increases the risk of long-term complications, including heart disease and kidney failure, among others. Undoubtedly, Diabetes Mellitus patients may live longer and lead healthier lives if the disease is detected early. Over the years, several efforts have focused on more accurate and earlier detection procedures to save patients with Diabetes Mellitus. Interestingly, with the application of information technology to disease diagnosis and therapy management, more attention has been paid to using machine learning for the prediction and early detection of Diabetes Mellitus. Unfortunately, determining the most appropriate machine learning algorithm with the best performance in terms of accuracy remains a challenge. This study proposes a framework for Diabetes Mellitus detection using machine learning algorithms. The proposed framework was evaluated using K-Nearest Neighbor (KNN), Random Forest (RF), and Logistic Regression (LR). Extensive experiments were conducted to analyze the performance of the framework on four distinct clinical datasets. To ensure a robust, web-compatible framework, Python and its popular data science packages Pandas, NumPy, Seaborn, Matplotlib, and Pickle were used for the implementation. Significantly, using the standard datasets obtained from the National Institute of Diabetes and Kidney Disease, Random Forest predicted Diabetes Mellitus with the best accuracy of 93.4%.
{"title":"Prediction of Diabetes Mellitus using Machine Learning Algorithms: Comparative Analysis of K-Nearest Neighbor, Random Forest and Logistic Regression","authors":"A. Adeshina","doi":"10.56471/slujst.v6i.319","DOIUrl":"https://doi.org/10.56471/slujst.v6i.319","url":null,"abstract":"Diabetes Mellitus is a chronic and one of the deadliest diseases. Diabetes disease increases the risk of long-term complications, including heart diseases and kidney failures, among others. Undoubtedly, Diabetes Mellitus patients may live longer and lead healthier lives if the disease is detected early. Over the years, several efforts have been on more accurate and early detection procedures to safe patients of Diabetes Mellitus. Interestingly, with the applications of Information Technology to the disease diagnoses and therapy managements, more attention has been on using machine learning in the predictions and early detection of Diabetes Mellitus. Unfortunately, determining the most appropriate machine learning algorithm with the best performance in terms of optimum accuracy still remains a challenge. The study proposes a framework for Diabetes Mellitus detection using Machine Learning Algorithms. The proposed framework was evaluated using K-nearest neighbor (KNN), Random Forest (RF), and Logistic Regression (LR). Extensive experiments were conducted to analyze the performance of the framework focusing on four distinct different clinical datasets. To ensure robust, web compatible framework, Python and its popular data science related packages, Pandas, Numpy, Seaborn, Matplotlib and Pickle were used for the implementation. Significantly, using the standard datasets obtained from the National Institute of Diabetes and Kidney Disease, Random Forest was able to predict Diabetes Mellitus in the datasets with the best accuracy of 93.4 %.","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124812228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial Recognition Technology (FRT) has become the object of much research in recent times, given the challenges posed by the enormous increase in the human population. The face, as the main axis of social relations, plays an important role in representing human identity, which calls for an increased level of security and for reliable mechanisms of personal authentication. Facial recognition is widely adopted for security reasons, and the uniqueness of human facial characteristics has increased the popularity of facial recognition systems worldwide. Law enforcement agencies face difficulty conducting proper investigations of criminal cases: suspects are often hard to catch, and the wrong persons may be arrested because of the basis and methods adopted for the investigation. This study proposes a facial recognition system for the identification of criminals using the Locality Preserving Projection (LPP) algorithm. The development of the system involved specifying the functions derived from a performance analysis of related systems and translating the developed model into the design of the proposed system. The framework was evaluated by matching faces captured through special cameras against images of people on a watch list, including people not suspected of wrongdoing. The implementation was achieved with the LPP algorithm to improve the feature extraction and dimensionality reduction techniques. Cascading Style Sheets (CSS) was used to describe the presentation of documents written in HTML, with JavaScript on the front end for optimum web compatibility. A Python binding of the cross-platform Qt GUI toolkit and a Python plugin were used to implement the graphical user interface for detecting and recognizing images and registering the presence of an individual entering the database library. Interestingly, the developed framework recorded accuracy and recognition above 95% under normal conditions of lighting and distance from the camera, at a reasonably lower cost in comparison with previously proposed facial recognition technologies.
{"title":"Facial Recognition using Locality Preserving Projection Algorithm","authors":"A. Adeshina","doi":"10.56471/slujst.v6i.315","DOIUrl":"https://doi.org/10.56471/slujst.v6i.315","url":null,"abstract":"Facial Recognition Technology (FRT) has become the research object of many in the recent times considering the challenges with enormous increase in human population. The face as the main axis in social relations plays an important role in the representation of human identity, which requires an increased level of security and the creation of exchange tricks for safe and recognizable evidence and innovations of individual authentication. Apparently, facial recognition is widely adopted for security reasons, and the uniqueness of human characteristics has increased the popularity of facial recognition technology systems worldwide. Law enforcement agencies faced the problem of the impossibility of proper investigation of criminal cases. Suspects are often difficult to catch and the wrong persons may be arrested due to the basis and methods adopted for the investigation. This study proposes a facial recognition system for the identification of criminals using Locality Preserving Projection (LPP) Algorithm. The development of the system involves specification of the functions resulting from the performance analysis obtained from other related systems, and the translation of the developed model into the design of the proposed system. The framework was evaluated by matching the faces of people extracted through special cameras with the images of people on a watch list. Watch lists that contain images of people, including people not suspected of wrong doings. Strategically, the implementation of the facial recognition system was achieved with Locality Preserving Projection (LPP) Algorithm to improve the feature extraction methods and dimensionality reduction techniques. Cascading Style Sheets (CSS) was used for describing the presentation of the document written in HTML language and JavaScript for the front-end for optimum web compatibility. A Python binding of the cross-platform Qt GUI toolkit and Python plugin were used to implement the graphical user interface for detecting and recognizing images and the presence of an individual upon entering the database library. Interestingly, the developed framework recorded accuracy and recognition >95% capacity under normal conditions, including lighting and distance from camera and at a reasonably cheaper cost in comparison with previously proposed Facial Recognition Technology","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126194441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network security is one of the greatest challenges of our generation. Network Intrusion Detection (NID) is one of the fundamental techniques used to protect computer networks from threats. However, there are doubts about the viability and sustainability of the traditional approaches, especially on modern networks. Recently, the majority of researchers have applied machine learning models such as Logistic Regression (LR), Random Forest (RF), K-Nearest Neighbor (KNN), and Support Vector Machines (SVM), among others, to the NID problem. In this research, a network intrusion classification system was analyzed on the KDD Cup '99 dataset. In the classifier implementation, the three most widely used models were employed as the classification algorithms: K-Nearest Neighbor, Random Forest, and Logistic Regression. Attack detection accuracy and training time were used to evaluate the models. Random Forest provided the best detection accuracy of 99.5%, and Logistic Regression a training time of 74 seconds.
{"title":"Models Comparison Based On Intrusion Detection Using Machine Learning","authors":"Maimuna Yusuf Ma’aji","doi":"10.56471/slujst.v6i.358","DOIUrl":"https://doi.org/10.56471/slujst.v6i.358","url":null,"abstract":"Network security is the greatest challenges in our current generation. Network intrusion detection (NID) is one of the fundamental techniques used to protect computer networks from threads. However, there is contemplation on the possibility and sustainability of the traditional approaches employed especially with the current modern networks. Recently, majority of the researchers employ the application of machine learning models such as logistic regression (LR), random forest (RF), K-nearest neighbor (KNN) and Support Vector Machines (SVM) among others to address the NID problem. In this research, the Network intrusion classification system has been analyzed based on KDD Cup ’99 data set. In the classifier implementation section, the three most widely used models were employed; K-nearest neighbor, random forest and logistic regression as our classification algorithms. The attack detection accuracy and training time was used to evaluate the models. The random forest provides the best detection accuracy of 99.5% and 74 second training time provide by logistic regression","PeriodicalId":299818,"journal":{"name":"SLU Journal of Science and Technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124035123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}