In the evaluation of a Photovoltaic (PV) system's performance, precise calculation of the system's parameters is essential, as these parameters significantly influence its efficiency across various sunlight intensities, temperature ranges, and load conditions. To address the intricate non-linear optimization problem of identifying these PV system parameters, the current research adopts a novel metaheuristic optimization approach called Transit Search (TS). The proposed technique was rigorously tested on a monocrystalline solar panel using both single- and double-diode model structures. The objective function is designed to minimize the root-mean-square error between theoretical and measured current outputs while remaining within the established parameter bounds. The proficiency of the TS algorithm was assessed using a variety of statistical error indicators. In a comparative analysis against other established optimization algorithms, TS demonstrated outstanding capabilities, clearly outperforming its contemporaries in the accurate determination of PV system parameters.
{"title":"Parameter Estimation of Photovoltaic Cell using Transit Search Optimizer","authors":"Hady El Said Abdel Maksoud, Shaaban M. Shaaban","doi":"10.48084/etasr.6956","DOIUrl":"https://doi.org/10.48084/etasr.6956","url":null,"abstract":"In the evaluation of a Photovoltaic (PV) system's performance, precise calculation of the system's parameters is essential, as these parameters significantly influence its efficiency across various sunlight intensities, temperature ranges, and distinct load conditions. Addressing the intricate non-linear optimization problem of pinpointing these PV system parameters, the current research adopts a novel metaheuristic optimization approach, called Transit Search (TS). The proposed technique was rigorously tested on a monocrystalline solar panel, which included both single and double-diode model structures. The design of the objective function within this framework aims to diminish the square root of the average squared discrepancies between theoretical and measured current outputs, while remaining within the established parameter bounds. The proficiency of the TS algorithm was highlighted by employing a variety of statistical error indicators, underlining the latter’s effectiveness. When pitted against other established optimization algorithms through comparative analysis, TS demonstrated outstanding capabilities, evidently outperforming its contemporaries in the accurate determination of PV system parameters.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"68 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141280679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study carries out a comprehensive comparison of fine-tuned GPT models (GPT-2, GPT-3, GPT-3.5) and LLaMA-2 models (LLaMA-2 7B, LLaMA-2 13B, LLaMA-2 70B) in text classification, addressing dataset sizes, model scales, and task diversity. Since its inception in 2018, the GPT series has been pivotal in advancing NLP, with each iteration introducing substantial enhancements. Despite this progress, detailed analyses, especially against competitive open-source models like the LLaMA-2 series in text classification, remain scarce. The current study fills this gap by fine-tuning these models across varied datasets, focusing on enhancing task-specific performance in hate speech and offensive language detection, fake news classification, and sentiment analysis. The learning efficacy and efficiency of the GPT and LLaMA-2 models were evaluated, providing a nuanced guide to choosing optimal models for NLP tasks based on architectural benefits and adaptation efficiency with limited data and resources. In particular, even with datasets as small as 1,000 rows per class, the F1 scores for the GPT-3.5 and LLaMA-2 models exceeded 0.9, reaching 0.99 with complete datasets. Additionally, the LLaMA-2 13B and 70B models outperformed GPT-3, demonstrating their superior efficiency and effectiveness in text classification. Both the GPT and LLaMA-2 series showed commendable performance on all three tasks, underscoring their ability to handle a diversity of tasks. Based on the size, performance, and resources required for fine-tuning, this study identifies LLaMA-2 13B as the optimal model for NLP tasks.
{"title":"Towards Optimal NLP Solutions: Analyzing GPT and LLaMA-2 Models Across Model Scale, Dataset Size, and Task Diversity","authors":"Ankit Kumar, Richa Sharma, Punam Bedi","doi":"10.48084/etasr.7200","DOIUrl":"https://doi.org/10.48084/etasr.7200","url":null,"abstract":"This study carries out a comprehensive comparison of fine-tuned GPT models (GPT-2, GPT-3, GPT-3.5) and LLaMA-2 models (LLaMA-2 7B, LLaMA-2 13B, LLaMA-2 70B) in text classification, addressing dataset sizes, model scales, and task diversity. Since its inception in 2018, the GPT series has been pivotal in advancing NLP, with each iteration introducing substantial enhancements. Despite its progress, detailed analyses, especially against competitive open-source models like the LLaMA-2 series in text classification, remain scarce. The current study fills this gap by fine-tuning these models across varied datasets, focusing on enhancing task-specific performance in hate speech and offensive language detection, fake news classification, and sentiment analysis. The learning efficacy and efficiency of the GPT and LLaMA-2 models were evaluated, providing a nuanced guide to choosing optimal models for NLP tasks based on architectural benefits and adaptation efficiency with limited data and resources. In particular, even with datasets as small as 1,000 rows per class, the F1 scores for the GPT-3.5 and LLaMA-2 models exceeded 0.9, reaching 0.99 with complete datasets. Additionally, the LLaMA-2 13B and 70B models outperformed GPT-3, demonstrating their superior efficiency and effectiveness in text classification. Both the GPT and LLaMA-2 series showed commendable performance on all three tasks, underscoring their ability to handle a diversity of tasks. Based on the size, performance, and resources required for fine-tuning the model, this study identifies LLaMA-2 13B as the most optimal model for NLP tasks.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"131 30","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141281597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Major global challenges in eye care include the quality of treatment, prevention, and rehabilitation services for eye patients, as well as the shortage of qualified eye care professionals. Early detection and diagnosis of eye diseases could allow vision impairment to be avoided. One barrier to ophthalmologists adopting computer-aided diagnosis tools is the prevalence of sight-threatening but uncommon diseases that are often overlooked. Earlier studies have classified eye diseases into only two or a small number of classes, focusing on glaucoma and on diabetes-related and age-related vision issues. This study employed three well-established and publicly available datasets to address these limitations and enable automatic classification of a wide range of eye disorders. A Deep Neural Network for Retinal Fundus Disease Classification (DNNRFDC) model was developed, evaluated with various performance metrics, and compared with four established pre-trained models (EfficientNetB7, EfficientNetB0, UNet, and ResNet152) utilizing transfer learning techniques. The results showed that the proposed DNNRFDC model outperformed these pre-trained models in terms of overall accuracy across all three datasets, achieving an accuracy of 94.10%. Furthermore, the DNNRFDC model has fewer parameters and lower computational requirements, making it more efficient for real-time applications. This model represents a promising avenue for further advancements in ophthalmological diagnosis and care. Despite these promising results, it is essential to acknowledge the limitations of this study, namely that the evaluation was conducted on publicly available datasets that may not fully represent the diversity and complexity of real-world clinical scenarios. Future research could incorporate more diverse datasets and explore the integration of additional diagnostic modalities to further enhance the model's robustness and clinical applicability.
{"title":"Advancing Eye Disease Assessment through Deep Learning: A Comparative Study with Pre-Trained Models","authors":"Zamil S. Alzamil","doi":"10.48084/etasr.7294","DOIUrl":"https://doi.org/10.48084/etasr.7294","url":null,"abstract":"The significant global challenges in eye care are treatment, preventive quality, rehabilitation services for eye patients, and the shortage of qualified eye care professionals. Early detection and diagnosis of eye diseases could allow vision impairment to be avoided. One barrier to ophthalmologists when adopting computer-aided diagnosis tools is the prevalence of sight-threatening uncommon diseases that are often overlooked. Earlier studies have classified eye diseases into two or a small number of classes, focusing on glaucoma, and diabetes-related and age-related vision issues. This study employed three well-established and publicly available datasets to address these limitations and enable automatic classification of a wide range of eye disorders. A Deep Neural Network for Retinal Fundus Disease Classification (DNNRFDC) model was developed, evaluated based on various performance metrics, and compared with four established pre-trained models (EfficientNetB7, EfficientNetB0, UNet, and ResNet152) utilizing transfer learning techniques. The results showed that the proposed DNNRFDC model outperformed these pre-trained models in terms of overall accuracy across all three datasets, achieving an impressive accuracy of 94.10%. Furthermore, the DNNRFDC model has fewer parameters and lower computational requirements, making it more efficient for real-time applications. This innovative model represents a promising avenue for further advancements in the field of ophthalmological diagnosis and care. Despite these promising results, it is essential to acknowledge the limitations of this study, namely the evaluation conducted by using publicly available datasets that may not fully represent the diversity and complexity of real-world clinical scenarios. Future research could incorporate more diverse datasets and explore the integration of additional diagnostic modalities to further enhance the model's robustness and clinical applicability.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141274321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a novel approach to improve security in dynamic network slices for 5G networks using Graph-based Generative Adversarial Networks (G-GAN). Given the rapidly evolving and adaptable nature of 5G network slices, traditional security mechanisms often fall short in providing real-time, efficient, and scalable defense mechanisms. To address this gap, this study proposes the use of G-GAN, which combines the strengths of Generative Adversarial Networks (GANs) and Graph Neural Networks (GNNs) for adaptive learning and anomaly detection in dynamic network environments. The proposed approach utilizes GAN to generate realistic network traffic patterns, both normal and adversarial, whereas GNNs analyze these patterns within the context of the network's graph-based topology. This combination facilitates the early detection of anomalies and potential security threats, adapting to the ever-changing configurations of network slices. The current study presents a comprehensive methodology for implementing G-GAN, including system architecture, data processing, and model training. The experimental analysis demonstrates the efficacy of G-GAN in accurately identifying security threats and adapting to new scenarios, revealing that G-GAN outperformed established models with an accuracy of 97.12%, precision of 96.20%, recall of 97.24%, and F1-Score of 96.72%. This study not only contributes to the field of network security in the context of 5G, but also opens avenues for future exploration in the application of hybrid AI models for real-time security across various domains.
{"title":"G-GANS for Adaptive Learning in Dynamic Network Slices","authors":"M. Alanazi","doi":"10.48084/etasr.7046","DOIUrl":"https://doi.org/10.48084/etasr.7046","url":null,"abstract":"This paper introduces a novel approach to improve security in dynamic network slices for 5G networks using Graph-based Generative Adversarial Networks (G-GAN). Given the rapidly evolving and adaptable nature of 5G network slices, traditional security mechanisms often fall short in providing real-time, efficient, and scalable defense mechanisms. To address this gap, this study proposes the use of G-GAN, which combines the strengths of Generative Adversarial Networks (GANs) and Graph Neural Networks (GNNs) for adaptive learning and anomaly detection in dynamic network environments. The proposed approach utilizes GAN to generate realistic network traffic patterns, both normal and adversarial, whereas GNNs analyze these patterns within the context of the network's graph-based topology. This combination facilitates the early detection of anomalies and potential security threats, adapting to the ever-changing configurations of network slices. The current study presents a comprehensive methodology for implementing G-GAN, including system architecture, data processing, and model training. The experimental analysis demonstrates the efficacy of G-GAN in accurately identifying security threats and adapting to new scenarios, revealing that G-GAN outperformed established models with an accuracy of 97.12%, precision of 96.20%, recall of 97.24%, and F1-Score of 96.72%. This study not only contributes to the field of network security in the context of 5G, but also opens avenues for future exploration in the application of hybrid AI models for real-time security across various domains.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"66 43","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141276537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Powder-Mixed Electrical Discharge Machining (PM-EDM) is one of the latest advancements in augmenting EDM process capability. The procedure involves mixing a suitable material, in fine powder form, into the dielectric fluid, whose breakdown properties are enhanced by the added powder. The objective of the present research is to machine the Ti-35Nb-7Zr-5Ta alloy prepared by powder metallurgy and to study the influence of process parameters such as peak current, pulse-on time, pulse-off time, powder type (Ag, Si, Ag+Si), and powder concentration. The Material Removal Rate (MRR) and Surface Roughness (SR) were the response parameters. The Taguchi approach was followed to design the experiments, using a five-factor, three-level design based on the Taguchi L27 orthogonal array. It was found that the addition of Ag, Si, or Ag+Si powders to the dielectric fluid enhanced the MRR and the surface finish for this alloy. The addition of Ag powder to the dielectric fluid gave a higher MRR and a lower SR compared to Si or Ag+Si powders. Powder concentration and pulse current are the most influential parameters for MRR and SR, followed by powder type, pulse-on time, and pulse-off time. The maximum Grey Relational Grade (GRG) occurs at I=5 A, Ton=9 µs, Toff=37 µs, PT=Ag, PC=20 g/L. These are the optimal PM-EDM conditions for the Ti-35Nb-7Zr-5Ta alloy, giving maximum MRR with minimum SR.
{"title":"Optimization of the PM-EDM Process Parameters for Ti-35Nb-7Zr-5Ta Bio Alloy","authors":"A. R. Hayyawi, H. Al-Ethari, A. H. Haleem","doi":"10.48084/etasr.6845","DOIUrl":"https://doi.org/10.48084/etasr.6845","url":null,"abstract":"Powder-Mixed Electrical Discharge Machining (PM-EDM) is one of the latest advancements in EDM process capability augmentation. This procedure involves effectively mixing a suitable material in fine powder form with the dielectric fluid. The dielectric fluid's breakdown properties are enhanced by the additional powder. The objective of the present research is to machine the Ti-35Nb-7Zr-5Ta alloy prepared by powder metallurgy and study the influence of process parameters, such as peak current, pulse-on time, pulse-off time, powder type (Ag, Si, Ag+Si), and powder concentration. The metal removal rate and SR represent the response parameters. The Taguchi approach was followed to design the experiments. The five-factor three-level design was chosen to use the Taguchi L27 orthogonal array. It was found that the addition of Ag, Si, or Ag+Si powders to the dielectric fluid enhanced the metal removal rate and the surface finish for this alloy. The addition of Ag powder to the dielectric fluid gave a higher Material Removal Rate (MRR) and a lower SR compared to Si or Ag+Si powders. Powder concentration and pulse current are the most effective parameters on MRR and SR followed by powder type, pulse-on, and pulse-off. The maximum Grey Relational Grade (GRG) exists at (I=5 A, Ton=9 µs, Toff=37 µs, PT=Ag, PC=20 g/L). These are the optimal conditions for PM-EDM of the Ti-35Nb-7Zr-5Ta alloy that give maximum MRR with minimum SR.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141279007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Topology optimization is an advanced structural optimization technique that aims to achieve an optimally efficient structure by redistributing material while satisfying load-carrying, performance, and boundary requirements. One obstacle in optimizing structures for mechanical parts is that the optimized structures are sometimes difficult to manufacture. Additive Manufacturing (AM), also known as 3D printing, is a method of producing machine parts by joining layers of material, and it opens up the possibility of fabricating complex structures, especially those resulting from topology optimization. This project compares the initial shape of a box under static load with its shape after optimization. The optimized models achieve weight reductions of 43%, 59%, 70%, 73%, and 77%, weighing 491.45 g, 357.42 g, 261.31 g, 235.56 g, and 203.87 g, respectively. All models are capable of supporting a 10 kg load, demonstrating that the structures meet the technical specifications. The results show that combining structural optimization and additive manufacturing takes advantage of both approaches and shows significant potential for modern manufacturing.
{"title":"Study on Topology Optimization Design for Additive Manufacturing","authors":"Nguyen Thi Anh, Nguyen Xuan Quynh, Trần Thanh Tùng","doi":"10.48084/etasr.7220","DOIUrl":"https://doi.org/10.48084/etasr.7220","url":null,"abstract":"Topology optimization is an advanced technique for structural optimization that aims to achieve an optimally efficient structure by redistribution materials while ensuring fulfillment of load-carrying, performance, and initial boundary. One of the obstacles in the process of optimizing structures for mechanical parts is that these optimized structures sometimes encounter difficulties during the manufacturing process. Additive Manufacturing (AM), also known as 3D printing technology, is a method of manufacturing machine parts through joining layers of material. AM opens up the possibility of fabricating complex structures, especially for structures that have been subjected to topology optimization techniques. This project aims to compare the initial shape of a box under static load and its shape after optimization. The subsequent produced models have reduced weights of 43%, 59%, 70%, 73%, and 77%, respectively, weighing 491.45 g, 357.42 g, 261.31 g, 235.56 g, and 203.87 g. All models are capable of supporting a 10 kg load, demonstrating the ability of the structure to meet technical specifications. The results show that combining structural optimization and additive manufacturing can take advantage of both approaches and show significant potential for modern manufacturing.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"25 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141279081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Things (IoT) has significantly altered our way of life, being integrated into many types of applications. These applications require a certain level of security, which is always a top priority when offering various services. It is particularly difficult to protect the information produced by IoT devices from security threats and to protect the exchanged data as they pass through various nodes and gateways. Group Key Management (GKM) is an essential method for controlling the deployment of keys for network access and safe data delivery in such dynamic situations. However, the huge volume of IoT devices and the growing subscriber base present a scalability difficulty that is not addressed by current GKM-based IoT authentication techniques. Moreover, all GKM models currently in use enable the independence of participants and concentrate only on dependent symmetric group keys for each subgroup, which is ineffective for subscriptions with very dynamic behavior. To address these issues, this study proposes a unique Decentralized Lightweight Group Key Management (DLGKM) framework integrated with a Reliable and Secure Multicast Routing Protocol (REMI-DLGKM), a reliable and efficient multicast routing system for IoT networks. REMI-DLGKM is a cluster-based routing protocol that enables faster multicast message distribution within the system. According to simulation results, this protocol is more effective than cutting-edge protocols in terms of end-to-end delay, energy consumption, and packet delivery ratio. The packet delivery ratio of REMI-DLGKM reached 99.21%, which is 4.395% higher than that of other methods, such as SRPL, QMR, and MAODV. The proposed routing protocol can reduce energy consumption in IoT devices by employing effective key management strategies.
{"title":"Robust and Secure Routing Protocol Based on Group Key Management for Internet of Things Systems","authors":"Salwa Othmen, Wahida Mansouri, S. Asklany","doi":"10.48084/etasr.7115","DOIUrl":"https://doi.org/10.48084/etasr.7115","url":null,"abstract":"The Internet of Things (IoT) has significantly altered our way of life, being integrated into many application types. These applications require a certain level of security, which is always a top priority when offering various services. It is particularly difficult to protect the information produced by IoT devices from security threats and protect the exchanged data as they pass through various nodes and gateways. Group Key Management (GKM) is an essential method for controlling the deployment of keys for network access and safe data delivery in such dynamic situations. However, the huge volume of IoT devices and the growing subscriber base present a scalability difficulty that is not addressed by the current IoT authentication techniques based on GKM. Moreover, all GKM models currently in use enable the independence of participants. They only concentrate on dependent symmetrical group keys for each subgroup, which is ineffective for subscriptions with very dynamic behavior. To address these issues, this study proposes a unique Decentralized Lightweight Group Key Management (DLGKM) framework integrated with a Reliable and Secure Multicast Routing Protocol (REMI-DLGKM), which is a reliable and efficient multicast routing system for IoT networks. REMI-DLGKM is a cluster-based routing protocol that qualifies for faster multiplex message distribution within the system. According to simulation results, this protocol is more effective than cutting-edge protocols in terms of end-to-end delay, energy consumption, and packet delivery ratio. The packet delivery ratio of REMI-DLGKM was 99.21%, which is 4.395 higher than other methods, such as SRPL, QMR, and MAODV. The proposed routing protocol can reduce energy consumption in IoT devices by employing effective key management strategies.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"1 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141279427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless Sensor Networks (WSNs) are used in a tremendous number of applications. Many WSN applications require real-time communication, where the sensed data have to be delivered to the sink node within a predetermined deadline decided by the application. In WSNs, the sensor nodes' constrained resources (e.g., memory and power) and the lossy wireless links give rise to significant difficulties in supporting real-time applications. In addition, many WSN routing algorithms strongly emphasize energy efficiency, while delay is not their primary concern. Thus, WSNs need new routing protocols that are reliable, energy-efficient, and appropriate for real-time applications. The proposed algorithm is a real-time routing algorithm suited to delay-sensitive applications in WSNs. It can deliver data on time while also enabling reliable and energy-efficient communication. It achieves this by deciding which candidate neighbors are eligible to participate in the routing process and can deliver the packet before its deadline. To lessen the delay of the chosen paths, it also computes the relaying speed of each eligible candidate. Moreover, it takes into account the link quality, hop count, and available buffer size of the selected relays, which reduces end-to-end delay while also minimizing energy consumption. Finally, it considers each node's energy consumption rate when selecting the next forwarder, to extend the network lifetime. Through simulation experiments, the proposed algorithm has shown improved performance in terms of packet delivery ratio, network lifetime, packet miss ratio, average end-to-end delay, and energy imbalance factor.
{"title":"Energy-Efficient and Reliable Routing for Real-time Communication in Wireless Sensor Networks","authors":"Fatma H. El-Fouly, M. Kachout, R. Ramadan, Abdullah J. Alzahrani, J. Alshudukhi, Ibrahim Mohammed Alseadoon","doi":"10.48084/etasr.7057","DOIUrl":"https://doi.org/10.48084/etasr.7057","url":null,"abstract":"Wireless Sensor Networks (WSN) can be part of a tremendous number of applications. Many WSN applications require real-time communication where the sensed data have to be delivered to the sink node within a predetermined deadline decided by the application. In WSNs, the sensor nodes' constrained resources (e.g. memory and power) and the lossy wireless links, give rise to significant difficulties in supporting real-time applications. In addition, many WSN routing algorithms strongly emphasize energy efficiency, while delay is not the primary concern. Thus, WSNs desperately need new routing protocols that are reliable, energy-efficient, and appropriate for real-time applications. The proposed algorithm is a real-time routing algorithm appropriate for delay-sensitive applications in WSNs. It has the ability to deliver data on time while also enabling communications that are reliable and energy-efficient. It achieves this by deciding which candidate neighbors are eligible to participate in the routing process and can deliver the packet before its deadline. In order to lessen the delay of the chosen paths, it also computes the relaying speed for each eligible candidate. Moreover, it takes into account link quality, hop count, and available buffer size of the selected relays, which leads to end-to-end delay reduction while also minimizing energy consumption. Finally, it considers the node's energy consumption rate when selecting the next forwarder to extend the network lifetime. Through simulation experiments, the proposed algorithm has shown improved performance in terms of packet delivery ratio, network lifetime packets miss ratio, average end-to-end delay, and energy imbalance factor.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"35 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141280189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal energy harvesting depends on the efficient extraction of energy from Photovoltaic (PV) arrays. Maximum Power Point Tracking (MPPT) algorithms are crucial in achieving the maximum power harvest from PV systems. Therefore, in response to a fluctuating power generation rate due to shading of the PV array, MPPT algorithms must dynamically adapt to the array's Maximum Power Point (MPP). In this article, three metaheuristic MPPT optimization techniques, applied to DC converters connected to an array of 4 PV panels, are compared: Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and Ant Colony Optimization (ACO). This research evaluates the speed and accuracy with which each optimization method converges to the MPP within 2 s after partial shading of the PV array. All algorithms exhibit fast MPPT optimization; however, among the evaluated algorithms, PSO was distinguished by its higher stability and efficiency.
{"title":"Metaheuristic Optimization of Maximum Power Point Tracking in PV Array under Partial Shading","authors":"Mohammed Qasim Taha, Mohammed Kareem Mohammed, Bamba El Haiba","doi":"10.48084/etasr.7385","DOIUrl":"https://doi.org/10.48084/etasr.7385","url":null,"abstract":"Optimal energy harvesting is dependent on the efficient extraction of energy from photovoltaic (PV) arrays. Maximum Power Point Tracking (MPPT) algorithms are crucial in achieving the maximum power harvest from the PV systems. Therefore, in response to a fluctuating power generation rate due to shading of the PV, the MPPT algorithms must dynamically adapt to the PV array's Maximum Power Point (MPP). In this article, three metaheuristic optimization MPPT techniques, utilized in DC converters connected to the array of 4 PV panels, are compared. The Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO), which are used to optimize MPPT in the converter, are compared. This research evaluates the efficiency of each optimization method in converging to MPP under 2 s after partial shading of the PV with respect to velocity and accuracy. All algorithms exhibit fast MPPT optimization. However, among the evaluated algorithms, the PSO was distinguished for its higher stability and efficiency.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"140 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141281504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In an era marked by growing concerns about data security and privacy, the need for robust encryption techniques has become a matter of paramount importance. The primary goal of this study is to protect sensitive information during transmission while ensuring efficient and reliable decryption at the receiver's side. To generate robust and unique cryptographic keys, the proposed approach trains an autoencoder neural network on the MNIST dataset, based on hashing and optionally generated prime numbers. The resulting key serves as the foundation for secure communication. An additional security layer is applied to the first ciphertext using XOR operations and a Blum-Blum-Shub (BBS) generator, making the system resistant to various types of attacks. This approach offers a robust and innovative solution for secure data transmission, combining the strengths of autoencoder-based key generation and cryptographic encryption, and its effectiveness is demonstrated through testing and simulations.
{"title":"Enhancing Data Security through Machine Learning-based Key Generation and Encryption","authors":"Abhishek Saini, Ruchi Sehrawat","doi":"10.48084/etasr.7181","DOIUrl":"https://doi.org/10.48084/etasr.7181","url":null,"abstract":"In an era marked by growing concerns about data security and privacy, the need for robust encryption techniques has become a matter of paramount importance. The primary goal of this study is to protect sensitive information during transmission while ensuring efficient and reliable decryption at the receiver's side. To generate robust and unique cryptographic keys, the proposed approach trains an autoencoder neural network based on hashing and optionally generated prime numbers in the MNIST dataset. The key serves as the foundation for secure communication. An additional security layer to the cryptographic algorithm passing through the first ciphertext, was employed utilizing the XORed and Blum-Blum-Shub (BBS) generators to make the system resistant to various types of attacks. This approach offers a robust and innovative solution for secure data transmission, combining the strengths of autoencoder-based key generation and cryptographic encryption. Its effectiveness is demonstrated through testing and simulations.","PeriodicalId":364936,"journal":{"name":"Engineering, Technology & Applied Science Research","volume":"44 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141279530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}