Ahmed M. Salman, Jamal A.-K. Mohammed, Farag M. Mohammed
Traditional friction brakes can suffer from problems such as high braking temperature and pressure, cracking, and wear, leading to braking failure and harm to the user. Eddy current brake systems (contactless magnetic brakes) are one alternative used in motion applications. They are wear-free, less temperature-sensitive, fast-acting, simple, and less susceptible to wheel lock, resulting in fewer brake failures because there is no physical contact between the magnet and the disc. An important factor affecting the performance of such a braking system is the material from which the permanent magnets are manufactured. This paper examines the performance of a permanent magnet eddy current braking (PMECB) system. Different kinds of permanent magnets are proposed for this system to create the eddy currents that provide the braking force. The braking system is simulated using FEA software to evaluate braking efficiency in terms of force production, energy dissipation, and overall performance. The findings demonstrate that permanent magnets consisting of neodymium, iron, and boron (NdFeB) consistently provided the maximum braking effectiveness. The lowest efficiency was found in ferrite, with samarium cobalt second lowest, because ferrite has the weakest magnetic field. Consequently, the PMECB system based on NdFeB magnets has the highest power dissipation values, particularly at higher speeds.
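A first-order sense of why the magnet material matters so much can be taken from the classical thin-disc eddy-current brake approximation below. This is a textbook low-speed model, not the FEA formulation used in the paper, and the remanence figures quoted in the comments are typical catalogue values rather than measurements from this work.

```latex
% Low-speed thin-disc approximation of an eddy-current brake:
%   F_b : braking force            P_d : dissipated power
%   \sigma : disc conductivity     \delta : disc thickness
%   A : pole-face area             B : air-gap flux density
%   v : disc surface speed under the pole
\begin{align}
  F_b &\approx \sigma\,\delta\,A\,B^{2}\,v, &
  P_d &= F_b\,v \approx \sigma\,\delta\,A\,B^{2}\,v^{2}.
\end{align}
% For a fixed geometry, B scales roughly with the magnet remanence B_r, so the
% braking force scales with B_r^2. Typical remanence values (NdFeB ~1.2-1.4 T,
% SmCo ~1.0-1.1 T, hard ferrite ~0.35-0.4 T) reproduce the ranking reported
% above, and P_d growing with v^2 matches the higher dissipation at high speed.
```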
{"title":"Analysis of Permanent Magnet Material Influence on Eddy Current Braking Efficiency","authors":"Ahmed M. Salman, Jamal A.-K. Mohammed, Farag M. Mohammed","doi":"10.37917/ijeee.20.2.18","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.18","url":null,"abstract":"Traditional friction brakes can generate problems such as high braking temperature and pressure, cracking, and wear, leading to braking failure and user damage. Eddy current brake systems (contactless magnetic brakes) are one method used in motion applications. They are wear-free, less temperature-sensitive, quick, easy, and less susceptible to wheel lock, resulting in less brake failure due to the absence of physical contact between the magnet and disc. Important factors that can affect the performance of the braking system are the type of materials manufactured for the permanent magnets. This paper examines the performance of the permanent magnetic eddy current braking (PMECB) system. Different kinds of permanent magnets are proposed in this system to create eddy currents, which provide braking for the braking system is simulated using FEA software to demonstrate the efficiency of braking in terms of force production, energy dissipation, and overall performance findings demonstrated that permanent magnets consisting of neodymium, iron, and boron consistently provided the maximum braking effectiveness. The lowest efficiency is found in ferrite, which has the second-lowest efficiency behind samarium cobalt. This is because ferrite has a weaker magnetic field. Because of this, the PMECBS based on NdFeB magnets has higher power dissipation values, particularly at higher speeds.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"2 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141797313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate long-term load forecasting (LTLF) is crucial for smart grid operations, but existing CNN-based methods face challenges in extracting essential features from electricity load data, resulting in diminished forecasting performance. To overcome this limitation, we propose a novel ensemble model that integrates a feature extraction module, a densely connected residual block (DCRB), a long short-term memory (LSTM) layer, and ensemble thinking. The feature extraction module captures the randomness and trends in climate data, enhancing the accuracy of load data analysis. Leveraging the DCRB, our model demonstrates superior performance by extracting features from multi-scale input data, surpassing conventional CNN-based models. We evaluate our model using hourly load data from Odisha and day-wise data from Delhi, and the experimental results exhibit low root mean square error (RMSE) values of 0.952 and 0.864 for Odisha and Delhi, respectively. This research contributes a comparative long-term electricity forecasting analysis, showcasing the efficiency of our proposed model in power system management. Moreover, the model holds the potential to support decision-making processes, making it a valuable tool for stakeholders in the electricity sector.
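For orientation, a minimal PyTorch-style sketch of the kind of pipeline described above (a densely connected residual convolutional block feeding an LSTM and a linear forecast head) is given below. The layer widths, window length, and forecast horizon are illustrative assumptions; the authors' exact DCRB wiring and ensemble stage are not reproduced here.

```python
import torch
import torch.nn as nn

class DenselyConnectedResidualBlock(nn.Module):
    """Illustrative DCRB: each conv sees the concatenation of all earlier
    feature maps, and a 1x1 projection feeds a residual shortcut."""
    def __init__(self, channels: int = 32, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(channels * (i + 1), channels, kernel_size=3, padding=1)
            for i in range(layers)
        )
        self.project = nn.Conv1d(channels * (layers + 1), channels, kernel_size=1)

    def forward(self, x):                      # x: (batch, channels, time)
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.project(torch.cat(feats, dim=1))   # residual add

class LoadForecaster(nn.Module):
    """DCRB feature extractor followed by an LSTM and a linear forecast head."""
    def __init__(self, n_features: int, horizon: int = 24):
        super().__init__()
        self.stem = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.dcrb = DenselyConnectedResidualBlock(32)
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, horizon)

    def forward(self, x):                      # x: (batch, time, n_features)
        h = self.dcrb(self.stem(x.transpose(1, 2)))        # convolve over time
        out, _ = self.lstm(h.transpose(1, 2))
        return self.head(out[:, -1])           # forecast the next `horizon` steps

# Example: 10 climate/load features over a 168-hour window -> 24-hour forecast.
model = LoadForecaster(n_features=10)
y_hat = model(torch.randn(8, 168, 10))         # shape (8, 24)
```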
{"title":"Comparative Long-Term Electricity Forecasting Analysis: A Case Study of Load Dispatch Centres in India","authors":"Saikat Gochhait, Deepak K. Sharma, Mrinal Bachute","doi":"10.37917/ijeee.20.2.17","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.17","url":null,"abstract":"Accurate long-term load forecasting (LTLF) is crucial for smart grid operations, but existing CNN-based methods face challenges in extracting essential features from electricity load data, resulting in diminished forecasting performance. To overcome this limitation, we propose a novel ensemble model that integrates a feature extraction module, densely connected residual block (DCRB), long short-term memory layer (LSTM), and ensemble thinking. The feature extraction module captures the randomness and trends in climate data, enhancing the accuracy of load data analysis. Leveraging the DCRB, our model demonstrates superior performance by extracting features from multi-scale input data, surpassing conventional CNN-based models. We evaluate our model using hourly load data from Odisha and day-wise data from Delhi, and the experimental results exhibit low root mean square error (RMSE) values of 0.952 and 0.864 for Odisha and Delhi, respectively. This research contributes to a comparative long-term electricity forecasting analysis, showcasing the efficiency of our proposed model in power system management. Moreover, the model holds the potential to sup-port decision making processes, making it a valuable tool for stakeholders in the electricity sector.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"70 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141798582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hussien A. Al-mtory, Falih M. Alnahwi, Ramzy S. Ali
This paper presents a new optimization algorithm called the corrosion diffusion optimization algorithm (CDOA). The proposed algorithm is based on the diffusion behavior of pitting corrosion on a metal surface. CDOA utilizes the oxidation and reduction electrochemical reactions as well as the mathematical model of Gibbs free energy in its search for the optimal solution of a given problem. Unlike other algorithms, CDOA has the advantage of dispensing with any parameters that need to be set to improve convergence toward the optimal solution. The superiority of the proposed algorithm over others is highlighted by applying them to several unimodal and multimodal benchmark functions. The results show that CDOA performs better than the other algorithms in solving the unimodal functions regardless of the dimension of the variable. On the other hand, CDOA provides the best multimodal optimization solution for dimensions up to 20 (tested at 5, 10, 15, and 20), but it fails to solve this type of function for dimensions larger than 20. Moreover, the algorithm is also applied to two engineering application problems, namely PID controller tuning and the cantilever beam, to accentuate its high performance in solving engineering problems. The proposed algorithm results in minimized values of settling time, rise time, and overshoot for the PID controller: the rise time, settling time, and maximum overshoot are reduced to 0.0099, 0.0175, and 0.005 in the second-order system; to 0.0129, 0.0129, and 0 in the fourth-order system; to 0.2339, 0.7756, and 0 in the fifth-order system; to 1.5683, 2.7102, and 1.80E-4 in the fourth-order system containing time delays; and to 0.403, 0.628, and 0 in the simple mass-damper system, respectively (times in seconds). In addition, it provides the best fitness function value for the cantilever beam problem compared with some other well-known algorithms.
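The abstract does not give CDOA's update equations, so the sketch below only illustrates the benchmarking protocol it describes: unimodal (sphere) and multimodal (Rastrigin) test functions evaluated over several dimensions, with SciPy's differential evolution standing in for the optimizer under test. CDOA, or any other metaheuristic, would be plugged in through the same fitness-function/bounds interface.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Benchmark functions of the two families discussed above:
def sphere(x):             # unimodal: single global minimum at the origin
    return np.sum(x ** 2)

def rastrigin(x):          # multimodal: many local minima, global minimum 0 at origin
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def benchmark(optimizer, dims=(5, 10, 15, 20)):
    """Run the optimizer on both function classes for several dimensions."""
    for d in dims:
        bounds = [(-5.12, 5.12)] * d
        for name, f in (("sphere", sphere), ("rastrigin", rastrigin)):
            best = optimizer(f, bounds)
            print(f"dim={d:3d}  {name:9s}  best fitness = {best:.3e}")

# Stand-in optimizer (SciPy differential evolution); CDOA would be swapped in
# here with the same interface: f(x) -> scalar fitness, box bounds -> best value.
def de_optimizer(f, bounds):
    result = differential_evolution(f, bounds, maxiter=200, seed=0, tol=1e-8)
    return result.fun

benchmark(de_optimizer)
```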
{"title":"A New Algorithm Based on Pitting Corrosion for Engineering Design Optimization Problems","authors":"Hussien A. Al-mtory, Falih M. Alnahwi, Ramzy S. Ali","doi":"10.37917/ijeee.20.2.16","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.16","url":null,"abstract":"This paper presents a new optimization algorithm called corrosion diffusion optimization algorithm (CDOA). The proposed algorithm is based on the diffusion behavior of the pitting corrosion on the metal surface. CDOA utilizes the oxidation and reduction electrochemical reductions as well as the mathematical model of Gibbs free energy in its searching for the optimal solution of a certain problem. Unlike other algorithms, CDOA has the advantage of dispensing any parameter that need to be set for improving the convergence toward the optimal solution. The superiority of the proposed algorithm over the others is highlighted by applying them on some unimodal and multimodal benchmark functions. The results show that CDOA has better performance than the other algorithms in solving the unimodal equations regardless the dimension of the variable. On the other hand, CDOA provides the best multimodal optimization solution for dimensions less than or equal to (5, 10, 15, up to 20) but it fails in solving this type of equations for variable dimensions larger than 20. Moreover, the algorithm is also applied on two engineering application problems, namely the PID controller and the cantilever beam to accentuate its high performance in solving the engineering problems. The proposed algorithm results in minimized values for the settling time, rise time, and overshoot for the PID controller. Where the rise time, settling time, and maximum overshoot are reduced in the second order system to 0.0099, 0.0175 and 0.005 sec., in the fourth order system to 0.0129, 0.0129 and 0 sec, in the fifth order system to 0.2339, 0.7756 and 0, in the fourth system which contains time delays to 1.5683, 2.7102 and 1.80 E-4 sec., and in the simple mass-damper system to 0.403, 0.628 and 0 sec., respectively.\u0000In addition, it provides the best fitness function for the cantilever beam problem compared with some other well-known algorithms.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"3 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141797548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The multiplication-accumulation (MAC) operation plays a crucial role in digital signal processing (DSP) applications, such as image convolution and filters, especially when performed on floating-point numbers to achieve a high level of accuracy. The performance of the MAC module relies heavily upon the performance of the multiplier utilized. This work offers a distinctive and efficient floating-point Vedic multiplier (VM), called the adjusted VM (AVM), to be utilized in the MAC module to meet modern DSP demands. The proposed AVM is based on the Urdhva-Tiryakbhyam-Sutra (UT-Sutra) approach and utilizes an enhanced design of the Brent-Kung carry-select adder (EBK-CSLA) to generate the final product. A (6*6)-bit AVM is designed first; it is then extended to design a (12*12)-bit AVM, which, in turn, is utilized to design a (24*24)-bit AVM. Moreover, the pipelining concept is used to optimize the speed of the offered (24*24)-bit AVM design. The proposed (24*24)-bit AVM can be used to achieve efficient multiplication of the mantissa part in a binary single-precision (BSP) floating-point MAC module. The proposed AVM architectures are modeled in VHDL, simulated, and synthesized with the Xilinx ISE 14.7 tool using several FPGA families. The implementation results demonstrated a noticeable reduction in delay and area occupation of 33.16% and 42.42%, respectively, compared with the most recent existing unpipelined design, and a reduction in delay of 44.78% compared with the existing pipelined design.
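The hardware contribution here is the AVM/EBK-CSLA datapath in VHDL; the short Python sketch below only illustrates the Urdhva-Tiryakbhyam ("vertically and crosswise") column structure that a Vedic multiplier computes in parallel, checked at the 24-bit width used for the single-precision mantissa.

```python
import random

def ut_multiply(a: int, b: int, width: int = 24) -> int:
    """Urdhva-Tiryakbhyam ('vertically and crosswise') binary multiplication.

    Column k of the product is the sum of all crosswise partial products
    a[i]*b[j] with i + j == k, after which the carries are propagated -- the
    same column structure a hardware Vedic multiplier evaluates in parallel.
    """
    a_bits = [(a >> i) & 1 for i in range(width)]   # LSB first
    b_bits = [(b >> i) & 1 for i in range(width)]

    # Vertically-and-crosswise column sums (no carries yet).
    columns = [0] * (2 * width)
    for k in range(2 * width - 1):
        for i in range(max(0, k - width + 1), min(k, width - 1) + 1):
            columns[k] += a_bits[i] * b_bits[k - i]

    # Ripple the carries through the columns to form the final product.
    product, carry = 0, 0
    for k, col in enumerate(columns):
        total = col + carry
        product |= (total & 1) << k
        carry = total >> 1
    return product

# Sanity check at the 24-bit width of the single-precision mantissa.
for _ in range(1000):
    x, y = random.getrandbits(24), random.getrandbits(24)
    assert ut_multiply(x, y) == x * y
```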
{"title":"Design Efficient Vedic-Multiplier for Floating-Point MAC Module","authors":"Fatima Tariq Hussein, Fatemah K. AL-Assfor","doi":"10.37917/ijeee.20.2.15","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.15","url":null,"abstract":"Multiplication-accumulation (MAC) operation plays a crucial role in digital signal processing (DSP) applications, such as image convolution and filters, especially when performed on floating-point numbers to achieve high-level of accuracy. The performance of MAC module highly relies upon the performance of the multiplier utilized. This work offers a distinctive and efficient floating-point Vedic multiplier (VM) called adjusted-VM (AVM) to be utilized in MAC module to meet modern DSP demands. The proposed AVM is based on Urdhva-Tiryakbhyam-Sutra (UT-Sutra) approach and utilizes an enhanced design for the Brent-Kung carry-select adder (EBK-CSLA) to generate the final product. A (6*6)-bit AVM is designed first, then, it is extended to design (12*12)-bit AVM which in turns, utilized to design (24*24)-bit AVM. Moreover, the pipelining concept is used to optimize the speed of the offered (24*24)-bit AVM design. The proposed (24*24)-bit AVM can be used to achieve efficient multiplication for mantissa part in binary single-precision (BSP) floating-point MAC module. The proposed AVM architectures are modeled in VHDL, simulated, and synthesized by Xilinx-ISE14.7 tool using several FPGA families. The implementation results demonstrated a noticeable reduction in delay and area occupation by 33.16% and 42.42%, respectively compared with the most recent existing unpipelined design, and a reduction in delay of 44.78% compared with the existing pipelined design.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"27 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141650491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hand gesture recognition is a quickly developing field with many uses in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews different ways to model hands, such as vision-based, sensor-based, and data glove-based techniques. It emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features like motion, depth, color, shape, and pixel values and their relevance in gesture recognition are discussed. Challenges faced in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance. Machine learning algorithms are used to classify and recognize gestures based on extracted features. The paper emphasizes the need for further research and advancements to improve hand gesture recognition systems' robustness, accuracy, and usability. This review offers valuable insights into the current state of hand gesture recognition, its applications, and its potential to revolutionize human-computer interaction and enable natural and intuitive interactions between humans and machines. In simpler terms, hand gesture recognition is a way for computers to understand what people are saying with their hands. It has many potential applications, such as allowing people to control computers without touching them or helping people with disabilities communicate. The paper reviews different ways to develop hand gesture recognition systems and discusses the challenges and opportunities in this area.
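The pipeline the review describes ends in a feature-based classifier; the toy sketch below shows only that final stage, with random vectors standing in for extracted motion/depth/colour/shape descriptors and an SVM as one of the machine-learning classifiers mentioned. The data and accuracy are placeholders for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: each row is one gesture sample described by
# extracted descriptors (motion, depth, colour, shape, pixel statistics, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))            # 500 samples, 32-dimensional features
y = rng.integers(0, 5, size=500)          # 5 gesture classes (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Classification stage of the pipeline: scale features, then an SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```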
{"title":"Advancements and Challenges in Hand Gesture Recognition: A Comprehensive Review","authors":"Bothina Kareem Murad, Abbas H. Hassin Alasadi","doi":"10.37917/ijeee.20.2.13","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.13","url":null,"abstract":"Hand gesture recognition is a quickly developing field with many uses in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews different ways to model hands, such as vision-based, sensor-based, and data glove-based techniques. It emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features like motion, depth, color, shape, and pixel values and their relevance in gesture recognition are discussed. Challenges faced in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance. Machine learning algorithms are used to classify and recognize gestures based on extracted features. The paper emphasizes the need for further research and advancements to improve hand gesture recognition systems' robustness, accuracy, and usability. This review offers valuable insights into the current state of hand gesture recognition, its applications, and its potential to revolutionize human-computer interaction and enable natural and intuitive interactions between humans and machines. In simpler terms, hand gesture recognition is a way for computers to understand what people are saying with their hands. It has many potential applications, such as allowing people to control computers without touching them or helping people with disabilities communicate. The paper reviews different ways to develop hand gesture recognition systems and discusses the challenges and opportunities in this area.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"200 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141681411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of multimedia technology, securing the transfer of images has become an urgent matter. Therefore, designing a high-speed, secure system for color images is a real challenge. A nine-dimensional (9D) chaotic-based digital/optical encryption scheme is proposed for double-color images in this paper. The scheme consists of cascaded digital and optical encryption parts. The nine chaotic sequences are grouped into three sets, where each set is responsible for encrypting one of the RGB channels independently. One set controls the fusion, XOR operation, and scrambling-based digital part. The other two sets are used to control the optical part by constructing two independent chaotic phase masks in the optical Fourier transform domain. A denoising convolutional neural network (DnCNN) is designed to enhance the robustness of the decrypted images against Gaussian noise. The simulation results prove the robustness of the proposed scheme, as the entropy factor reaches an average of 7.997 for the encrypted color Lena-baboon images with an infinite peak signal-to-noise ratio (PSNR) for the decrypted images. The designed DnCNN operates efficiently with the proposed encryption scheme, enhancing the performance against Gaussian noise: the PSNR of the decrypted Lena image is improved from 27.01 dB to 32.56 dB after applying the DnCNN.
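As a rough illustration of the digital stage only (chaotic keystream XOR followed by chaos-driven scrambling), the sketch below uses a one-dimensional logistic map as a stand-in for the paper's 9D system and omits the fusion step, the optical double-phase-mask stage, and the DnCNN.

```python
import numpy as np

def logistic_sequence(n: int, x0: float = 0.4, r: float = 3.99) -> np.ndarray:
    """Logistic map as a stand-in for one of the nine chaotic sequences."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def encrypt_channel(channel: np.ndarray, x0: float) -> np.ndarray:
    """XOR with a chaos-derived keystream, then chaos-driven pixel scrambling."""
    flat = channel.astype(np.uint8).ravel()
    seq = logistic_sequence(flat.size, x0=x0)
    keystream = np.floor(seq * 256).astype(np.uint8)
    xored = flat ^ keystream                      # diffusion (XOR stage)
    perm = np.argsort(seq)                        # scrambling (permutation stage)
    return xored[perm].reshape(channel.shape)

def decrypt_channel(cipher: np.ndarray, x0: float) -> np.ndarray:
    flat = cipher.ravel()
    seq = logistic_sequence(flat.size, x0=x0)
    keystream = np.floor(seq * 256).astype(np.uint8)
    inv = np.empty_like(flat)
    inv[np.argsort(seq)] = flat                   # undo the permutation
    return (inv ^ keystream).reshape(cipher.shape)

# Round-trip check on one 64x64 "channel"; each RGB channel would use its own keys.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
assert np.array_equal(decrypt_channel(encrypt_channel(img, 0.4), 0.4), img)
```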
{"title":"Design of High-Secure Digital/Optical Double Color Image Encryption Assisted by 9D Chaos and DnCNN","authors":"Rusul Abdulridha Muttashar, Raad Sami Fyath","doi":"10.37917/ijeee.20.2.14","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.14","url":null,"abstract":"With the rapid development of multimedia technology, securing the transfer of images becomes an urgent matter. Therefore, designing a high-speed/secure system for color images is a real challenge. A nine-dimensional (9D) chaotic-based digital/optical encryption schem is proposed for double-color images in this paper. The scheme consists of cascaded digital and optical encryption parts. The nine chaotic sequences are grouped into three sets, where each set is responsible for encryption one of the RGB channels independently. One of them controls the fusion, XOR operation, and scrambling-based digital part. The other two sets are used for controlling the optical part by constructing two independent chaotic phase masks in the optical Fourier transforms domain. A denoising convolution neural network (DnCNN) is designed to enhance the robustness of the decrypted images against the Gaussian noise. The simulation results prove the robustness of the proposed scheme as the entropy factor reaches an average of 7.997 for the encrypted color lena-baboon images with an infinite peak signal-to-noise ratio (PSNR) for the decrypted images. The designed DnCNN operates efficiently with the proposed encryption scheme as it enhances the performance against the Gaussian noise, where the PSNR of the decrypted Lena image is enhanced from 27.01 dB to 32.56 dB after applying the DnCNN.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"136 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141682547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, multimedia communication has become very widespread, and this requires it to be protected from attackers and transmitted securely for reliability. Encryption and decryption techniques are useful in providing effective security for speech signals, ensuring that these signals are transmitted securely and preventing third parties or the public from reading private messages. Owing to the rapid improvement in digital communications in recent years, the security of voice data transmitted over various networks has become a favored field of study. The contributions to audio encryption are discussed in this review. This comprehensive review focuses on presenting several kinds of methods for audio encryption and decryption; the analysis of these methods, with their advantages and disadvantages, has been investigated thoroughly. The methods are classified into encryption based on traditional techniques and encryption based on advanced chaotic systems. The chaotic systems are divided into two types, continuous-time and discrete-time, and are also classified by synchronization method and implementation method. In the fields of information and communications security, system designers face many challenges in cost, performance, and architecture design; field-programmable gate arrays (FPGAs) provide an excellent balance between computational power and processing flexibility. In addition, the encryption methods are classified based on chaos-based pseudo-random bit generators, fractional-order systems, and hybrid chaotic generator systems, which is an advantage of this review compared with previous ones. Audio encryption algorithms are presented, discussed, and compared, highlighting important advantages and disadvantages. Audio signals have a large volume and a strong correlation between data samples; therefore, if traditional cryptography systems are used to encrypt such huge data, they incur significant overhead. Standard symmetric encryption systems also have a small key space, which makes them vulnerable to attacks. On the other hand, encryption by asymmetric algorithms is not ideal due to low processing speed and high complexity. Therefore, great importance has been given to using chaos theory to encode audio files. When proposing an appropriate encryption method to ensure a high degree of security, the key space, which is the critical part of every encryption system, and the key sensitivity must be taken into account. The key sensitivity is related to the initial values and control variables of the chaotic system chosen for the audio encryption algorithm. In addition, the proposed algorithm should eliminate the problems of periodic windows, such as limited chaotic range and non-uniform distribution, while the quality of the recovered audio signal remains good, which confirms convenience, reliability, and high security.
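One of the scheme families the review classifies is the chaos-based pseudo-random bit generator (PRBG); the sketch below shows the idea on a toy 16-bit PCM signal, using a logistic map with a 0.5 threshold as the bit source. The real schemes surveyed here use higher-dimensional, fractional-order, or hybrid generators with far larger key spaces.

```python
import numpy as np

def chaotic_prbg(n_bits: int, x0: float = 0.37, r: float = 3.999) -> np.ndarray:
    """Chaos-based pseudo-random bit generator: iterate a logistic map and
    threshold each state at 0.5 to emit one key bit."""
    bits = np.empty(n_bits, dtype=np.uint8)
    x = x0
    for i in range(n_bits):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0
    return bits

def bits_to_uint16(bits: np.ndarray) -> np.ndarray:
    """Group the bit stream into 16-bit keystream words, one per audio sample."""
    weights = 1 << np.arange(15, -1, -1)
    return (bits.reshape(-1, 16) * weights).sum(axis=1).astype(np.uint16)

# Toy 16-bit PCM "audio": one second of a 440 Hz tone sampled at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
audio = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

key = bits_to_uint16(chaotic_prbg(16 * audio.size))
cipher = audio.view(np.uint16) ^ key           # encryption: sample XOR keystream
restored = (cipher ^ key).view(np.int16)       # decryption: XOR is its own inverse
assert np.array_equal(restored, audio)
```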
{"title":"Study of Chaotic-based Audio Encryption Algorithms: A Review","authors":"Alaa Shumran, Abdul-Basset A. Al-Hussein","doi":"10.37917/ijeee.20.2.8","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.8","url":null,"abstract":"Nowadays, multimedia communication has become very widespread and this requires it to be protected from attackers and transmitted securely for reliability. Encryption and decryption techniques are useful in providing effective security for speech signals to ensure that these signals are transmitted with secure data and prevent third parties or the public from reading private messages. Due to the rapid improvement in digital communications over the recent period up to the present, the security of voice data transmitted over various networks has been classified as a favored field of study in earlier years. The contributions to audio encryption are discussed in this review. This Comprehensive review mainly focuses on presenting several kinds of methods for audio encryption and decryption the analysis of these methods with their advantages and disadvantages have been investigated thoroughly. It will be classified into encryption based on traditional methods and encryption based on advanced chaotic systems. They are divided into two types, continuous-time system, and discrete-time system, and also classified based on the synchronization method and the implementation method. In the fields of information and communications security, system designers face many challenges in both cost, performance, and architecture design, Field Programmable gate arrays (FPGAs) provide an excellent balance between computational power and processing flexibility. In addition, encryption methods will be classified based on Chaos-based Pseudo Random Bit Generator, Fractional-order systems, and hybrid chaotic generator systems, which is an advantageous point for this review compared with previous ones. Audio algorithms are presented, discussed, and compared, highlighting important advantages and disadvantages. Audio signals have a large volume and a strong correlation between data samples. Therefore, if traditional cryptography systems are used to encrypt such huge data, they gain significant overhead. Standard symmetric encryption systems also have a small key-space, which makes them vulnerable to attacks. On the other hand, encryption by asymmetric algorithms is not ideal due to low processing speed and complexity. Therefore, great importance has been given to using chaotic theory to encode audio files. Therefore, when proposing an appropriate encryption method to ensure a high degree of security, the key space, which is the critical part of every encryption system, and the key sensitivity must be taken into account. The key sensitivity is related to the initial values and control variables of the chaotic system chosen as the audio encryption algorithm. 
In addition, the proposed algorithm should eliminate the problems of periodic windows, such as limited chaotic range and non-uniform distribution, and the quality of the recovered audio signal remains good, which confirms the convenience, reliability, and high security.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"30 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141358830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The increasing demand for electricity due to population expansion has led to frequent interruptions in electrical power, so backup power lines are found everywhere, especially in the education, health, banking, transportation, and communications sectors. DC sources are becoming widespread in these institutions owing to their low maintenance requirements, no need for refueling, and absence of pollutant emissions. The problems of DC systems are losses in the DC system components and changes in output voltage as the loads change. This research presents a power system that generates 1760 W of AC power from a battery bank. The system consists of a twin inverter, which reduces losses in the switches and filters and thus improves the efficiency and power factor of the system, and fuzzy logic controllers that regulate the output voltage of the converter and inverter. Modeling and simulation in MATLAB/Simulink showed that a constant load voltage is obtained with acceptable values of total harmonic distortion (THD) under different load and battery conditions.
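Since THD is the waveform-quality figure used to judge the inverter output, a small sketch of how it can be estimated from a sampled waveform via the FFT is given below. The 50 Hz test waveform and harmonic amplitudes are synthetic, not taken from the paper's Simulink model.

```python
import numpy as np

def thd(signal: np.ndarray, fs: float, f0: float, n_harmonics: int = 20) -> float:
    """Total harmonic distortion: RMS of harmonics 2..n relative to the
    fundamental, estimated from the FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def mag_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag_at(f0)
    harmonics = np.array([mag_at(k * f0) for k in range(2, n_harmonics + 1)])
    return np.sqrt(np.sum(harmonics ** 2)) / fundamental

# Synthetic inverter-like output: 50 Hz fundamental plus small 3rd/5th harmonics.
fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
v = (311 * np.sin(2 * np.pi * 50 * t)
     + 9 * np.sin(2 * np.pi * 150 * t)
     + 6 * np.sin(2 * np.pi * 250 * t))

print(f"THD = {100 * thd(v, fs, f0=50):.2f} %")   # about 3.5 % for this waveform
```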
{"title":"Design a Power System of 1760W Based on a Twin Inverter and a Fuzzy Logic Controller","authors":"Samhar Saeed Shukir","doi":"10.37917/ijeee.20.2.6","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.6","url":null,"abstract":"The increasing demand for electricity due to population expansion has led to frequent interruptions in electrical power, so there are backup power lines everywhere, especially in the sectors of education, health, banking, transportation and communications. DC sources are beginning to become widely spread in terms of low maintenance requirements, no need for refueling, and no pollutant emission in these institutions. The problems of DC systems are; losses in DC system components, and change in output voltage as loads change. This research presents a power system that generates 1760W AC power from batteries bank, the system consists of a twin inverter to reduce losses in switches and filters, and thus improving the efficiency and the power factor of the system, and fuzzy logic controllers to regulate the output voltage of the converter and inverter. Modeling and simulation in MATLAB / Simulink showed obtaining a constant load voltage with acceptable values of total harmonics distortion (THD) under different conditions of loads and batteries.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":" 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141366831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wijdan Yassen A. AlKarem, Eman Thabet Khalid, Khawla H. Ali
Automatic signature verification methods play a significant role in providing secure and authenticated handwritten signatures in many applications to prevent forgery problems, specifically in financial institutions, legal document transactions, etc. There are two types of handwritten signature verification methods: online (dynamic) and offline (static) verification. Besides, signature verification approaches can be categorized into two styles: writer-dependent (WD) and writer-independent (WI). Offline signature verification methods demand highly representative features of the signature image. Many studies have been proposed for WI offline signature verification; yet, there is a need to improve the overall accuracy. Therefore, the solution proposed in this paper depends on deep learning via a convolutional neural network (CNN) for signature verification and optimizes the overall accuracy measurements. The introduced model is trained on an English signature dataset. For model evaluation, the deployed model is used to make predictions on new data from an Arabic signature dataset to classify whether a signature is real or forged. The overall obtained accuracy is 95.36% on the validation dataset.
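A minimal PyTorch sketch of a genuine/forged CNN classifier of the general kind described is shown below; the input resolution, layer widths, and single training step are illustrative assumptions, not the paper's exact network or training setup.

```python
import torch
import torch.nn as nn

class SignatureCNN(nn.Module):
    """Small CNN that maps a grayscale signature image to a genuine/forged logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # single logit: genuine vs. forged

    def forward(self, x):                    # x: (batch, 1, H, W), e.g. 128x128
        return self.classifier(self.features(x).flatten(1))

model = SignatureCNN()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in images and labels.
images = torch.randn(16, 1, 128, 128)
labels = torch.randint(0, 2, (16, 1)).float()
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("toy loss:", loss.item())
```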
{"title":"Handwritten Signature Verification Method Using Convolutional Neural Network","authors":"Wijdan Yassen A. AlKarem, Eman Thabet Khalid, Khawla H. Ali","doi":"10.37917/ijeee.20.2.7","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.7","url":null,"abstract":"Automatic signature verification methods play a significant role in providing a secure and authenticated handwritten signature in many applications, to prevent forgery problems, specifically institutions of finance, and transections of legal papers, etc. There are two types of handwritten signature verification methods: online verification (dynamic) and offline verification (static) methods. Besides, signature verification approaches can be categorized into two styles: writer dependent (WD), and writer independent (WI) styles. Offline signature verification methods demands a high representation features for the signature image. However, lots of studies have been proposed for WI offline signature verification. Yet, there is necessity to improve the overall accuracy measurements. Therefore, a proved solution in this paper is depended on deep learning via convolutional neural network (CNN) for signature verification and optimize the overall accuracy measurements. The introduced model is trained on English signature dataset. For model evaluation, the deployed model is utilized to make predictions on new data of Arabic signature dataset to classify whether the signature is real or forged. The overall obtained accuracy is 95.36% based on validation dataset.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":" 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141366818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noor Kareem Jumaa, Auday A.H. Mohamad, Abbas Muhammed Allawy, Ali A. Mohammed
Ad-Hoc networks have an adaptive architecture, temporarily configured to provide communication between the wireless devices that form the network nodes. Forwarding packets from the source node to a remote destination node may require intermediate cooperative nodes (relay nodes), which may act selfishly because they are power-constrained. The nodes should exhibit cooperation even when faced with occasional selfish or non-cooperative behaviour from other nodes. Several factors affect the behaviour of nodes: the number of packets required to redirect, power consumption per node, power constraints per node, and grade of generosity. This article is based on a dynamic collaboration strategy, specifically Generous Tit-for-Tat (GTFT); it aims to model an Ad-Hoc network operating with the GTFT cooperation strategy, measure statistics for the data, and then analyze these statistics using the Taguchi Method (TM). The transfer speed and relay node performance both affect the factors that shape the network conditions and are analyzed using the Taguchi Method. The analyzed parameters are node throughput, the number of relay-request packets produced by a node per number of relay-request packets taken by a node, and the number of relay requests accepted by a node per number of relay requests made to a node. A Taguchi L9 orthogonal array was used to analyze node behaviour, and the results show that the effective parameters were the number of packets, power consumption, power constraint of the node, and grade of generosity. The tested parameters influence node cooperation in the following order: number of packets required to redirect (N), affecting behaviour by 6.8491%; power consumption per node (C), 0.7467%; power constraints per node (P), 0.6831%; and grade of generosity (ε), 0.4530%. The Taguchi experiments proved that the grade of generosity (GoG) is not the most influential factor at the highest productivity level, while the number of packets per second required to redirect also has an impact on node behaviour.
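For readers unfamiliar with the method, the sketch below shows how main effects are pulled out of a Taguchi L9(3^4) array: compute a larger-is-better S/N ratio per run, average it per factor level, and rank factors by the range of those level means. The array layout is the standard L9 design, but the response values are placeholders, not the paper's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors (N, C, P, GoG), 3 levels each.
L9 = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])
factors = ["N (packets)", "C (power consumption)", "P (power constraint)", "GoG"]

# Placeholder throughput responses for the nine runs (the measured values
# from the simulation would go here).
y = np.array([0.62, 0.58, 0.55, 0.71, 0.66, 0.60, 0.78, 0.73, 0.69])

# Larger-is-better signal-to-noise ratio for each run.
sn = -10 * np.log10(1.0 / y ** 2)

# Main effect of each factor: mean S/N at each level; the range (delta) ranks
# how strongly that factor influences node behaviour.
for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
    delta = max(level_means) - min(level_means)
    print(f"{name:24s} level means = {np.round(level_means, 3)}  delta = {delta:.3f}")
```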
{"title":"Taguchi Method Based Node Performance Analysis of Generous TIT- for-TAT Cooperation of AD-HOC Networks","authors":"Noor Kareem Jumaa, Auday A.H. Mohamad, Abbas Muhammed Allawy, Ali A. Mohammed","doi":"10.37917/ijeee.20.2.3","DOIUrl":"https://doi.org/10.37917/ijeee.20.2.3","url":null,"abstract":"Ad-Hoc networks have an adaptive architecture, temporarily configured to provide communication between wireless devices that provide network nodes. Forwarding packets from the source node to the remote destination node may require intermediate cooperative nodes (relay nodes), which may act selfishly because they are power-constrained. The nodes should exhibit cooperation even when faced with occasional selfish or non-cooperative behaviour from other nodes. Several factors affect the behaviour of nodes; those factors are the number of packets required to redirect, power consumption per node, and power constraints per node. Power constraints per node and grade of generosity. This article is based on a dynamic collaboration strategy, specifically the Generous Tit-for-Tat (GTFT), and it aims to represent an Ad-Hoc network operating with the Generous Tit-for-Tat (GTFT) cooperation strategy, measure statistics for the data, and then analyze these statistics using the Taguchi method. The transfer speed and relay node performance both have an impact on the factors that shape the network conditions and are subject to analysis using the Taguchi Method (TM). The analyzed parameters are node throughput, the amount of relay requested packets produced by a node per number of relays requested packets taken by a node, and the amount of accepted relay requested by a node per amount of relay requested by a node. A Taguchi L9 orthogonal array was used to analyze node behaviour, and the results show that the effect parameters were number of packets, power consumption, power constraint of the node, and grade of generosity. The tested parameters influence node cooperation in the following sequence: number of packets required to redirect (N) (effects on behaviour with a percent of 6.8491), power consumption per node (C) (effects on behaviour with a percent of 0.7467), power constraints per node (P) (effects on behaviour with a percent of 0.6831), and grade of generosity (ε) (effects on behaviour with a percent of 0.4530). Taguchi experiments proved that the grade of generosity (GoG) is not the influencing factor where the highest productivity level is, while the number of packets per second required to redirect also has an impact on node behaviour.","PeriodicalId":159746,"journal":{"name":"Iraqi Journal for Electrical and Electronic Engineering","volume":"6 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140660312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}