A dual-feed, dual-band reconfigurable antenna is designed, analyzed, and prototyped in this work for fixed satellite service communication applications. The design occupies a compact footprint of 24 × 21 × 1.2 mm on an FR4 substrate and presents a 50-ohm input impedance at both ports. The proposed model additionally offers circular polarization in both resonating bands. The PIN-diode-based switching conditions and the frequency-reconfigurability analysis show close agreement between simulation and measurement. The combination of dual-band resonance, frequency reconfigurability, and compact size makes this model an attractive candidate in the specified field, with considerable gain (8.5 dB) and efficiency (80%).
{"title":"A Four Slot Dual Feed and Dual Band Reconfigurable Antenna for Fixed Satellite Service Applications","authors":"T. V. Suri Apparao, G. Karunakar","doi":"10.32985/ijeces.14.10.9","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.9","url":null,"abstract":"A dual feed and dual-band reconfigurable antenna is designed, analyzed, and prototyped in this work for fixed satellite service communication applications. The designed model occupies the compact dimension of 24X21X1.2 mm on FR4 substrate and provides an input impedance of 50 ohms at both ports. The proposed model offers additional circular polarization characteristics at both the resonating bands. The PIN diode-based switching conditions, and the frequency reconfigurability analysis in both simulation and measurement are almost match. The combination of dual-band resonance, frequency reconfigurable nature, and compact dimension makes this model an attractive candidate in the specified field with considerable gain (8.5 dB) and efficiency (80%).","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"9 4","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138977149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The primary goal of any object-oriented (O-O) module is to build classes with highly coherent interaction between variables and methods. Various cohesion-oriented metrics, operating at both the design and the code level, have been established to improve the quality of O-O software. However, these metrics still fall short of fully measuring the cohesion of O-O software. Based on several concepts of cohesive interlinkage between variables and procedures, this study proposes an enhanced cohesion metric, focusing on four forms of cohesive linkage (VMRv, VMMv, VMRTv, and VMOv) between variables and procedures. An axiomatic frame of reference was employed for theoretical validation, and univariate logistic regression, applied in the MATLAB environment, for empirical validation. Univariate logistic regression was adopted because it yields highly accurate estimates and can be applied even to linearly separable datasets. As evidenced by both training and testing on a real dataset for reusability prediction, the proposed VMICM metric achieves high precision, recall, and R2 values with a low RMSE, and the high cohesion it captures is the hallmark of a highly reusable O-O module. The results demonstrate that the proposed metric can serve as a measure for predicting the reusability of O-O systems.
{"title":"Empirical Validation of Variable Method Interaction Cohesion Metric (VMICM) for Enhancing Reusability of Object-Oriented (O-O) Software","authors":"Bharti Bisht, Parul Gandhi","doi":"10.32985/ijeces.14.10.2","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.2","url":null,"abstract":"Any object-oriented (O-O) module's primary goal is to build classes with a high level of coherent interaction between variables and methods. To increase the quality of O-O (Object-Oriented) software, various metrics emphasizing cohesiveness have been established so far. These metrics operate on both the design and the code levels. However, these metrics still fall short of fully measuring the cohesion of object-oriented (O-O) software. Based on several concepts of cohesive interlinkages between variables and procedures, the study proposed an enhanced cohesion metric. The four forms of cohesive linkages (VMRv, VMMv, VMRTv, and VMOv) between variables and procedures were the focus of this study. The axiomatic frame of reference was employed for theoretical validation, and univariate logistic regression was applied in the MATLAB environment for empirical validation. The approach of univariate logistic regression has been adopted because it provides incredibly accurate data and can even be applied to datasets that can be linearly separated. The proposed metric exhibits high cohesion, which is the ultimate perspective of a highly reusable Object- Oriented (O-O) module, as evidenced by the testing phase and even training the real dataset with reusability prediction in terms of high values of precision, recall, R2, and low value of RSME of VMICM metric. The study results demonstrated that the proposed metric can act as a measure for predicting the reusability of the Object-Oriented (O-O) system.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"23 16","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139009287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transformer-based models have been utilized in natural language processing (NLP) for a wide variety of tasks such as summarization, translation, and conversational agents. Because these models can capture long-term dependencies within the input, they have significantly greater representational capability than Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). Nevertheless, they require significant computational resources in terms of memory usage and training time. In this paper, we propose a novel document categorization model with improved parameter efficiency that encodes text using a single, lightweight, multi-headed attention encoder block. The model also uses a hybrid word-and-position embedding to represent input tokens. The proposed model is evaluated on the Scientific Literature Classification (SLC) task and compared with state-of-the-art models previously applied to it. Ten datasets of varying sizes and class distributions were employed in the experiments. The proposed model shows significant performance improvements, with high parameter and computational efficiency compared with other transformer-based models, and outperforms previously used methods.
{"title":"Improving Scientific Literature Classification: A Parameter-Efficient Transformer-Based Approach","authors":"Mohammad Munzir Ahanger, M. Arif Wani","doi":"10.32985/ijeces.14.10.4","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.4","url":null,"abstract":"Transformer-based models have been utilized in natural language processing (NLP) for a wide variety of tasks like summarization, translation, and conversational agents. These models can capture long-term dependencies within the input, so they have significantly more representational capabilities than Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). Nevertheless, these models require significant computational resources in terms of high memory usage, and extensive training time. In this paper, we propose a novel document categorization model, with improved parameter efficiency that encodes text using a single, lightweight, multiheaded attention encoder block. The model also uses a hybrid word and position embedding to represent input tokens. The proposed model is evaluated for the Scientific Literature Classification task (SLC) and is compared with state-of-the-art models that have previously been applied to the task. Ten datasets of varying sizes and class distributions have been employed in the experiments. The proposed model shows significant performance improvements, with a high level of efficiency in terms of parameter and computation resource requirements as compared to other transformer-based models, and outperforms previously used methods.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"3 4","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139006514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The pre-processing of satellite data is a vital step in harnessing the full potential of remote sensing imagery. EgyptSat-1, Egypt's first Earth-observation satellite, faced a major obstacle: a considerable portion of the images it captured could not be used because the necessary radiometric coefficients were missing. This study uses a cross-calibration methodology, taking advantage of the spectral similarity of Spot 4 and Spot 5 as reference satellites, to retrieve these difficult-to-obtain coefficients. The analysis demonstrates that the window size chosen for the cross-calibration process is crucial to the outcome. Smaller window sizes generally produce better results, although larger windows are more successful in certain cases, such as the cross-calibration of EgyptSat-1's band 3 against Spot 5. In contrast to a previous study, the new methodology produces much smaller uncertainty factors, indicating a marked improvement in accuracy. The cross-calibration results highlight the importance of selecting the appropriate window size and reference satellite, especially for the Near-Infrared (NIR) band, which is highly sensitive to these parameters. Moreover, the offset and gain computed against Spot 4 differ from those computed against Spot 5, further highlighting the intricacies of radiometric calibration. The study yields improved calibration coefficients for EgyptSat-1 that maximize accuracy and minimize error.
{"title":"Estimating Egyptsat -1 Radiometric Coefficient using Cross Calibration with Spot4 and Spot5","authors":"Sayed Abdo, Ibrahim Ziedan, Asmaa Elyamany","doi":"10.32985/ijeces.14.10.6","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.6","url":null,"abstract":"The pre-processing of satellite data is a vital step in harnessing the full potential of remote sensing pictures. EgyptSat-1, Egypt's first satellite for observing the Earth from a distance, encountered a major obstacle as a considerable amount of the images it captured could not be used since the necessary radiometric coefficients were missing. This study utilises a cross-calibration methodology, taking advantage of the spectral similarity between Spot 4 and Spot 5 as reference satellites, in order to retrieve these difficult-to-obtain coefficients. The analysis demonstrates that the selection of window size in the cross-calibration process is crucial in determining the outcomes. In general, smaller window sizes tend to produce better results. However, there are certain cases when larger windows are more successful, such as in the scenario of EgyptSat-1's band 3 and its cross-calibration with Spot 5. In contrast to a previous study, the new methodology produces much diminished uncertainty factors, indicating a remarkable enhancement in accuracy. The cross-calibration results highlight the significance of selecting the appropriate window size and satellite for accurate calibration, especially for the Near-Infrared (NIR) band, which is highly responsive to these parameters. Moreover, there are differences in the computations of offset and gain between Spot 4 and Spot 5, which further highlight the intricacies involved in radiometric calibration. The results of this study lead to the determination of improved calibration coefficients for EgyptSat -1, with the specific aim of maximising the accuracy of the results and minimising any errors.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"3 8","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139008466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a novel approach to simulating the interaction between electromagnetic waves and a Debye medium using a Transmission Line Matrix algorithm with the symmetrical condensed node (SCN-TLM) technique. The proposed method works with the polarization current within the medium and incorporates the auxiliary differential equation (ADE) technique to handle scattering after the conventional discretization process. An averaged approximation couples the polarization current density J to the electric voltages. By reducing the number of operations required per iteration, the new ADE-TLM method decreases computational time compared with time-convolution techniques while maintaining comparable numerical accuracy. The efficiency and precision of this approach are confirmed by the agreement between the results obtained and those predicted by the analytic model.
{"title":"The New ADE-TLM Algorithm for Modeling Debye Medium","authors":"E. H. El ouardy, Hanan El Faylali","doi":"10.32985/ijeces.14.10.5","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.5","url":null,"abstract":"In this paper, we present a novel approach to simulating the interaction between electromagnetic waves and a Debye medium utilizing a Transmission Line Matrix (TLM) algorithm with the symmetrical condensed node (SCN-TLM) technique. The proposed method utilizes the polarization current within the media and incorporates the auxiliary differential equation (ADE) technique to handle scattering following the conventional discretization process. The averaged approximation is employed to utilize the polarization current density J and the electric voltage. By reducing the number of operations required per iteration, the New ADE- TLM method has successfully decreased the computational time compared to time convolution techniques. Despite this reduction in computational time, the New ADE-TLM method maintains a numerical accuracy that is comparable to that of time convolution techniques. The efficiency and precision of this approach are confirmed by the agreement between the results obtained and those predicted by the analytic model.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"56 9","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139006847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ultrasound is a non-invasive method for diagnosing and treating medical conditions, and portable ultrasound scanners are increasingly popular as a way to reduce patient wait times and make healthcare more convenient. Ultrasound imaging yields good-quality images and information about soft tissues, but interference from tissue-reflected ultrasound waves intensifies speckle noise and complicates imaging. In this paper, a novel Foe-Net is proposed for segmenting the fetus in ultrasound (US) images. Initially, the input US images pass through a noise-removal phase in which two different filters, an Adaptive Gaussian Filter (AGF) and an Adaptive Bilateral Filter (ABF), reduce noise artifacts. The images are then smoothed and enhanced using CLAHE and Multi-Scale Retinex (MSR), an enhancement technique that improves image quality by adjusting illumination and sharpening details. Finally, the denoised images are input to a V-Net that segments the fetus. Foe-Net was evaluated using parameters such as specificity, precision, and accuracy, achieving an overall accuracy of 99.48%, specificity of 98.56%, and precision of 96.82% for fetal segmentation in ultrasound images. It also attains better pre-processing outcomes, with low error rates and high SNR, PSNR, and SSIM values.
{"title":"FOE NET: Segmentation of Fetal in Ultrasound Images Using V-NET","authors":"Eveline Pregitha R., Vinod Kumar R. S., Ebbie Selvakumar C.","doi":"10.32985/ijeces.14.10.7","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.7","url":null,"abstract":"Ultrasound is a non-invasive method to diagnose and treat medical conditions. It is becoming increasingly popular to use portable ultrasound scanning devices to reduce patient wait times and make healthcare more convenient for patients. By using ultrasound imaging, you will be able to obtain images with better quality and also gain information about soft tissues. The interference caused by tissues reflected in ultrasound waves resulted in intensified speckle sound, complicating imaging. In this paper, a novel Foe-Net has been proposed for segmenting the fetal in ultrasound images. Initially, the input US images are noise removal phase using two different filters Adaptive Gaussian Filter (AGF) and Adaptive Bilateral Filter (ABF) used to reduce the noise artifacts. Then, the US images are enhanced using CLAHE and MSR for smoothing to enhance the image quality. Finally, the denoised images are input to the V-net is used to segment the fetal in the US images. The experimental outcomes of the proposed Multi-Scale Retinex (MSR) is an image enhancement technique that improves image quality by adjusting its illumination and enhancing details. Foe-Net was measured by specific parameters such as specificity, precision, and accuracy. The proposed Foe-Net achieves an overall accuracy of 99.48%, specificity of 98.56 %, and precision of 96.82 % for segmented fetal in ultrasound images. The proposed Foe-Net attains better pre-processing outcomes at low error rates and, high SNR, high PSNR, and high SSIM values.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"73 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139008186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal distribution network reconfiguration (DNR), distributed generation location and sizing (DGs-LS), tap changer adjustment (TCA), and capacitor bank location and sizing (CAs-SL) are different methodologies used to reduce losses and enhance the voltage profile of distribution systems. DNR changes the network topology by changing the states of both sectionalizing and tie switches. Optimal location seeks the best placement of DGs and capacitors within the distribution network, optimal sizing seeks their best output, and TCA seeks the optimal tap-changer position. These are challenging optimization problems that call for meta-heuristic techniques to find a globally optimal solution. This paper presents a new methodology that simultaneously solves the DNR, DGs-LS, TCA, and CAs-SL problems in distribution networks. The work aims to minimize active and reactive power losses while improving the voltage profile, using a multi-objective decision approach. The firefly algorithm (FA) optimizes the fitness function and the analytic hierarchy process (AHP) determines the objective weight factors, implemented in MATLAB. Several scenarios were considered on the IEEE 69-bus network. Reductions of 96.16% in active power losses and 92.7% in reactive power losses were achieved on the test system, evidencing the positive impact of the proposed methodology on distribution networks.
{"title":"Active and Reactive Power loss Minimization Along with Voltage profile Improvement for Distribution Reconfiguration","authors":"Ola Badran, Jafar Jallad","doi":"10.32985/ijeces.14.10.12","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.12","url":null,"abstract":"Optimal distribution network reconfiguration (DNR), distributed generations location and sizing (DGs-LS), tap changer adjustment (TCA), and capacitors bank location and sizing (CAs-SL) are different methodologies used to reduce loss and enhance the voltage profile of distribution systems. DNR is the process of changing the network topography by changing both sectionalized and tie switch states. The optimal location looks to find the optimal setting of the DG and CA within the distribution network. Optimal size seeks to find the optimal output generation of both DG and CA. The TCA looks to find the optimal position for TC. These methods are challenging optimization problems and resort to meta-heuristic techniques to find a globally optimal solution. This paper presents a new methodology with which to simultaneously solve the problem of DNR, DGs-LS, TCA, and CAs-SL in distribution networks. This work aims to minimize active and reactive power losses, including voltage profile improvement using a multi-objective decision approach. The firefly algorithm (FA) and analytic hierarchy process (AHP) are used to optimize the fitness function and determine the function weight factors through the use of MATLAB software. Several scenarios were considered on the IEEE 69-bus network. In terms of active power and reactive losses, reductions in the test system of 96.16% and 92.7%, respectively, were achieved, evidencing the positive impact of the proposed methodology on distribution networks.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"1 3","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139008424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Photovoltaic (PV) cells have non-linear characteristics influenced by environmental factors such as irradiance and temperature. Maximum power point tracking (MPPT) is therefore used to boost PV efficiency and extract the most energy the cells can provide. The traditional perturb-and-observe (P&O) MPPT approach has various drawbacks, including poor steady-state performance, increased oscillation around the maximum power point, and slow response. This work therefore presents a hybrid fuzzy logic (FL) and P&O MPPT approach to improve the performance of a PV system coupled to a lithium battery storage system. The suggested technique is implemented in MATLAB/Simulink and evaluated under rapid changes in irradiance. The simulation results show that the proposed strategy enhances the steady-state performance of PV systems in terms of oscillation and response time. Finally, the results are compared with those obtained by the conventional P&O technique: PV power stress is limited to ∆P = 1 kW and power overshoot to 5%.
{"title":"Artificial Intelligent Maximum Power Point Controller based Hybrid Photovoltaic/Battery System","authors":"Aymen Kadhim Mohaisen","doi":"10.32985/ijeces.14.10.11","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.11","url":null,"abstract":"Photovoltaic (PV) cells have non-linear properties influenced by environmental factors, including irradiation and temperature. As a result, a method known as maximum power point tracking (MPPT) was implemented to boost the PV cells' efficiency and make the most of the energy they could provide. The traditional perturb and observe (P&O) approach for determining the maximum power point tracking (MPPT) has various drawbacks, including poor steady-state performance, increased oscillation around the MPP point, and delayed reaction. As a result, this work aims to present a hybrid fuzzy logic (FL) and P&O MPPT approach to improve the PV system's performance coupled to the lithium battery storage system. Matlab/Simulink is used to bring the suggested technique to life, after which its efficacy is evaluated in the context of rapid changes in the irradiance level. According to the findings of the simulations, the suggested strategy has the potential to enhance the steady-state performance of PV systems in terms of oscillation and time response. Finally, the proposed results are compared with that obtained by the conventional P&O technique, and the stress of PV power is limited to ∆P=1kW and the overshoot power is limited to 5%.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"4 6","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139008463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agile software development methodology has been in vogue for a few decades, notably among small and medium enterprises. The omission of an explicit risk-identification approach turns a blind eye to a range of perilous risks, placing management in strenuous situations and precipitating dreadful issues at crucial stages of a project. To overcome this drawback, a novel Agile Software Risk Identification using Deep Learning (ASRI-DL) approach is proposed. It pairs a deep learning technique with the closed fishbowl strategy, helping the team find risks by prompting members to think from diverse perspectives and thereby widening risk coverage. The technique uses a multi-head Convolutional Neural Network (Multihead-CNN) to classify risks into 11 classes (over-doing, under-doing, mistakes, concept risks, changes, differences, difficulties, dependency, conflicts, issues, and challenges), producing a larger number of risk ideas as measured by score, criticality, and uniqueness. Descriptive statistics further demonstrate that participation and risk coverage under the proposed methodology exceeded the two baseline approaches, as a result of applying the closed fishbowl strategy and making use of the risk-identification aid. The proposed method is compared with existing techniques such as Support Vector Machines (SVM), Multi-Layer Perceptrons (MLP), Generalized Linear Models (GLM), and CNNs on parameters such as accuracy, specificity, and sensitivity. Experimental findings show that the proposed ASRI-DL technique achieves a classification accuracy of 99.16% with a small error rate after 50 training epochs.
{"title":"Multi-Head CNN-based Software Development Risk Classification","authors":"Ayesha Ziana M., Charles J.","doi":"10.32985/ijeces.14.10.1","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.1","url":null,"abstract":"Agile methodology for software development has been in vogue for a few decades, notably among small and medium enterprises. The omission of an explicit risk identification approach turns a blind eye to a range of perilous risks, thus dumping the management into strenuous situations and precipitating dreadful issues at the crucial stages of the project. To overcome this drawback a novel Agile Software Risk Identification using Deep learning (ASRI-DL) approach has been proposed that uses a deep learning technique along with the closed fishbowl strategy, thus assisting the team in finding the risks by molding them to think from diverse perspectives, enhancing wider areas of risk coverage. The proposed technique uses a multi-head Convolutional Neural Network (Multihead-CNN) method for classifying the risk into 11 classes such as over-doing, under-doing, mistakes, concept risks, changes, differences, difficulties, dependency, conflicts, issues, and challenges in terms of producing a higher number of risks concerning score, criticality, and uniqueness of the risk ideas. The descriptive statistics further demonstrate that the participation and risk coverage of the individuals in the proposed methodology exceeded the other two as a result of applying the closed fishbowl strategy and making use of the risk identification aid. The proposed method has been compared with existing techniques such as Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Generalized Linear Models (GLM), and CNN using specific parameters such as accuracy, specificity, and sensitivity. Experimental findings show that the proposed ASRI-DL technique achieves a classification accuracy of 99.16% with a small error rate with 50 training epochs respectively.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"52 24","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139007017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an effective speed-control method for brushed DC motors fed by a DC chopper using the concept of Finite Control Set Model Predictive Control (FCS-MPC). Because this control algorithm requires the parameters of the controlled plant, the motor parameters are first estimated from two types of data. The first is the output speed response to a step input voltage, used to obtain the transfer function in the no-load regime; the second consists of the motor speed and armature current when a load torque is applied to the shaft. The discrete-time equation of the armature circuit gives the future values of the armature current and the motor speed. A cost function is defined on the difference between the reference and the predicted motor speed, and the switching state of the DC chopper that optimizes this cost function is selected at each step. The performance of the proposed speed-control algorithm is validated on an experimental system. The simulation and experimental results show that the MPC controller outperforms the conventional proportional-integral (PI) controller.
{"title":"Parameter Estimation and Predictive Speed Control of Chopper-Fed Brushed DC Motors","authors":"Son Nguyen Thanh, Tuan Pham Van, Tu Pham Minh, Anh Hoang","doi":"10.32985/ijeces.14.10.10","DOIUrl":"https://doi.org/10.32985/ijeces.14.10.10","url":null,"abstract":"This paper presents an effective speed control method for brushed DC motors fed by a DC chopper using the concept of Finite Control Set-Model Predictive Control (FCS-MPC). As this control algorithm requires the parameters of the controlled object, the estimation of motor parameters is first performed by using two types of data. The first data includes the output speed response corresponding to the step input voltage to obtain the transfer function in the no-load regime. The second data consists of the motor speed and armature current when a load torque is applied to the motor shaft. The discrete-time equation of the motor armature circuit is used to obtain the future values of the armature circuit current and the motor speed. A cost function is defined based on the difference between the reference and predicted motor speed. The optimal switching states of the DC chopper are selected corresponding to the maximum value of the cost function. The performance of the proposed speed control algorithm is validated on an experimental system. The simulation and experimental results obtained show that the MPC controller can outperform the conventional proportional-integral (PI) controller.","PeriodicalId":41912,"journal":{"name":"International Journal of Electrical and Computer Engineering Systems","volume":"18 8","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139008990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}