Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.023588
Waleed Rafique, Ayesha Khan, Ahmad S. Almogren, J. Arshad, Adnan Yousaf, Mujtaba Hussain Jaffery, Ateeq Ur Rehman, Muhammad Shafiq
: Excessive use of non-linear devices in industry produces current harmonics that degrade power quality, with an unfavorable effect on power system performance. In this research, a novel control-technique-based Hybrid Active Power Filter (HAPF) is implemented for reactive power compensation and harmonic current mitigation under balanced load, improving the Power Factor (PF), the Total Harmonic Distortion (THD), and the performance of the system. This work proposes a soft-computing technique based on Particle Swarm Optimization (PSO) and an adaptive fuzzy technique to avoid the phase delays caused by conventional control methods. Moreover, control algorithms based on instantaneous reactive and active current (id-iq) theory and instantaneous power (pq0) theory are implemented in SIMULINK. To prevent the degradation effect of disturbances on the system's
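The pq0 (instantaneous power) theory mentioned above rests on the Clarke transform of the three-phase voltages and currents. As a minimal illustrative sketch (not the authors' SIMULINK implementation; the function names are hypothetical), the instantaneous real and imaginary powers can be computed as:

```python
import math

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities to alpha-beta-0."""
    k = math.sqrt(2.0 / 3.0)
    alpha = k * (a - 0.5 * b - 0.5 * c)
    beta = k * (math.sqrt(3.0) / 2.0) * (b - c)
    zero = k * (a + b + c) / math.sqrt(2.0)
    return alpha, beta, zero

def instantaneous_pq(v_abc, i_abc):
    """Instantaneous real power p and imaginary power q (p-q theory).
    Zero-sequence power is omitted; it vanishes for balanced systems."""
    va, vb, _ = clarke(*v_abc)
    ia, ib, _ = clarke(*i_abc)
    p = va * ia + vb * ib          # instantaneous real power
    q = vb * ia - va * ib          # instantaneous imaginary power
    return p, q
```

For a balanced, in-phase three-phase system this yields a constant p and q = 0; harmonic content in a distorted load current appears as oscillations in p and q, which the active filter is driven to cancel.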
Title: Adaptive Fuzzy Logic Controller for Harmonics Mitigation Using Particle Swarm Optimization
Journal: Cmc-Computers Materials & Continua
Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.020264
Ibrahim Alsukayti, Aman Singh
: The psychologist Daniel Goleman proposed a theory on the importance of Emotional Intelligence for success in an individual's life, stating that "the contribution of an individual's Intelligence Quotient (IQ) is only 20% of their success; the remaining 80% is due to Emotional Intelligence (EQ)". However, in the absence of a reliable technique for EQ evaluation, this factor of overall intelligence is ignored in most intelligence evaluation mechanisms. This research presents an analysis based on basic statistical tools along with more sophisticated deep learning tools. The proposed cross-intelligence evaluation uses two similar aspects, EQ and Social Intelligence (SQ), to estimate EQ using a model trained on an SQ dataset. The presented analysis establishes the resemblance between the emotional and social intelligence of an individual. The research validates the results with standard statistical tools and inspects them practically with deep learning tools. The Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF) and a Social IQ dataset are deployed over a Multi-layered Long Short-Term Memory (M-LSTM) deep learning model for assessing the resemblance between EQ and SQ. The trained M-LSTM model registered a high positive resemblance between emotional and social intelligence and concluded that the resemblance factor between the two is more than 99.84%. This degree of resemblance allows future researchers to calculate human emotional intelligence with the help of social intelligence. This flexibility also allows the use of Big Data available on social networks to calculate the emotional intelligence of an individual.
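The abstract does not define how the "resemblance factor" between EQ and SQ scores is computed. As one plausible, purely hypothetical proxy (not the paper's M-LSTM pipeline), the resemblance between paired score vectors could be expressed as the magnitude of their Pearson correlation:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def resemblance_percent(eq_scores, sq_scores):
    """Hypothetical resemblance metric: |Pearson correlation| as a percentage."""
    return 100.0 * abs(pearson(eq_scores, sq_scores))
```

Perfectly linearly related EQ and SQ scores would then give a resemblance of 100%.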
Title: Cross Intelligence Evaluation for Effective Emotional Intelligence Estimation
Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.019071
Wafa Difallah, F. Bounaama, B. Draoui, Khelifa Benahmed, A. Laaboudi
: Water conservation starts with rationalizing irrigation, as it is the largest consumer of this vital resource. Given the critical and urgent nature of this issue, several works have been proposed. The idea of most researchers is to develop irrigation management systems that meet the water needs of plants with optimal use of this resource. In fact, the irrigation water requirement is only the amount of water that must be applied to compensate for evapotranspiration losses. The Penman-Monteith equation is the most common formula for evaluating reference evapotranspiration, but it requires many factors that are often unavailable. This leads to a trend towards behavioral model estimation. System identification with control is one of the most promising applications in this area. The idea behind this proposal comprises three stages. First, reference evapotranspiration (ET0) is estimated by a linear ARX model, where temperature, relative humidity, insolation duration, and wind speed are used as inputs, and ET0 estimated by the Penman-Monteith equation as output. The results show that the values estimated by this method were in good agreement with the measured data. The second part of this paper manages the quantity of water; for this purpose, two controllers are tested, lead-lag and PID. To adjust the first controller and optimize the choice of its parameters, the Nelder-Mead algorithm is used. In the last part, a comparative study is done between the two controllers.
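The PID controller tested in the second stage can be sketched as a standard discrete parallel-form PID. This is a generic illustration, not the paper's tuned controller; the gains used below are placeholders:

```python
class PID:
    """Discrete PID controller (parallel form), sampled at a fixed period dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement):
        """Return the control action for one sample."""
        error = setpoint - measurement
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Closing the loop around a simple first-order plant, the integral term removes the steady-state error in the regulated quantity, which is the behavior the paper relies on for water-quantity control.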
Title: Model Identification and Control of Evapotranspiration for Irrigation Water Optimization
Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.021047
A. Baumy, Abeer D. Algarni, M. Abdalla, W. El-shafai, Fathi E. Abd El-Samie, Naglaa. F. Soliman
: This paper is concerned with a vital topic in image processing: color image forgery detection. The development of computing capabilities has led to a breakthrough in hacking and forgery attacks on signals, images, and data communicated over networks. Hence, there is an urgent need for efficient image forgery detection algorithms. Two main types of forgery are considered in this paper: splicing and copy-move. Splicing is performed by inserting a part of one image into another image. Copy-move forgery, on the other hand, is performed by copying a part of an image into another position in the same image. The proposed approach for splicing detection is based on the assumption that the illumination of the original and tampered images differs. To detect this difference, the homomorphic transform separates the illumination component from the reflectance component. The derivative of the illumination histogram is used to detect the difference in illumination, and hence forgery detection is accomplished. Prior to the forgery detection process, some pre-processing techniques, including histogram equalization, histogram matching, high-pass filtering, homomorphic enhancement, and single-image super-resolution, are introduced to reinforce the details and changes between the original and embedded sections. The proposed approach for copy-move forgery detection uses the Speeded-Up Robust Features (SURF) algorithm, which extracts feature points and feature vectors. Searching for the copied partition is accomplished through matching with Euclidean distance and hierarchical clustering. In addition, some pre-processing methods, such as histogram equalization and single-image super-resolution, are used with the SURF algorithm. Simulation results prove the feasibility and robustness of the pre-processing step in the homomorphic detection and SURF detection algorithms for splicing and copy-move forgery detection, respectively.
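The homomorphic illumination/reflectance separation described above can be illustrated in one dimension: taking logarithms turns the multiplicative image model I = L * R into a sum, and a low-pass estimate of the log image (here a simple moving average) approximates the slowly varying illumination L. A minimal sketch under these assumptions (window size and function name are hypothetical, not the paper's filter):

```python
import math

def homomorphic_split(row, win=5):
    """Split a 1-D row of positive pixel values into illumination and
    reflectance estimates. The log-domain moving average approximates the
    slowly varying illumination; the residual is the reflectance detail."""
    logs = [math.log(p) for p in row]
    half = win // 2
    illum_log = []
    for i in range(len(logs)):
        lo, hi = max(0, i - half), min(len(logs), i + half + 1)
        illum_log.append(sum(logs[lo:hi]) / (hi - lo))
    reflect_log = [l - m for l, m in zip(logs, illum_log)]
    illum = [math.exp(m) for m in illum_log]
    reflect = [math.exp(r) for r in reflect_log]
    return illum, reflect
```

By construction the two estimated components multiply back to the original pixel values; the paper's splicing detector then inspects the derivative of the illumination histogram for inconsistencies.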
Title: Efficient Forgery Detection Approaches for Digital Color Images
Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.021899
Ayman Abd-Elhamed, M. Fathy, K. M. Abdelgaber
: The capability of piles to withstand horizontal loads is a major design issue. The current research work aims to investigate numerically the response of laterally loaded piles at working load, employing the concept of a beam-on-Winkler-foundation model. The governing differential equation for a laterally loaded pile on an elastic subgrade is derived. Based on the Legendre-Galerkin method and Runge-Kutta formulas of orders four and five, the flexural equation of long piles embedded in homogeneous sandy soils, with the modulus of subgrade reaction varying linearly with depth, is solved for both free- and fixed-headed piles. Mathematica, one of the leading computational software packages, was employed for the implementation of the solutions. The proposed numerical techniques provide the response along the entire pile length under the applied lateral load. The utilized numerical approaches are validated against experimental and analytical results of previously published works, showing a more accurate estimation of the response of laterally loaded piles. Therefore, the proposed approaches maintain mathematical simplicity together with accuracy comparable to the experimental results.
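The governing relation described above takes the standard beam-on-Winkler-foundation form; as a sketch (the symbols E_p I_p for the pile flexural rigidity and n_h for the constant of horizontal subgrade reaction are conventional notation assumed here, not taken from the paper):

```latex
E_p I_p \frac{\mathrm{d}^4 y}{\mathrm{d}z^4} + k(z)\, y = 0,
\qquad k(z) = n_h\, z ,
```

where y(z) is the lateral pile deflection at depth z. A subgrade modulus k(z) increasing linearly with depth is the usual idealization for the homogeneous sandy soils considered in the abstract.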
Title: Numerical Analysis of Laterally Loaded Long Piles in Cohesionless Soil
Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.019125
Oday A. Hassen, N. Azman Abu, Z. Zainal Abidin, Saad M. Darwish
: A robust smile recognition system could be widely used in many real-world applications. Classification of a facial smile in an unconstrained setting is difficult due to the wide variability of face images. In this paper, an adaptive model for smile expression classification is suggested that integrates a fast feature extraction algorithm and cascade classifiers. Our model takes advantage of the intrinsic association between face detection, smile, and other face features to alleviate the over-fitting issue on a limited training set and to improve classification results. The features are extracted so as to exclude any unnecessary coefficients from the feature vector, thereby enhancing the discriminatory capacity of the extracted features and reducing the computational cost. Still, the main causes of error in learning are noise, bias, and variance, and ensembling helps to minimize these factors. Combining multiple classifiers decreases variance, especially in the case of unstable classifiers, and may produce a more reliable classification than a single classifier. However, a shortcoming of bagging as an ensemble classifier is its random selection, where the classification performance relies on the chance of picking an appropriate subset of training items. The suggested model employs a modified form of bagging, error-based bootstrapping, when creating training sets to deal with this challenge. The experimental results for smile classification on the JAFFE, CK+, and CK+48 benchmark datasets show the feasibility of our proposed model.
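Error-based bootstrapping as described, re-drawing training sets with preference for previously misclassified samples rather than uniformly at random, can be sketched as follows. The weighting scheme is an illustrative assumption, not the paper's exact procedure:

```python
import random

def error_based_bootstrap(samples, errors, n=None, seed=0):
    """Draw a bootstrap training set in which samples with higher error
    weight (e.g. previously misclassified) are more likely to be re-drawn.
    With all-zero errors it degenerates to plain uniform bagging."""
    rng = random.Random(seed)
    n = n or len(samples)
    total = sum(errors)
    if total == 0:
        return [rng.choice(samples) for _ in range(n)]   # uniform bagging
    weights = [e / total for e in errors]
    return rng.choices(samples, weights=weights, k=n)
```

Each base classifier of the ensemble is then trained on one such weighted resample, steering later members toward the hard examples.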
Title: Realistic Smile Expression Recognition Approach Using Ensemble Classifier with Enhanced Bagging
Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.019048
M. Assad, I. Mahariq, Raymond Ghandour, M. Nazari, T. Abdeljawad
: Nanofluids are extensively applied in various heat transfer media to improve their heat transfer characteristics and hence their performance. The specific heat capacity of nanofluids, as one of their thermophysical properties, plays a principal role in the heat transfer of thermal media utilizing nanofluids. In this regard, different studies have been carried out to investigate the factors influencing nanofluid specific heat. Moreover, several regression models based on correlations or artificial intelligence have been developed for forecasting this property of nanofluids. In the current review paper, the parameters influencing the specific heat capacity of nanofluids are introduced. Afterwards, the models proposed for forecasting and modeling this property are surveyed. According to the reviewed works, the concentration and properties of the solid structures, in addition to temperature, affect specific heat capacity to a large extent and must be considered as model inputs. Moreover, by using other effective factors, the accuracy and comprehensiveness of the models can be improved. Finally, some suggestions are offered for upcoming work on related topics.
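A common baseline among the correlation-type models such a review covers is the thermal-equilibrium mixture rule, which blends particle and base-fluid heat capacities weighted by density and volume fraction. A sketch under that assumption (this is a standard literature correlation, not necessarily one the paper endorses; symbol names are ours):

```python
def nanofluid_cp(phi, rho_p, cp_p, rho_bf, cp_bf):
    """Thermal-equilibrium mixture model for nanofluid specific heat.
    phi: particle volume fraction; rho_*: densities [kg/m^3];
    cp_*: specific heats [J/(kg K)] of particle (p) and base fluid (bf)."""
    num = phi * rho_p * cp_p + (1.0 - phi) * rho_bf * cp_bf
    den = phi * rho_p + (1.0 - phi) * rho_bf
    return num / den
```

For example, with alumina particles (rho ≈ 3970 kg/m^3, cp ≈ 765 J/(kg K)) in water (rho ≈ 997 kg/m^3, cp ≈ 4179 J/(kg K)), increasing phi lowers the mixture's specific heat, which is the concentration effect the review highlights.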
Title: Utilization of Machine Learning Methods in Modeling Specific Heat Capacity of Nanofluids
Pub Date: 2022-01-01  DOI: 10.32604/cmc.2022.024345
Mohamed Amin Ben Atitallah, R. Kachouri, A. Ben Atitallah, H. Mnif
: In the context of constructing an embedded system to help visually impaired people interpret text, this paper proposes an efficient High-Level Synthesis (HLS) Hardware/Software (HW/SW) design for text extraction using the Gamma Correction Method (GCM). Indeed, the GCM is a common method used to extract text from complex color images and video. The purpose of this work is to study the complexity of the GCM method on the Xilinx ZCU102 FPGA board and to propose a HW implementation of the critical blocks of this method as an Intellectual Property (IP) block using the HLS flow, while taking into account the quality of the text extraction. This IP is integrated and connected to the ARM Cortex-A53 as a coprocessor in a HW/SW co-design context. The experimental results show that the HLS HW/SW implementation of the GCM method on the ZCU102 board reduces processing time by about 89% compared to the SW implementation, while delivering the same text-extraction quality as the SW implementation.
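The core pointwise operation of gamma correction, around which the GCM builds its text extraction, maps each normalized pixel through a power law. A minimal sketch of that operation alone (the full GCM pipeline and its HW mapping are beyond this illustration; function names are ours):

```python
def gamma_correct(pixel, gamma):
    """Gamma-correct one 8-bit pixel value: out = 255 * (in / 255) ** gamma.
    gamma < 1 brightens mid-tones; gamma > 1 darkens them."""
    return round(255.0 * (pixel / 255.0) ** gamma)

def gamma_correct_row(row, gamma):
    """Apply gamma correction to a row of 8-bit pixel values."""
    return [gamma_correct(p, gamma) for p in row]
```

Because the mapping is a pure per-pixel function, it pipelines naturally in an HLS-generated IP block, which is consistent with the speed-up the authors report for the hardware implementation.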
Title: An Efficient HW/SW Design for Text Extraction from Complex Color Image