Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5198-5208
Olabode Idowu-Bismark, Francis F. Idachaba, Atayero
The proliferation of handheld devices has continued to push the demand for higher data rates. Network providers will use small cells as an overlay to macrocells in fifth-generation (5G) networks for capacity enhancement. Current cellular wireless backhauls suffer from insufficient backhaul capacity to cater to the new small cell deployment scenarios. Using the 3D digital map of Lagos Island in Wireless InSite, small cells are deployed in street-canyon and high-rise scenarios to simulate the backhaul links to the small cells at a 28 GHz center frequency and 100 MHz bandwidth. Using a user-defined signal to interference plus noise ratio-throughput (SINR-throughput) table based on an adaptive modulation and coding scheme (MCS), throughput values were generated from the equation specified in 3GPP TS 38.306 V15.2.0, which estimates the peak data rate from the modulation order and coding rate for each data stream calculated by the propagation model. Findings show that the achieved channel capacity is comparable with that of gigabit passive optical networks (GPON) used in fiber to the ‘X’ (FTTX) for backhauling small cells. The effects of channel parameters such as root mean squared (RMS) delay spread and RMS angular spread on channel capacity are also investigated and explained.
{"title":"Fifth-generation small cell backhaul capacity enhancement and large-scale parameter effect","authors":"Olabode Idowu-Bismark, Francis F. Idachaba, Atayero","doi":"10.11591/ijece.v13i5.pp5198-5208","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5198-5208","url":null,"abstract":"The proliferation of handheld devices has continued to push the demand for higher data rates. Network providers will use small cells as an overlay to macrocell in fifth-generation (5G) for network capacity enhancement. The current cellular wireless backhauls suffer from the problem of insufficient backhaul capacity to cater to the new small cell deployment scenarios. Using the 3D digital map of Lagos Island in the Wireless InSite, small cells are deployed on a street canyon and in high-rise scenarios to simulate the backhaul links to the small cells at 28 GHz center frequency and 100 MHz bandwidth. Using a user-defined signal to interference plus noise ratio-throughput (SINR-throughput) table based on an adaptive modulation and coding scheme (MCS), the throughput values were generated based on the equation specified by 3GPP TS 38.306 V15.2.0 0, which estimates the peak data rate based on the modulation order and coding rate for each data stream calculated by the propagation model. Finding shows achieved channel capacity is comparable with gigabit passive optical networks (GPON) used in fiber to the ‘X’ (FTTX) for backhauling small cells. The effect of channel parameters such as root mean squared (RMS) delay spread and RMS angular spread on channel capacity are also investigated and explained.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42758896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5932-5941
N. Bhaskar, Priyanka Tupe-Waghmare, Shobha S. Nikam, Rakhi Khedkar
In this paper, we propose an efficient home-based system for monitoring chronic kidney disease (CKD). As non-invasive disease identification approaches are gaining popularity, the proposed system is designed to detect kidney disease from saliva samples. Salivary diagnostics has grown in popularity over the last few years owing to its non-invasive sample collection technique. The use of salivary components to monitor and detect kidney disease is examined through an experimental investigation: we measured the amount of urea in the saliva sample to detect CKD. Further, this article explains the use of predictive analytics with machine learning techniques and data analytics in remote healthcare management. The proposed health monitoring system classified the samples with an accuracy of 97.1%. With internet facilities available everywhere, this methodology can offer better healthcare services, with real-time decision support in a remote monitoring platform.
{"title":"Computer-aided automated detection of kidney disease using supervised learning technique","authors":"N. Bhaskar, Priyanka Tupe-Waghmare, Shobha S. Nikam, Rakhi Khedkar","doi":"10.11591/ijece.v13i5.pp5932-5941","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5932-5941","url":null,"abstract":"In this paper, we propose an efficient home-based system for monitoring chronic kidney disease (CKD). As non-invasive disease identification approaches are gaining popularity nowadays, the proposed system is designed to detect kidney disease from saliva samples. Salivary diagnosis has advanced its popularity over the last few years due to the non-invasive sample collection technique. The use of salivary components to monitor and detect kidney disease is investigated through an experimental investigation. We measured the amount of urea in the saliva sample to detect CKD. Further, this article explains the use of predictive analysis using machine learning techniques and data analytics in remote healthcare management. The proposed health monitoring system classified the samples with an accuracy of 97.1%. With internet facilities available everywhere, this methodology can offer better healthcare services, with real-time decision support in remote monitoring platform.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49351207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5483-5490
Tawfiq Alrawashdeh, Khaldun G. Al-Moghrabi, Ali M. Al-Ghonmein
Typically, the problem of scheduling exams for universities aims to determine a schedule that satisfies logistics constraints, including the number of available exam rooms and the exam delivery mode (online or paper-based). The objective of this problem varies according to the university’s requirements. For example, some universities may seek to minimize operational costs, while others may work to minimize the schedule's length. Consequently, the objective imposed by the university affects the complexity of the problem. In this study, we present a grouping-based approach designed to address the problem of scheduling the exam timetable. The approach begins by profiling the courses’ exams based on their requirements, grouping exams with similar requirements to be scheduled at the same time. Then, an insertion strategy is used to obtain the exam schedule while satisfying the imposed constraints of the targeted university. We applied this approach to the problem of exam scheduling at Al-Hussein Bin Talal University in Jordan and achieved a balanced exam schedule that met all the imposed constraints.
{"title":"A profiling-based algorithm for exams’ scheduling problem","authors":"Tawfiq Alrawashdeh, Khaldun G. Al-Moghrabi, Ali M. Al-Ghonmein","doi":"10.11591/ijece.v13i5.pp5483-5490","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5483-5490","url":null,"abstract":"Typically, the problem of scheduling exams for universities aims to determine a schedule that satisfies logistics constraints, including the number of available exam rooms and the exam delivery mode (online or paper-based). The objective of this problem varies according to the university’s requirements. For example, some universities may seek to minimize operational costs, while others may work to minimize the schedule's length. Consequently, the objective imposed by the university affects the complexity of the problem. In this study, we present a grouping-based approach designed to address the problem of scheduling the exam timetable. The approach begins by profiling the courses’ exams based on their requirements, grouping exams with similar requirements to be scheduled at the same time. Then, an insertion strategy is used to obtain the exam schedule while satisfying the imposed constraints of the targeted university. We applied this approach to the problem of exam scheduling at Al-Hussein Bin Talal University in Jordan and achieved a balanced exam schedule that met all the imposed constraints.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44668559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5314-5332
Zh.B. Sadirmekova, M. Sambetbayeva, Sandugash Serikbayeva, G. Borankulova, A. Yerimbetova, A. Murzakhmetov
Currently, the need for automatic text processing is growing rapidly and, accordingly, new effective methods and tools for processing natural language texts are emerging. Although these methods, tools and resources are mostly available on the internet, many of them remain inaccessible to developers, since they are not systematized and are scattered across various directories and separate sites of both humanities and technical orientation. All this greatly complicates their discovery and practical use when conducting research in computational linguistics and developing applied systems for natural text processing. This paper addresses that need. Its goal is to develop a model of an intelligent information resource based on modern natural language processing methods (IIR NLP). The main purpose of IIR NLP is to provide convenient, valuable access for specialists in the field of computational linguistics. The originality of the proposed approach is that the developed ontology of the subject area “NLP” is used to systematize the above knowledge, data and information resources and to organize meaningful access to them, with semantic web standards and technology tools used as the software basis.
{"title":"Development of an intelligent information resource model based on modern natural language processing methods","authors":"Zh.B. Sadirmekova, M. Sambetbayeva, Sandugash Serikbayeva, G. Borankulova, A. Yerimbetova, A. Murzakhmetov","doi":"10.11591/ijece.v13i5.pp5314-5332","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5314-5332","url":null,"abstract":"Currently, there is an avalanche-like increase in the need for automatic text processing, respectively, new effective methods and tools for processing texts in natural language are emerging. Although these methods, tools and resources are mostly presented on the internet, many of them remain inaccessible to developers, since they are not systematized, distributed in various directories or on separate sites of both humanitarian and technical orientation. All this greatly complicates their search and practical use in conducting research in computational linguistics and developing applied systems for natural text processing. This paper is aimed at solving the need described above. The paper goal is to develop model of an intelligent information resource based on modern methods of natural language processing (IIR NLP). The main goal of IIR NLP is to render convenient valuable access for specialists in the field of computational linguistics. The originality of our proposed approach is that the developed ontology of the subject area “NLP” will be used to systematize all the above knowledge, data, information resources and organize meaningful access to them, and semantic web standards and technology tools will be used as a software basis.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46852933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5165-5178
Lenka Benova, L. Hudec
As network traffic increases and new intrusions occur, anomaly detection solutions based on machine learning are necessary to detect previously unknown intrusion patterns. Most of the developed models require a labelled dataset, which can be challenging to obtain owing to a shortage of publicly available datasets. These datasets are often too small to effectively train machine learning models, which further motivates the use of real unlabelled traffic. By using real traffic, it is possible to more accurately simulate the types of anomalies that might occur in a real-world network and improve the performance of the detection model. We present a method able to predict and categorize anomalies without the aid of a labelled dataset, demonstrating the model’s usability while also gathering a dataset from real noisy network traffic. The proposed long short-term memory (LSTM) based intrusion detection system was tested in the real-world setting of an antivirus company and successfully detected various intrusions using 5-minute windowing over both the predicted and real update curves, thereby demonstrating its usefulness. Our contribution is a robust model generally applicable to any hypertext transfer protocol (HTTP) traffic with almost real-time anomaly detection, which also outperforms earlier studies in terms of prediction accuracy.
{"title":"Web server load prediction and anomaly detection from hypertext transfer protocol logs","authors":"Lenka Benova, L. Hudec","doi":"10.11591/ijece.v13i5.pp5165-5178","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5165-5178","url":null,"abstract":"As network traffic increases and new intrusions occur, anomaly detection solutions based on machine learning are necessary to detect previously unknown intrusion patterns. Most of the developed models require a labelled dataset, which can be challenging owing to a shortage of publicly available datasets. These datasets are often too small to effectively train machine learning models, which further motivates the use of real unlabeled traffic. By using real traffic, it is possible to more accurately simulate the types of anomalies that might occur in a real-world network and improve the performance of the detection model. We present a method able to predict and categorize anomalies without the aid of a labelled dataset, demonstrating the model’s usability while also gathering a dataset from real noisy network traffic. The proposed long short-term memory (LTSM) based intrusion detection system was tested in a real-world setting of an antivirus company and was successful in detecting various intrusions using 5-minute windowing over both the predicted and real update curves thereby demonstrating its usefulness. Our contribution was the development of a robust model generally applicable to any hypertext transfer protocol (HTTP) traffic with almost real-time anomaly detection, while also outperforming earlier studies in terms of prediction accuracy.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46902758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp4886-4900
Somyod Santimalai, T. Tayjasanant
This paper describes the design and development of a tropical daylight-mimicking lighting system based on photometric, radiometric and International Commission on Illumination (CIE) standard melanopic performance derived from natural lighting cycles in Thailand. Spectral power distributions (SPDs) during daylight in summer and winter were recorded to create a dynamic artificial lighting system that best matches natural daylight characteristics. Two light emitting diode (LED) set-ups (LED-A and LED-B) were screened, developed, validated and compared with different chromaticity layouts of the correlated color temperatures (CCTs) allocated on the Planckian locus and later converted to x-y coordinates in a chromaticity diagram. Based on the CCT and Duv deviations between the two developed set-ups, LED-A could mimic circadian points on the chromaticity diagram better than LED-B did. The CCT and Duv values of LED-A (dCCT=3.75% and dDuv=17.36%) match the daylight more closely than those of LED-B (dCCT=5.0% and dDuv=56.84%). For CIE-standard melanopic performance (melanopic efficacy of luminous radiation (mELR), melanopic equivalent daylight (D65) illuminance (mEDI) and melanopic daylight efficacy ratio (mDER)), LED-A is suitable for indoor use with averages of 1.16 W·lm-1, 236 lx and 0.84, respectively, while LED-B is suitable for outdoor use with averages of 1.53 W·lm-1, 266 lx and 1.06, respectively. The proposed design can be used as a guideline to establish a daylight-mimicking LED lighting system from actual measurement data.
{"title":"Visual and melanopic performance of a tropical daylight-mimicking lighting: a case study in Thailand","authors":"Somyod Santimalai, T. Tayjasanant","doi":"10.11591/ijece.v13i5.pp4886-4900","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp4886-4900","url":null,"abstract":"This paper designed and developed a tropical daylight-mimicking lighting system based on photometric, radiometric and International Commission on Illumination (CIE) standard melanopic performances from natural lighting cycles in Thailand. Spectral power distribution (SPD) during daylight in summer and winter were recorded to create a dynamic artificial lighting system that best matches the natural daylight characteristics. Two set-ups light emitting diode (LED) (LED-A and LED-B) were screened, developed, validated and compared with different chromaticity layouts of the correlated color temperatures (CCTs) allocated on Planckian locus and later converted to x-y co-ordinates in a chromaticity diagram. Based on CCT and Duv deviations between two developed setups, LED-A could mimick circadian points on the chromaticity diagram better than LED-B did. CCT and Duv values of LED-A (dCCT=3.75% and dDuv=17.36%) can match closer to the daylight than those of LED-B (dCCT=5.0 % and dDuv=56.84%). For CIE-standard melanopic performances (melanopic efficacy of luminous radiation (mELR), melanopic equivalent daylight (D65) illuminance (mEDI) and melanopic daylight efficacy ratio (mDER)), LED-A is suitable to use indoor with averages of 1.16 W×lm-1, 236 lx and 0.84, respectively, while LED-B is good to use outdoor with averages of 1.53 W×lm-1, 266 lx and 1.06, respectively. The proposed design can be used as a guideline to establish a daylight-mimicking LED lighting system from actual measurement data.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47467265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5265-5272
Sopee Kaewchada, Somporn Ruang-On, U. Kuhapong, Kritaphat Songsri-in
The objectives of this research were to develop a model for forecasting vegetable prices in Nakhon Si Thammarat Province using random forest and to compare the forecast results for different crops. The data used in this paper were monthly climate data and average monthly vegetable prices collected between 2011 and 2020 from the Nakhon Si Thammarat meteorological station and the Nakhon Si Thammarat Provincial Commercial Office, respectively. We evaluated model performance based on mean absolute percentage error (MAPE), root mean squared error (RMSE), and mean absolute error (MAE). The experimental results showed that the random forest model was able to predict the prices of vegetables, including pumpkin, eggplant, and lentils, with high accuracy, with MAPE values of 0.09, 0.07, and 0.15; RMSE values of 1.82, 1.46, and 2.33; and MAE values of 3.32, 2.15, and 5.42, respectively. The forecast model derived from this research can be beneficial for vegetable planting planning in the Pak Phanang River Basin of Nakhon Si Thammarat Province, Thailand.
{"title":"Random forest model for forecasting vegetable prices: a case study in Nakhon Si Thammarat Province, Thailand","authors":"Sopee Kaewchada, Somporn Ruang-On, U. Kuhapong, Kritaphat Songsri-in","doi":"10.11591/ijece.v13i5.pp5265-5272","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5265-5272","url":null,"abstract":"The objectives of this research were developing a model for forecasting vegetable prices in Nakhon Si Thammarat Province using random forest and comparing the forecast results of different crops. The information used in this paper were monthly climate data and average monthly vegetable prices collected between 2011 – 2020 from Nakhon Si Thammarat meteorological station and Nakhon Si Thammarat Provincial Commercial Office, respectively. We evaluated model performance based on mean absolute percentage error (MAPE), root mean squared error (RMSE), and mean absolute error (MAE). The experimental results showed that the random forest model was able to predict the prices of vegetables, including pumpkin, eggplant, and lentils with high accuracy with MAPE values of 0.09, 0.07, and 0.15, with RMSE values of 1.82, 1.46, and 2.33, and with MAE values of 3.32, 2.15, and 5.42, respectively. The forecast model derived from this research can be beneficial for vegetable planting planning in the Pak Phanang River Basin of Nakhon Si Thammarat Province, Thailand.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47637116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp4835-4844
T.Papi Naidu, Ganapathy Balasubramanian, Bathina Venkateshwar Rao
Renewable energy generation is increasingly attractive since it is non-polluting and viable. Recently, the technical and economic performance of power system networks has been enhanced by integrating renewable energy sources (RES). This work focuses on sizing solar and wind generation to replace thermal generation and thereby decrease cost and losses in a large electrical power system. The Weibull and lognormal probability density functions are used to calculate the deliverable wind and solar power to be integrated into the power system. Owing to the uncertain and intermittent nature of these sources, their integration complicates the optimal power flow problem. This paper proposes an optimal power flow (OPF) formulation solved with the whale optimization algorithm (WOA) for a power system with integrated stochastic wind and solar power. The optimal capacity of RES alongside the thermal generators is determined by taking total generation cost as the objective function. The proposed methodology is tested on the IEEE 30-bus system to verify its usefulness. The obtained results show the effectiveness of WOA when compared with other algorithms such as the non-dominated sorting genetic algorithm (NSGA-II), grey wolf optimization (GWO) and particle swarm optimization-GWO (PSO-GWO).
{"title":"Optimal power flow with distributed energy sources using whale optimization algorithm","authors":"T.Papi Naidu, Ganapathy Balasubramanian, Bathina Venkateshwar Rao","doi":"10.11591/ijece.v13i5.pp4835-4844","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp4835-4844","url":null,"abstract":"Renewable energy generation is increasingly attractive since it is non-polluting and viable. Recently, the technical and economic performance of power system networks has been enhanced by integrating renewable energy sources (RES). This work focuses on the size of solar and wind production by replacing the thermal generation to decrease cost and losses on a big electrical power system. The Weibull and Lognormal probability density functions are used to calculate the deliverable power of wind and solar energy, to be integrated into the power system. Due to the uncertain and intermittent conditions of these sources, their integration complicates the optimal power flow problem. This paper proposes an optimal power flow (OPF) using the whale optimization algorithm (WOA), to solve for the stochastic wind and solar power integrated power system. In this paper, the ideal capacity of RES along with thermal generators has been determined by considering total generation cost as an objective function. The proposed methodology is tested on the IEEE-30 system to ensure its usefulness. Obtained results show the effectiveness of WOA when compared with other algorithms like non-dominated sorting genetic algorithm (NSGA-II), grey wolf optimization (GWO) and particle swarm optimization-GWO (PSO-GWO).","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42216430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5366-5373
Hicham Gueddah, Youssef Lachibi
Digital environments for human learning have evolved considerably in recent years thanks to advances in information technology. Computer-assisted text creation and editing tools represent a growing market in which natural language processing (NLP) concepts will be used. This is particularly the case for the automatic correction of spelling mistakes made daily by data operators. Unfortunately, these spellcheckers are considered writing-aid tools; they are unable to perform this task automatically without the user’s assistance. In this paper, we propose a filtered composition metric based on the weighting of two lexical similarity distances in order to achieve automatic correction. The approach developed in this article proceeds in two phases. The first correction phase combines two well-known distances: the edit distance, weighted by the proximity of keys on the Arabic keyboard and the calligraphic similarity between Arabic letters, is combined with the Jaro-Winkler distance to better weight and filter solutions that share the same metric value. The second phase acts as a booster of the first: a probabilistic bigram language model is applied, once candidate corrections have been identified, to discriminate between solutions that obtain the same lexical similarity measure in the first phase. Evaluation of our filtered composition measure on a dataset of errors yielded an automatic correction rate of 96%.
{"title":"Arabic spellchecking: a depth-filtered composition metric to achieve fully automatic correction","authors":"Hicham Gueddah, Youssef Lachibi","doi":"10.11591/ijece.v13i5.pp5366-5373","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5366-5373","url":null,"abstract":"Digital environments for human learning have evolved a lot in recent years thanks to incredible advances in information technologies. Computer assistance for text creation and editing tools represent a future market in which natural language processing (NLP) concepts will be used. This is particularly the case of the automatic correction of spelling mistakes used daily by data operators. Unfortunately, these spellcheckers are considered writing aids tools, they are unable to perform this task automatically without user’s assistance. In this paper, we suggest a filtered composition metric based on the weighting of two lexical similarity distances in order to reach the auto-correction. The approach developed in this article requires the use of two phases: the first phase of correction involves combining two well-known distances: the edit distance weighted by relative weights of the proximity of the Arabic keyboard and the calligraphical similarity between Arabic alphabet, and combine this measure with the JaroWinkler distance to better weight, filter solutions having the same metric. The second phase is considered as a booster of the first phase, this use the probabilistic bigram language model after the recognition of the solutions of error, which may have the same lexical similarity measure in the first correction phase. The evaluation of the experimental results obtained from the test performed by our filtered composition measure on a dataset of errors allowed us to achieve a 96% of auto-correction rate.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47322498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | DOI: 10.11591/ijece.v13i5.pp5674-5680
Trong Hieu Luu, Phan Nguyen Ky Phuc, T. Lam, Zhi-qiu Yu, Van Tinh Lam
Solar panel quality inspection is a time-consuming and costly task. This study develops a reliable method for evaluating panel quality by using an ensemble technique based on three machine learning models, namely logistic regression, support vector machine, and artificial neural network. The data in this study came from infrared camera images captured in a dark room. The panels are supplied with direct current (DC) power while the infrared camera is positioned perpendicular to the panel surface. The dataset is divided into four classes, each representing a level of damage percentage. The approach is suitable for systems with limited resources and a limited number of training images, a situation that is common in practice. Results show that the proposed method achieves an accuracy higher than 90%.
{"title":"Ensembling techniques in solar panel quality classification","authors":"Trong Hieu Luu, Phan Nguyen Ky Phuc, T. Lam, Zhi-qiu Yu, Van Tinh Lam","doi":"10.11591/ijece.v13i5.pp5674-5680","DOIUrl":"https://doi.org/10.11591/ijece.v13i5.pp5674-5680","url":null,"abstract":"Solar panel quality inspection is a time consuming and costly task. This study tries to develop as reliable method for evaluating the panels quality by using ensemble technique based on three machine learning models namely logistic regression, support vector machine and artificial neural network. The data in this study came from infrared camera which were captured in dark room. The panels are supplied with direct current (DC) power while the infrared camera is located perpendicular with panel surface. Dataset is divided into four classes where each class represent for a level of damage percentage. The approach is suitable for systems which has limited resources as well as number of training images which is very popular in reality. Result shows that the proposed method performs with the accuracy is higher than 90%.","PeriodicalId":38060,"journal":{"name":"International Journal of Electrical and Computer Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45837340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}