Optimal DG allocation by Garra Rufa optimization for power loss reduction
R. K. Chillab, M. Smida, Aqeel S. Jaber, A. Sakly. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i3.779 (2023-08-23).

The rapid growth of distributed generation (DG) units has made their optimization necessary to cope with the increasing complexity of power grids and to reduce power losses. To minimize such losses, the optimal allocation of DG units must be correctly identified and applied. Garra Rufa optimization (GRO) is a mathematical optimization technique for solving very complex problems effectively and efficiently. In this work, GRO is used to identify the optimal placement and size of DG units so as to meet specific power-loss requirements. To validate the proposed method, GRO is compared with the genetic algorithm (GA) and particle swarm optimization (PSO) in MATLAB. The comparison shows that GRO outperforms the other methods in DG allocation, especially when more than two DGs are placed. The techniques are evaluated on the standard IEEE 30-bus test system.
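The abstract names PSO among the compared metaheuristics. To illustrate how such a population-based search looks for DG sizes that minimize a loss objective, here is a minimal particle-swarm sketch; the `toy_power_loss` surface and its coefficients are invented stand-ins for an actual load-flow calculation, not the authors' GRO method or their 30-bus case.

```python
import numpy as np

def toy_power_loss(sizes):
    # Hypothetical stand-in for a load-flow loss evaluation: losses fall as
    # DG supplies local load, then rise again with overinjection.
    optimum = np.array([0.4, 0.7, 0.2])  # assumed "ideal" DG sizes in p.u.
    return 0.05 + np.sum((sizes - optimum) ** 2)

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, (n_particles, dim))   # candidate DG sizes in [0, 1] p.u.
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull (own best) + social pull (swarm best)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, objective(g)

best_sizes, best_loss = pso(toy_power_loss, dim=3)
```

On this toy bowl-shaped objective the swarm converges close to the assumed optimum; a real study would replace `toy_power_loss` with a power-flow solver.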
Characterization of ink-based phantoms with deep networks and photoacoustic method
Hui Ling Chua, A. Huong, X. Ngu. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i3.621 (2023-08-23).

This study explores the feasibility of using an in-house photoacoustic (PA) system to predict blood-phantom concentrations with a pretrained AlexNet and a long short-term memory (LSTM) network. In two separate experiments, we investigate the performance of our strategy using a point laser source and a color-tunable light-emitting diode (LED) as the illumination source. A single-point transducer measures the signal change as ten different black-ink concentrations are added into a tube, and the resulting PA signals are used to train and test the deep networks. We found that the LED system at a wavelength of 450 nm gives the best characterization performance: the AlexNet and LSTM models tested on this dataset achieve average classification accuracies of 94% and 96%, respectively, making this the preferred wavelength for future operation. Our system may be used for noninvasive assessment of microcirculatory changes in humans.
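The workflow the abstract describes, acquiring PA signals per concentration and then training and testing a classifier, can be illustrated with a much simpler stand-in for the deep networks. The sketch below generates synthetic exponential-decay "signals" whose amplitude grows with concentration and classifies them with a nearest-centroid rule on peak amplitude; every signal parameter here is invented for illustration and does not reflect the paper's data, AlexNet, or LSTM.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for PA signals: amplitude grows with ink concentration,
# plus measurement noise (the paper uses real signals from 10 concentrations).
concentrations = np.arange(10)

def simulate_signal(c):
    t = np.linspace(0, 1, 64)
    return (0.5 + 0.1 * c) * np.exp(-5 * t) + rng.normal(0, 0.02, 64)

X_train = np.array([simulate_signal(c) for c in concentrations for _ in range(20)])
y_train = np.repeat(concentrations, 20)
X_test = np.array([simulate_signal(c) for c in concentrations for _ in range(5)])
y_test = np.repeat(concentrations, 5)

# Nearest-centroid classification on the peak amplitude of each signal
centroids = np.array([X_train[y_train == c].max(axis=1).mean() for c in concentrations])
pred = np.array([np.argmin(np.abs(centroids - x.max())) for x in X_test])
accuracy = (pred == y_test).mean()
```

Because the synthetic classes are well separated, even this trivial classifier scores highly; the point is the train/test structure, not the model.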
A novel credit scoring system in financial institutions using artificial intelligence technology
Geethamanikanta Jakka, Amrutanshu Panigrahi, Abhilash Pati, M. N. Das, Jyotsnarani Tripathy. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i2.824 (2023-08-22).

Financial institutions rely on credit scoring to evaluate the creditworthiness of a person or a company. Traditional credit-scoring algorithms frequently rely on manual, rule-based methods, which can be tedious and inaccurate. Recent developments in artificial intelligence (AI) have opened up possibilities for more reliable and effective credit-rating systems. The data are pre-processed by scaling with 0-1 normalization and by imputing missing values. Three feature-selection methodologies are covered: information gain (IG), gain ratio (GR), and chi-square. IG quantifies the reduction in total entropy obtained by adding a feature, GR normalizes IG by dividing it by the total entropy of the feature, and chi-square ranks the most vital traits by their chi-squared statistics. This research employs several machine learning (ML) models to develop a hybrid model for credit-score prediction: support vector machine (SVM), neural network (NN), decision tree (DT), random forest (RF), and logistic regression (LR) classifiers are combined with the IG, GR, and chi-square feature-selection methods on the Australian and German credit datasets. The study offers an understanding of which characteristics are informative and how ML performs in credit-prediction tasks. The empirical analysis shows that, on the German dataset, the DT with GR feature selection and hyperparameter optimization outperforms SVM and NN with an accuracy of 99.78%; on the Australian dataset, SVM with GR feature selection outperforms NN and DT with an accuracy of 99.98%.
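The IG and GR definitions quoted in the abstract (GR as IG divided by the feature's own entropy) can be written down directly. A minimal sketch on a toy credit-like dataset:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy in bits of a discrete label array.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    # Reduction in label entropy from splitting on a discrete feature.
    total = entropy(labels)
    values, counts = np.unique(feature, return_counts=True)
    weighted = sum(
        (n / len(labels)) * entropy(labels[feature == v])
        for v, n in zip(values, counts)
    )
    return total - weighted

def gain_ratio(feature, labels):
    # IG normalized by the entropy of the feature itself (split information).
    split_info = entropy(feature)
    ig = information_gain(feature, labels)
    return ig / split_info if split_info > 0 else 0.0

# Toy example: the feature perfectly predicts the label
feature = np.array([0, 0, 1, 1])
labels = np.array(["good", "good", "bad", "bad"])
ig = information_gain(feature, labels)  # 1.0 bit
gr = gain_ratio(feature, labels)        # 1.0
```

A feature that carries no information about the label (e.g. alternating values across mixed classes) scores an IG of zero under the same functions.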
Future transportation computing model with trifold algorithm for real-time multipath networks
Bharanidharan Chandrakasan, M. Subramanian, H. Manoharan, S. Selvarajan, Dr Rajanikanth Aluvalu. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i2.618 (2023-08-22).

Purpose: Over the past ten years, research on intelligent transportation systems (ITS) has advanced tremendously in everyday situations to deliver improved performance for transport networks. To prevent problems with vehicular traffic, it is essential that alarm messages be delivered on time. An ITS may itself run over a vehicular ad hoc network (VANET), an extension of wireless networking, so a previously discovered path between two nodes can be destroyed within a short period of time. Design: This research presents the time delay-based multipath routing (TMR) protocol, which efficiently determines the route that delivers packets to the target vehicle with the least time delay. The TMR method reduces data flow, especially for routine communication, so packet retransmissions are few. Findings: To demonstrate the effectiveness of the suggested protocol, TMR is evaluated against several protocols, including AOMDV, FF-AOMDV, EGSR, QMR, and ISR. Simulation outcomes show that the suggested approach performs well compared to these alternatives. Originality: Our method accomplishes two objectives. First, it increases the speed of data transmission, quickly delivering data packets, especially warning messages, to the target vehicle and helping to prevent vehicular issues such as automobile accidents. Second, it relieves network stress by minimizing network congestion and data collisions.
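A delay-minimal route of the kind a TMR-style protocol selects can be illustrated with Dijkstra's algorithm over per-link delays. This is a generic shortest-path sketch, not the published TMR protocol; the graph and its delay values are hypothetical.

```python
import heapq

def least_delay_path(graph, src, dst):
    # Dijkstra's algorithm where edge weights are link delays, returning the
    # minimum-delay route and its total delay.
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, delay in graph.get(u, {}).items():
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the destination to recover the route.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]

# Hypothetical VANET snapshot: nodes are vehicles, weights are link delays (ms)
graph = {
    "A": {"B": 10, "C": 3},
    "B": {"D": 2},
    "C": {"B": 2, "D": 9},
    "D": {},
}
route, delay = least_delay_path(graph, "A", "D")  # A -> C -> B -> D, 7 ms
```

In a real VANET the graph changes continuously as vehicles move, which is why routes must be recomputed or repaired when links break.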
Indian sign language recognition and search results
S. Musale, Kalyani Gargate, Vaishnavi Gulavani, Samruddhi Kadam, S. Kothawade. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i3.1000 (2023-08-22).

Sign language is a medium of communication for people with hearing and speech impairments; it uses gestures to convey messages. The proposed system focuses on using sign language in search engines, helping specially-abled people find the information they are looking for. Here we use Marathi sign language. Translation systems for Indian sign languages are not as simple or as popular as those for American Sign Language. In Marathi, each letter is formed of two parts, a Swara (vowel) and a Vyanjan (consonant, or Mulakshar). Every Vyanjan or Swara individually has a unique sign that can be represented as an image or a video of still frames. A letter formed of both a Swara and a Vyanjan is represented by the hand gesture signing the Vyanjan, moved along the shape of the Swara in Devanagari script; such letters are represented by videos containing motion, with frames in a particular sequence. The predicted term can then be searched on Google using the sign search. The proposed system comprises three steps: 1) hand detection; 2) sign recognition using neural networks; 3) fetching search results. Overall, the system has great potential to help individuals with hearing and speech impairments access information on the internet through sign language, and it is a promising application of machine learning and deep learning techniques.
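The three-step flow above (hand detection, sign recognition, search query) can be sketched as a pipeline of placeholder functions. Everything here is illustrative: detection is a brightness-threshold crop and recognition is a trivial template match, standing in for the neural network the system actually uses.

```python
import numpy as np

def detect_hand(frame, thresh=0.5):
    # Placeholder detector: crop a 16x16 window around the centroid of
    # bright pixels (a real system would use a trained hand detector).
    rows, cols = np.nonzero(frame > thresh)
    r, c = int(rows.mean()), int(cols.mean())
    r0, c0 = max(r - 8, 0), max(c - 8, 0)
    return frame[r0:r0 + 16, c0:c0 + 16]

def recognize_sign(hand_patch, templates):
    # Placeholder for the CNN classifier: nearest template by mean intensity.
    score = hand_patch.mean()
    return min(templates, key=lambda k: abs(templates[k] - score))

def build_search_query(letters):
    # Concatenate recognized letters into the text to be searched.
    return "".join(letters)

rng = np.random.default_rng(0)
frame = rng.random((64, 64)) * 0.2     # dim background
frame[16:48, 16:48] = 0.9              # bright "hand" region
templates = {"\u0915": 0.8, "\u0916": 0.3}  # assumed per-sign scores for क and ख
letter = recognize_sign(detect_hand(frame), templates)
query = build_search_query([letter])
```

The value of structuring the system this way is that each placeholder can be swapped for a real component (detector, recognizer, search client) without changing the pipeline.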
Comparative analysis of various global maximum power point tracking techniques for fuel cell frameworks
Mohammad Junaid Khan, Rashid Mustafa, Pushparaj Pal. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i2.703 (2023-08-18).

The efficiency and performance of fuel cell (FC) systems heavily rely on their ability to track the maximum power point (MPP) of the FC stack. This research article presents a comprehensive review and comparative analysis of various global maximum power point tracking (GMPPT) techniques developed for FC systems. These techniques aim to optimize power extraction from FCs, enhance system efficiency, and improve overall performance. Through a detailed investigation and evaluation of different GMPPT methods, this study sheds light on the advancements made in this field, identifies key challenges, and provides recommendations for future research directions. The findings of this research contribute to the development of more efficient and reliable FC systems for diverse applications.
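Among MPPT techniques, perturb-and-observe is a common baseline: step the operating current, keep going while power rises, reverse when it falls. Below is a minimal perturb-and-observe loop on an assumed toy fuel-cell polarization curve (the coefficients are invented, not from the article); the tracker climbs the power curve and then oscillates around the MPP, which is the characteristic behavior of this method.

```python
def stack_voltage(i):
    # Toy polarization curve: voltage sags as current rises
    # (activation and ohmic losses lumped together; coefficients assumed).
    return 1.0 - 0.03 * i - 0.001 * i * i

def perturb_and_observe(i0=1.0, step=0.1, iters=500):
    # Step the operating current; reverse direction whenever power drops.
    i, p_prev, direction = i0, 0.0, +1
    for _ in range(iters):
        p = i * stack_voltage(i)
        if p < p_prev:
            direction = -direction
        p_prev = p
        i = max(i + direction * step, 0.0)
    return i, p_prev

i_mpp, p_mpp = perturb_and_observe()
```

For this curve the analytical MPP sits near i = 10.8, so the tracker ends up oscillating in a small band around that current; the residual oscillation is exactly the drawback that more advanced GMPPT techniques try to reduce.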
Secure transmission of grayscale images with triggered error visual sharing
John Blesswin, S. Mary, Shubhangi Suryawanshi, Vanita G. Kshirsagar, Sarika Y. Pabalkar, Mithra Venkatesan, Catherine Esther Karunya. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i2.957 (2023-08-17).

In the digital era, data transfer plays a crucial role in industries such as banking, healthcare, marketing, and social media, and images are widely used as a means of communication. Cyber attackers pose a significant risk to data integrity and security during transmission; according to the Cost of a Data Breach Report 2021, the healthcare industry has experienced the highest costs associated with data breaches, highlighting the need for robust security measures. Visual cryptography (VC) secures image data during transmission by encrypting the image and dividing it into shares, which are then communicated to the intended recipients. No individual share reveals any classified information; at the destination, the shares are digitally combined to reconstruct the original image. Implementing VC requires balancing several factors, including security, computational complexity, and the quality of the reconstructed image. This paper proposes progressive meaningful visual cryptography (PMVC), a new method for transferring secret images that introduces an error instance to trigger the generation of meaningful shares. The proposed method preserves the quality of the reconstructed image, achieving a peak signal-to-noise ratio (PSNR) of up to 37 dB.
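The reported reconstruction quality is measured by PSNR, which compares the reconstructed image against the original via mean squared error. A minimal sketch of the standard PSNR computation for 8-bit grayscale images (the images below are synthetic, chosen so the error level lands near the 37 dB figure quoted above):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB between two grayscale images.
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = img + rng.normal(0, 3.6, img.shape)  # small reconstruction error
value = psnr(img, noisy)                     # roughly 37 dB for this noise level
```

Higher PSNR means the reconstruction is closer to the original; values in the mid-30s dB and above are generally considered good quality for natural images.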
An extensive study of facial expression recognition using artificial intelligence techniques with different datasets
Sridhar Reddy Karra, Arun L. Kakhandki. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i2.631 (2023-08-15).

Machine learning and deep learning (DL) algorithms have advanced to the point where a wide range of crucial real-world computer vision problems can be solved. Facial expression recognition (FER), the foremost channel of non-verbal intention and a fascinating study in symmetry, is one of these applications and has emerged as one of the most promising frontiers of deep learning in vision. Recently, DL-based FER models have been plagued by technical problems, including under-fitting and over-fitting, often because inadequate data are used for training. With these considerations in mind, this article gives a systematic and complete survey of the most cutting-edge AI strategies for FER and draws conclusions that address the aforementioned problems, including a compact classification scheme for existing facial-expression proposals. The survey analyses the structure of a typical FER pipeline and discusses the feasible technologies for each of its elements. In addition, it summarizes seventeen widely used FER datasets, reviews novel machine learning and DL networks suggested by academics, and outlines their benefits and liabilities for facial expression recognition based on static images. Finally, it discusses the research obstacles and open issues on the way to a well-conditioned facial expression recognition scheme.
Blended learning pedagogy and its implementation in the tertiary education: Bangladesh perspectives
Shrabonti Mitra, MD. Abdul Malek, Tanzin Sultana, Abhijit Pathak, Md. Jainal Abedin, Khadizatul Kobra, Md. Habib Ullah, Mayeen Uddin Khandaker. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i2.744 (2023-08-15).

This paper reviews the theoretical foundations and components of blended learning (BL) in higher education globally, analyzing six articles from five countries published between January 2016 and December 2020. The study identified challenges faced by instructors, including workload, timeliness, and lack of academic and technical skills to manage BL. Balancing face-to-face and online learning was also challenging. To address these issues, the importance of staff training, support, and networking was emphasized, proposing a modified BL model for tertiary education in Bangladesh, which could be implemented post-pandemic using a machine-learning approach. The mixed BL model was recommended for Bangladeshi institutions, utilizing machine learning algorithms to facilitate outcome-based learning through technological applications. A preliminary survey of 120 students from BGC Trust University in Bangladesh was conducted using statistical data obtained from machine learning algorithms to explore the applicability of the mixed-learning approach. Machine learning proved beneficial for data analysis, drawing valuable insights for educators and policymakers seeking effective teaching strategies that incorporate technology. This research underscores the potential of machine learning in conducting surveys and analyzing data related to blended learning in tertiary education, offering significant contributions to the field.
HDevChaRNet: A deep learning-based model for recognizing offline handwritten devanagari characters
Bharati Yadav, Ajay Indian, Gaurav Meena. Journal of Autonomous Intelligence, doi:10.32629/jai.v6i2.679 (2023-08-15).

Optical character recognition (OCR) converts text images into machine-readable text. Because few standard datasets of Devanagari characters are available, researchers have developed OCR systems with varying recognition rates using their own datasets. The main objective of the proposed study is to improve the recognition rate by analyzing the effect of using batch normalization (BN) instead of dropout in a convolutional neural network (CNN) architecture. To this end, a CNN-based model, HDevChaRNet (Handwritten Devanagari Character Recognition Network), is proposed for recognizing offline handwritten Devanagari characters from the Devanagari handwritten character dataset (DHCD). DHCD comprises 46 classes of characters, of which 36 are consonants and 10 are numerals. The proposed CNN models with BN showed improved accuracies of 98.75%, 99.70%, and 99.17% for the 36, 10, and 46 classes, respectively.
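The batch normalization step the study substitutes for dropout can be shown as a forward-pass transform: each feature is normalized over the batch, then scaled and shifted by learnable parameters. A minimal NumPy sketch of the training-time forward pass (gamma and beta fixed at their initial values of 1 and 0; a real CNN layer would learn them and also track running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature column over the batch dimension,
    # then apply the learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
activations = rng.normal(5.0, 3.0, (128, 16))  # batch of 128, 16 features
normalized = batch_norm(activations)
```

After the transform each feature has approximately zero mean and unit variance across the batch, which stabilizes training and is one reason BN can reduce the need for dropout as a regularizer.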