Pub Date: 2020-03-31. DOI: 10.5121/ijaia.2020.11201
Waheeda Almayyan
Parkinson’s disease is a complex chronic neurodegenerative disorder of the central nervous system. One of the common symptoms in Parkinson’s disease subjects is vocal performance degradation. Patients are usually advised to follow personalized rehabilitative treatment sessions with speech experts. Recent research trends aim to investigate the potential of using sustained vowel phonations to replicate speech experts’ assessments of Parkinson’s disease subjects’ voices. With the purpose of improving the accuracy and efficiency of Parkinson’s disease treatment, this article proposes a two-stage diagnosis model to evaluate an LSVT dataset. Firstly, we propose a modified minimum Redundancy-Maximum Relevance (mRMR) feature selection approach, based on Cuckoo Search and Tabu Search, to reduce the number of features. Secondly, we apply a simple random sampling technique to the dataset to increase the number of samples of the minority class. Promisingly, the developed approach obtained a classification accuracy of 95% with 24 features under the 10-fold cross-validation method.
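The classical mRMR criterion that the paper modifies can be sketched as a greedy search. The snippet below is an illustrative reconstruction assuming precomputed mutual-information values; it is not the Cuckoo/Tabu-enhanced variant proposed in the article:

```python
# Illustrative sketch of the plain mRMR greedy criterion: at each step,
# pick the feature that maximizes relevance to the class minus its mean
# redundancy with the features already selected.
def mrmr_select(relevance, redundancy, k):
    """relevance[i]: MI(feature i; class). redundancy[i][j]: MI(i; j)."""
    n = len(relevance)
    # Seed with the single most relevant feature.
    selected = [max(range(n), key=lambda i: relevance[i])]
    while len(selected) < k:
        remaining = [i for i in range(n) if i not in selected]

        def score(i):
            mean_red = sum(redundancy[i][j] for j in selected) / len(selected)
            return relevance[i] - mean_red

        selected.append(max(remaining, key=score))
    return selected
```

A metaheuristic such as Cuckoo or Tabu Search would replace the greedy `max` step with a guided exploration of feature subsets.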
Title: A Modified Maximum Relevance Minimum Redundancy Feature Selection Method Based on Tabu Search For Parkinson’s Disease Mining. International journal of artificial intelligence & applications.
Pub Date: 2020-01-31. DOI: 10.5121/ijaia.2020.11104
N. Khosla, D. Sharma
In this paper, a semi-supervised classifier is used to investigate a model for forecasting unpredictable load on IT systems and to predict extreme CPU utilization in a complex enterprise environment with a large number of applications running concurrently. The proposed model forecasts the likelihood of a scenario where an extreme load of web traffic impacts the IT systems, and predicts the CPU utilization under extreme stress conditions. The enterprise IT environment consists of a large number of applications running in a real-time system. Load features are extracted by analysing an envelope of the workload traffic patterns hidden in the transactional data of these applications. This method simulates and generates synthetic workload demand patterns, runs high-priority use-case scenarios in a test environment, and uses our model to predict excessive CPU utilization under peak load conditions for validation. The Expectation Maximization classifier with forced learning attempts to extract and analyse the parameters that maximize the likelihood of the model after resolving the unknown labels. As a result, the likelihood of excessive CPU utilization can be predicted in a short duration, compared to the few days otherwise needed in a complex enterprise environment. Workload demand prediction and profiling have enormous potential for optimizing the usage of IT resources with minimal risk.
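As a generic illustration of how Expectation Maximization can exploit a handful of known labels (the paper's forced-learning variant and its CPU-load features are not reproduced here), a minimal two-Gaussian sketch on 1-D data might look like:

```python
import math

def semi_supervised_em(xs, labels, iters=50):
    """xs: 1-D data points; labels[i] in {0, 1, None}.
    Two-component Gaussian mixture: labeled points keep fixed
    responsibilities, unlabeled ones are re-estimated each E-step."""
    mu = [min(xs), max(xs)]          # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        resp = []
        for x, y in zip(xs, labels):
            if y is not None:
                r1 = float(y)        # "forced" by the known label
            else:
                p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                     / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
                r1 = p[1] / (p[0] + p[1])
            resp.append(r1)
        # M-step: re-estimate means, variances and weights.
        for k in (0, 1):
            rk = [r if k == 1 else 1 - r for r in resp]
            s = sum(rk)
            mu[k] = sum(r * x for r, x in zip(rk, xs)) / s
            var[k] = sum(r * (x - mu[k]) ** 2 for r, x in zip(rk, xs)) / s + 1e-6
            w[k] = s / len(xs)
    return mu
```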
Title: Using Semi-supervised Classifier to Forecast Extreme CPU Utilization. International journal of artificial intelligence & applications, vol. 11, no. 1, pp. 45-52.
Pub Date: 2020-01-30. DOI: 10.5121/ijaia.2020.11105
B. Kanso
In this paper we present a hybrid technique that applies an ant colony optimization algorithm followed by a simulated annealing local search to solve the Multi-Depot Periodic Open Capacitated Arc Routing Problem (MDPOCARP). This problem is a new variant of OCARP that has never been studied in the literature; it consists of determining optimal routes in each period, where each route starts from a given depot, visits a list of required edges and finishes at the last one. The endpoint of the route is not required to be a depot. We developed a constructive heuristic, called the Nearest Insertion Heuristic (NIH), to build an initial solution. The proposed algorithm is evaluated on three different benchmark sets, and numerical results show that the proposed approach achieves highly efficient results.
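The simulated annealing stage hinges on the Metropolis acceptance rule; a minimal sketch of that rule (generic SA, not the paper's tuned implementation):

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis acceptance rule used by a simulated annealing local
    search: always accept an improving move (delta <= 0); accept a
    worsening move with probability exp(-delta / T), so high
    temperatures explore while low temperatures exploit."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

A cooling schedule (e.g. geometric, `T *= 0.95` per iteration) then drives the search from exploration toward a local optimum of the route cost.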
Title: Hybrid ANT Colony Algorithm for the Multi-depot Periodic Open Capacitated Arc Routing Problem. International journal of artificial intelligence & applications, vol. 11, no. 1, p. 53.
Pub Date: 2020-01-30. DOI: 10.5121/ijaia.2020.11107
Ibrahim Gashaw, H. Shashirekha
Many automatic translation works have addressed major European language pairs by taking advantage of large-scale parallel corpora, but very few research works have been conducted on the Amharic-Arabic language pair due to its parallel data scarcity. Moreover, there is no benchmark parallel Amharic-Arabic text corpus available for the Machine Translation task. Therefore, a small parallel Quranic text corpus is constructed by modifying the existing monolingual Arabic text and its equivalent Amharic translation available on Tanzile. Experiments are carried out on two Neural Machine Translation (NMT) models, based on Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), using an attention-based encoder-decoder architecture adapted from the open-source OpenNMT system. The LSTM- and GRU-based NMT models and the Google Translation system are compared, and the LSTM-based OpenNMT is found to outperform the GRU-based OpenNMT and the Google Translation system, with BLEU scores of 12%, 11%, and 6%, respectively.
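For context, BLEU rewards n-gram overlap with a reference translation. The toy function below computes only a unigram precision with a brevity penalty; it is a deliberately simplified stand-in for the corpus-level, multi-n-gram BLEU used to score the systems above:

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    """Simplified BLEU-style score: clipped unigram precision times a
    brevity penalty. Real BLEU combines modified 1- to 4-gram
    precisions over an entire test set."""
    cand, ref = candidate.split(), reference.split()
    # Clipped overlap: each reference word can be matched at most
    # as many times as it occurs in the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```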
Title: Construction of Amharic-arabic Parallel Text Corpus for Neural Machine Translation. International journal of artificial intelligence & applications, vol. 11, no. 1, pp. 79-91.
Pub Date: 2020-01-30. DOI: 10.5121/ijaia.2020.11103
W. Qasim, B. Mitras
This research hybridizes two algorithms. The first is Invasive Weed Optimization (IWO), a stochastic numerical algorithm; the second is Grey Wolf Optimization (GWO), one of the swarm-intelligence algorithms in intelligent optimization. Invasive weed optimization is inspired by nature, as weeds exhibit colonial behavior; it was introduced by Mehrabian and Lucas in 2006. Invasive weeds are a serious threat to cultivated plants because of their adaptability, and a threat to the overall planting process; the behavior of these weeds has been studied and applied in the invasive weed algorithm. The grey wolf algorithm, considered a swarm-intelligence algorithm, has been used to reach the goal and find the best solution. The algorithm was designed by Seyedali Mirjalili in 2014, and it takes advantage of pack intelligence to avoid falling into local solutions. We therefore propose a new hybridization of the GWO and IWO algorithms, denoted IWOGWO, and compare the suggested hybrid algorithm with the originals.
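The GWO position update that such a hybrid reuses can be sketched as follows (standard GWO equations from the 2014 paper; the IWOGWO coupling itself is not reproduced):

```python
import random

def gwo_step(wolf, alpha, beta, delta, a):
    """One Grey Wolf Optimizer position update: each wolf moves toward
    the average of positions induced by the three best wolves (alpha,
    beta, delta). The control parameter 'a' decays from 2 to 0 over the
    iterations, shifting the pack from exploration to exploitation."""
    def pull(leader, x):
        r1, r2 = random.random(), random.random()
        A = 2 * a * r1 - a           # |A| > 1 diverges (explore), < 1 attacks
        C = 2 * r2
        return leader - A * abs(C * leader - x)
    return [(pull(al, x) + pull(be, x) + pull(de, x)) / 3
            for al, be, de, x in zip(alpha, beta, delta, wolf)]
```

With `a = 0` the update collapses to the plain average of the three leaders, which is the pure-exploitation limit.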
Title: A Hybrid Algorithm Based on Invasive Weed Optimization Algorithm and Grey Wolf Optimization Algorithm. International journal of artificial intelligence & applications, vol. 11, no. 1, pp. 31-44.
Pub Date: 2020-01-30. DOI: 10.5121/ijaia.2020.11102
Christiana Panayiotou
The purpose of the current paper is to present an ontological analysis of the identification of a particular type of prepositional figure of speech via the identification of inconsistencies in ontological concepts. Prepositional noun phrases are widely used in a multiplicity of domains to describe real-world events and activities. However, one aspect that makes a prepositional noun phrase poetical is that it suggests a semantic relationship between concepts that does not exist in the real world. The current paper shows that a set of rules based on WordNet classes and an ontology representing human behaviour and properties can be used to identify figures of speech through the discrepancies in the semantic relations of the concepts involved. Based on this realization, the paper describes a method for determining poetic vs. non-poetic prepositional figures of speech using WordNet class hierarchies. The paper also addresses the problem of inconsistency resulting from the assertion of figures of speech in ontological knowledge bases, identifying the problems involved in their representation. Finally, it discusses how a contextualized approach might help to resolve this problem.
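As a toy illustration of the idea, a selectional-restriction check against a hand-made hypernym table (standing in for WordNet) can flag a phrase whose object violates the class its preposition normally selects for. The rules and vocabulary below are invented for illustration and are not the paper's rule set:

```python
# Hypothetical mini-hierarchy in place of WordNet hypernyms.
HYPERNYMS = {
    "stone": "physical_object", "gold": "physical_object",
    "wood": "physical_object", "silence": "abstraction",
    "grief": "abstraction",
}

def is_figurative(phrase_head, preposition, obj):
    """Toy rule: 'made of X' normally selects a physical object as X;
    an abstraction in that slot suggests a figure of speech
    (e.g. 'a wall made of silence')."""
    if phrase_head == "made" and preposition == "of":
        return HYPERNYMS.get(obj) != "physical_object"
    return False   # no rule fires: treat as literal
```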
Title: An Ontological Analysis and Natural Language Processing of Figures of Speech. International journal of artificial intelligence & applications, vol. 11, no. 1, pp. 17-30.
Pub Date: 2020-01-01. DOI: 10.5121/ijaia.2020.11403
Adele Peskin, Boris Wilthan, Michael Majurski
Using a unique data collection, we are able to study the detection of dense geometric objects in image data where object density, clarity, and size vary. The data is a large set of black-and-white images of scatterplots, taken from journals reporting thermophysical property data of metal systems, whose plot points are represented primarily by circles, triangles, and squares. We built a highly accurate single-class U-Net convolutional neural network model that identifies 97% of image objects in a defined set of test images, locating the centers of the objects to within a few pixels of the correct locations. We found an optimal way to mark our training data masks to achieve this level of accuracy. The optimal markings for object classification, however, required more information in the masks to identify particular types of geometries. We show a range of different patterns used to mark the training data masks, and how they help or hurt our dual goals of localization and classification. Altering the annotations in the segmentation masks can increase both the accuracy of object classification and localization on the plots, more than other factors such as adding loss terms to the network calculations. However, localization of the plot points and classification of the geometric objects require different optimal training data.
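One mask-marking strategy of the kind compared here, marking each training object with a small disk at its center, can be generated like this (a sketch; the paper's exact marking patterns differ):

```python
import numpy as np

def center_disk_mask(shape, centers, radius):
    """Build a binary segmentation mask that marks each object with a
    filled disk of the given radius at its (row, col) center, one way
    to annotate dense scatterplot glyphs for U-Net training."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=np.uint8)
    for cy, cx in centers:
        mask[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1
    return mask
```

Varying `radius`, or swapping the disk for a ring or a glyph-shaped footprint, is exactly the kind of annotation change the paper reports as trading off localization against classification accuracy.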
Title: DETECTION OF DENSE, OVERLAPPING, GEOMETRIC OBJECTS. International journal of artificial intelligence & applications, vol. 11, no. 4.
Pub Date: 2019-11-30. DOI: 10.5121/ijaia.2019.10601
Ibon Merino, J. Azpiazu, Anthony Remazeilles, B. Sierra
Detection and description of keypoints from an image is a well-studied problem in Computer Vision, and some methods, such as SIFT, SURF or ORB, are computationally very efficient. This paper proposes a solution for a particular case study on object recognition of industrial parts based on hierarchical classification. Reducing the number of instances each classifier must distinguish leads to better performance; indeed, that is precisely what hierarchical classification aims for. We demonstrate that this method performs better than using just one method such as ORB, SIFT or FREAK, despite being somewhat slower.
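The two-level dispatch idea can be sketched with a generic nearest-centroid stand-in for the actual detector/descriptor pipeline (toy feature vectors; the part names are invented):

```python
# Hierarchical recognition sketch: a coarse classifier first picks a
# part family, then a fine classifier chooses among only that family's
# members, so each stage discriminates between fewer instances.
def nearest(centroids, x):
    """Return the label whose centroid is closest to x (squared L2)."""
    return min(centroids, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(centroids[name], x)))

def hierarchical_classify(coarse, fine, x):
    family = nearest(coarse, x)
    return family, nearest(fine[family], x)
```

In the paper's setting the per-family stage could additionally use a different detector/descriptor pair (ORB, SIFT, FREAK, ...) chosen for that family.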
Title: 2D Features-based Detector and Descriptor Selection System for Hierarchical Recognition of Industrial Parts. International journal of artificial intelligence & applications, vol. 10, no. 1, pp. 1-13.
Pub Date: 2019-11-30. DOI: 10.5121/ijaia.2019.10605
A. Agarwal
Location-specific characteristics of a road segment, such as its geometry and surrounding road features, can contribute significantly to road accident risk. A Google Maps image of a road segment provides a comprehensive visual of its complex geometry and the surrounding features. This paper proposes a novel machine learning approach to accident risk prediction using Convolutional Neural Networks (CNN), unlocking the precise interaction of the many small road features that work in combination to contribute to a greater accident risk. The model has worldwide applicability and a very low cost and time effort to implement for a new city, since Google Maps is available in most places across the globe. It also significantly contributes to existing research on accident prevention by allowing highly detailed road geometry to weigh in on the prediction, as well as new location-based attributes such as proximity to schools and businesses.
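The core operation a CNN layer applies to such a map tile is a 2-D convolution over the pixel grid; a minimal valid-mode implementation, for illustration only (real CNN frameworks run this as an optimized batched primitive):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the
    image and sum elementwise products at each position. This is the
    feature-extraction step a CNN layer performs before nonlinearity
    and pooling."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```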
Title: Predicting Road Accident Risk Using Google Maps Images and A Convolutional Neural Network. International journal of artificial intelligence & applications, vol. 10, no. 1, pp. 49-59.
Pub Date: 2019-11-30. DOI: 10.5121/ijaia.2019.10602
Ghada Alsebayel, J. Berri
Game-based learning is becoming a widespread technique used to enhance the motivation, involvement and educational experience of learners, and games have the potential to support educational curricula when designed effectively. In this work, an educational game to teach Arabic spelling to children is proposed. The game consists of two main parts, a robot and a desktop application, with the robot connected to the desktop application to form the complete game. Our main focus is to develop an interactive, adaptive game to motivate students and let them interact joyfully in their environment while learning simple Arabic spelling rules. The interaction was implemented by designing an interaction model between the user and the robot, where the robot responds to user input with appropriate facial expressions and vocal statements. The adaptation and intelligence of the game, on the other hand, are achieved by utilizing an expert-system framework with some alterations. Our proposed game is based on the curriculum of elementary schools in Saudi Arabia. It is anticipated that the deployment of robot-based games in the classroom will advance students’ engagement and enthusiasm about learning Arabic spelling.
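In the spirit of an expert-system shell, the robot's feedback logic can be sketched as an ordered rule list: the first rule whose condition matches fires a facial expression and an utterance. The rules and phrasing below are hypothetical; the game's actual Arabic spelling rules differ:

```python
# Ordered (condition, action) rules; earlier rules take priority.
RULES = [
    (lambda ans, target: ans == target,
     ("happy", "Correct, well done!")),
    (lambda ans, target: sorted(ans) == sorted(target),
     ("encouraging", "Right letters, wrong order - try again.")),
    (lambda ans, target: True,                      # catch-all rule
     ("neutral", "Not quite - listen to the word once more.")),
]

def feedback(answer, target):
    """Fire the first matching rule; return (facial expression,
    vocal statement) for the robot to perform."""
    for condition, action in RULES:
        if condition(answer, target):
            return action
```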
Title: Robot Based Interactive Game for Teaching Arabic Spelling. International journal of artificial intelligence & applications.