Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357181
Saad Chakkor, Mostafa Baghouri, Zineb Cheker, A. Oualkadi, J. E. Hangouche, Jawhar Laamech
This is a proposal for an automated detection and remote monitoring system made up of a centralized network of communicating portable electronic devices based on biomedical sensors operating in the IoT context, in synergy with wireless sensor network technologies, telemedicine, and artificial intelligence. This network would be deployed to monitor a population settled in a target area (city, region, country, etc.). The goal of the system is the detection and early diagnosis of disease in people infected with the COVID-19 virus, using a wearable device (such as a bracelet or a chest strap). This device collects in real time all the necessary biomedical measurements of a person, including their location, freeing them from hospitalization or the use of complex and expensive equipment. This information is then transmitted, via a wireless connection, to a regional or national control center, which stores it in a specialized database. The center runs a decision-making algorithm that uses artificial intelligence and a fuzzy inference engine to accurately detect each possible abnormal change in the supervised biomedical signs reflecting a risk factor or indicating the appearance of symptoms characteristic of COVID-19. In the positive case, the control system triggers a warning alarm for the infected person and requests the intervention of the competent authorities to take the necessary measures and actions. Computer simulations with MATLAB were conducted to evaluate the performance of the proposed system. Study findings show that the designed device is suitable for COVID-19 patient monitoring.
{"title":"Intelligent Network for Proactive Detection of COVID-19 Disease","authors":"Saad Chakkor, Mostafa Baghouri, Zineb Cheker, A. Oualkadi, J. E. Hangouche, Jawhar Laamech","doi":"10.1109/CiSt49399.2021.9357181","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357181","url":null,"abstract":"This is a proposal for an automated detection and remote monitoring system made up of a centralized network of communicating portable electronic devices based on biomedical sensors operating in the IoT context in synergy with wireless sensor network technologies, telemedicine and artificial intelligence. This network will be deployed to monitor a population settling in a target area (cities, region, country, etc.). The goal of this system is the detection and early diagnosis of the disease in people infected with the COVID-19 virus, using a device (such as a bracelet or a chest strap). This device collects in real time all the necessary biomedical measurements of a person, including their location, freeing them from any hospitalization or use of complex and expensive equipment. These informations are then transmitted, via a wireless connection, to a regional or national control center which takes care of its storage in a specialized database. This center executes a decision-making algorithm using artificial intelligence and fuzzy inference engine to detect accurately each possible abnormal change in the supervised biomedical signs reflecting risk factor or indicating the appearance of symptoms characterizing COVID-19 disease. In the positive case, the control system triggers a warning alarm concerning this infected person and requests intervention of the competent authorities to take the necessary measures and actions. Computer simulations with Matlab software tool have been conducted to evaluate the performance of the proposed system. 
Study findings show that the designed device is suitable for application in COVID-19 patient monitoring.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"252 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120862631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
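The fuzzy decision step described above can be illustrated with a small Mamdani-style sketch. The membership functions, thresholds, rules, and weights below are hypothetical, invented for illustration; the abstract does not publish the actual rule base, and real vital-sign breakpoints would come from clinical guidance.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def covid_risk(temp_c, spo2):
    """Fuzzy risk score in [0, 1] from two monitored vital signs."""
    # Degrees of membership in "fever" and "low oxygen saturation"
    # (hypothetical breakpoints, not from the paper).
    fever = tri(temp_c, 37.0, 39.0, 41.0)
    low_spo2 = tri(spo2, 85.0, 90.0, 95.0)
    high = min(fever, low_spo2)      # rule: fever AND low SpO2 -> high risk
    moderate = max(fever, low_spo2)  # rule: fever OR low SpO2 -> moderate risk
    # Weighted defuzzification into a single alarm score.
    return min(1.0, 0.6 * high + 0.4 * moderate)

print(covid_risk(36.6, 98))  # healthy readings -> 0.0
print(covid_risk(39.0, 90))  # fever with low SpO2 -> 1.0
```

A control center could compare such a score against an alarm threshold before notifying the authorities.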
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357291
Kaoutar Belhoucine, M. Mourchid, A. Mouloudi, Samir Mbarki
Introducing ontologies into information retrieval provides the obvious benefit of higher precision and addresses other common issues such as information quality and user adaptation. However, the main disadvantage is the cost (i.e., time and effort) of manually constructing an ontology and ensuring its representativeness of the specified domain. This paper considers the ontology construction process and proposes a middle-out approach that enables the rapid construction of a well-founded ontology. The application domain that interests us is Moroccan commercial law. The ontology to be built aims to support users in describing a specific legal situation and retrieving the relevant legal articles and court decisions in similar cases. The proposed approach combines a top-down and a bottom-up strategy. The first allows us to define an ontological model of the legal domain by reusing an existing core ontology, whereas the second populates and refines this model through an ontology-learning process applied to Arabic texts.
{"title":"A Middle-out Approach for Building a Legal domain ontology in Arabic","authors":"Kaoutar Belhoucine, M. Mourchid, A. Mouloudi, Samir Mbarki","doi":"10.1109/CiSt49399.2021.9357291","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357291","url":null,"abstract":"Introducing ontology in information retrieval provides the obvious benefit of higher precision and addresses other common issues such as information quality and user adaptation. However, the main disadvantage is the costs (i.e., time and effort) of manually constructing an ontology and of its representativeness of the specified domain. This paper considers the ontology construction process and proposes a middle-out approach that allows the construction of a well-founded ontology speedily. The domain application that interests us is Moroccan commercial law. The ontology to be built aims to support users in describing a specific legal situation and retrieving the relevant legal articles and court decisions in similar cases. The proposed approach combines a top-down and bottom-up strategy. The first allows us to define an ontological model of the legal domain by reusing an existing core ontology, whereas the second populates and refines this model based on an ontology-learning process from Arabic texts.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134142208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357298
S. Elyassami, Yasir Hamid, T. Habuza
People lose their lives every day in road traffic crashes. The problem is so immense globally that the World Health Organization, in its 2030 Sustainable Development Agenda, calls for coordinated efforts across nations and aspires to halve deaths and injuries. Taking a cue from that, the proposed work builds machine learning-based models for analyzing crash data, identifying the important risk factors, and predicting the injury severity of drivers. The work studied and analyzed several factors of road accidents to create an accurate and interpretable model that predicts the occurrence and severity of car crashes by investigating crash causal factors and crash severity factors. We employed three machine learning algorithms, namely Decision Tree, Random Forest, and Gradient Boosted Trees, on the Statewide Vehicle Crashes dataset provided by the Maryland State Police. The gradient-boosted model reported the highest prediction accuracy and identified the most influential factors in the predictive model. The findings showed that disregarding traffic signals and stop signs, road design problems, poor visibility, and bad weather conditions are the most important variables in the predictive road traffic crash model. Using the identified risk factors is crucial in establishing actions that may reduce the risks associated with them.
{"title":"Road Crashes Analysis and Prediction using Gradient Boosted and Random Forest Trees","authors":"S. Elyassami, Yasir Hamid, T. Habuza","doi":"10.1109/CiSt49399.2021.9357298","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357298","url":null,"abstract":"People lose their lives every day due to road traffic crashes. The problem is so humongous globally that the World Health Organization, in its Sustainable Development Agenda 2030, is inviting the coordinates efforts across nations towards it and aspiring to cut down the deaths and injuries to half. Taking a clue from that, the proposed work is undertaken to build machine learning-based models for analyzing the crash data, identifying the important risk factors, and predict the injury severity of drivers. The proposed work studied and analyzed several factors of road accidents to create an accurate and interpretable model that predicts the occurrence and severity of car accidents by investigating crash causal factors and crash severity factors. In the proposed work, we employed three machine learning algorithms to vis-à-vis Decision Tree, Random Forest, and Gradient Boosted tree on Statewide Vehicle Crashes Dataset provided by Maryland State Police. The gradient boosted-based model reported the highest prediction accuracy and provided the most influencing factors in the predictive model. The findings showed that disregarding traffic signals and stop signs, road design problems, poor visibility, and bad weather conditions are the most important variables in the predictive road traffic crash model. 
Using the identified risk factors is crucial in establishing actions that may reduce the risks related to those factors.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132751878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
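To make the boosting idea concrete, here is a from-scratch sketch of gradient-boosted decision stumps under squared loss, in pure Python on a toy dataset. This is a stand-in for illustration only: the study's actual models and the Maryland crash data are far richer, and the feature semantics here are invented.

```python
def fit_stump(X, residuals):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lm, rm)
    return best[1:]

def boost(X, y, rounds=20, lr=0.5):
    """Fit stumps to residuals, shrinking each contribution by lr."""
    pred = [0.5] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        j, t, lm, rm = fit_stump(X, resid)
        stumps.append((j, t, lm, rm))
        pred = [p + lr * (lm if row[j] <= t else rm)
                for row, p in zip(X, pred)]
    return stumps

def predict(stumps, row, lr=0.5):
    s = 0.5
    for j, t, lm, rm in stumps:
        s += lr * (lm if row[j] <= t else rm)
    return 1 if s >= 0.5 else 0

# Toy data: feature 0 ("ran a signal", invented) drives the severity label.
X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 0]]
y = [1, 1, 0, 0, 1, 0]
model = boost(X, y)
print([predict(model, row) for row in X])  # -> [1, 1, 0, 0, 1, 0]
```

In a library implementation, the learned split features would also yield the importance ranking the paper reports.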
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357067
Yassine Akhiat, Youssef Asnaoui, M. Chahhou, Ahmed Zinedine
Feature selection (FS) is a very important pre-processing technique in machine learning and data mining. It aims to select a small subset of relevant and informative features from the original feature space, which may contain many irrelevant, redundant, and noisy features. Feature selection usually leads to better performance, interpretability, and lower computational cost. In the literature, FS methods are categorized into three main approaches: filters, wrappers, and embedded methods. In this paper we introduce a new feature selection method called graph feature selection (GFS). The main steps of GFS are the following: first, we create a weighted graph where each node corresponds to a feature and the weight between two nodes is computed using a matrix of individual and pairwise scores from a decision tree classifier. Second, at each iteration, we split the graph into two random partitions with the same number of nodes, then keep moving the worst node from one partition to the other until the global modularity converges. Third, from the final best partition, we select the top-ranked features according to a newly proposed variable importance criterion. The results of GFS are compared to those of three well-known feature selection algorithms on nine benchmark datasets. The proposed method shows its ability and effectiveness at identifying the most informative feature subset.
{"title":"A new graph feature selection approach","authors":"Yassine Akhiat, Youssef Asnaoui, M. Chahhou, Ahmed Zinedine","doi":"10.1109/CiSt49399.2021.9357067","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357067","url":null,"abstract":"Feature selection (FS) is a very important pre-processing technique in machine learning and data mining. It aims to select a small subset of relevant and informative features from the original feature space that may contain many irrelevant, redundant and noisy features. Feature selection usually leads to better performance, interpretability, and lower computational cost. In the literature, FS methods are categorized into three main approaches: Filters, Wrappers, and Embedded. In this paper we introduce a new feature selection method called graph feature selection (GFS). The main steps of GFS are the following: first, we create a weighted graph where each node corresponds to each feature and the weight between two nodes is computed using a matrix of individual and pairwise score of a Decision tree classifier. Second, at each iteration, we split the graph into two random partitions having the same number of nodes, then we keep moving the worst node from one partition to another until the global modularity is converged. Third, from the final best partition, we select the best ranked features according to a new proposed variable importance criterion. The results of GFS are compared to three well-known feature selection algorithms using nine benchmarking datasets. 
The proposed method shows its ability and effectiveness at identifying the most informative feature subset.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132966685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
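A simplified sketch of the partition-refinement step: build a weighted feature graph and improve a balanced two-way split by swapping nodes while a partition-quality score increases. The paper optimizes graph modularity with weights derived from decision-tree scores; here, as an assumption, the weights are given directly and quality is just the total intra-partition edge weight.

```python
import itertools

def quality(weights, part):
    """Total weight of edges whose endpoints fall on the same side."""
    return sum(w for (a, b), w in weights.items()
               if (a in part) == (b in part))

def refine(weights, nodes):
    """Kernighan-Lin-style swaps keeping the two sides equal in size."""
    half = len(nodes) // 2
    part, rest = set(nodes[:half]), set(nodes[half:])
    best = quality(weights, part)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.product(list(part), list(rest)):
            cand = (part - {a}) | {b}
            q = quality(weights, cand)
            if q > best:
                part, rest = cand, (rest - {b}) | {a}
                best, improved = q, True
                break
    return part

# Hypothetical pairwise feature scores: f1/f2 and f3/f4 belong together.
w = {("f1", "f2"): 0.9, ("f3", "f4"): 0.8,
     ("f1", "f3"): 0.1, ("f2", "f4"): 0.2}
nodes = ["f1", "f3", "f2", "f4"]
g = refine(w, nodes)
print(sorted(sorted(p) for p in (g, set(nodes) - g)))  # -> [['f1', 'f2'], ['f3', 'f4']]
```

The final ranking step of GFS would then score features within the winning partition.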
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357250
Samira Lafraxo, Mohamed El Ansari
The novel coronavirus disease (COVID-19) was declared a pandemic in March 2020. Because of its easy and rapid transmission, the coronavirus has caused thousands of deaths around the world. Thus, developing new systems for accurate and fast COVID-19 detection is becoming crucial. X-ray imaging is used by radiologists for the diagnosis of coronavirus. However, this process requires considerable time. Therefore, artificial intelligence systems can help reduce the pressure on health care systems. In this paper, we propose CoviNet, a deep learning network to automatically detect the presence of COVID-19 in chest X-ray images. The suggested architecture is based on an adaptive median filter, histogram equalization, and a convolutional neural network. It is trained end-to-end on a publicly available dataset. Our model achieved an accuracy of 98.62% for binary classification and 95.77% for multi-class classification. As early diagnosis may limit the spread of the virus, this framework can be used to assist radiologists in the initial diagnosis of COVID-19.
{"title":"CoviNet: Automated COVID-19 Detection from X-rays using Deep Learning Techniques","authors":"Samira Lafraxo, Mohamed El Ansari","doi":"10.1109/CiSt49399.2021.9357250","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357250","url":null,"abstract":"The novel Coronavirus (COVID19) is an infectious epidemic declared in March 2020 as Pandemic. Because of its easy and rapid transmission, Coronavirus has caused thousands of deaths around the world. Thus, developing new systems for accurate and fast COVID19 detection is becoming crucial. X-ray imaging is used by radiology doctors for the diagnosis of coron-avirus. However, this process requires considerable time. Therefore, artificial intelligence systems can help in reducing pressure on health care systems. In this paper, we propose CoviNet a deep learning network to automatically detect COVID19 presence in chest X-ray images. The suggested architecture is based on an adaptive median filter, histogram equalization, and a convolutional neural network. It is trained end-to-end on a publicly available dataset. Our model achieved an accuracy of 98.62% for binary classification and 95.77% for multi-class classification. As the early diagnosis may limit the spread of the virus, this framework can be used to assist radiologists in the initial diagnosis of COVID19.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"83 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116410346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
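Of the preprocessing steps named in the abstract, histogram equalization is easy to show in isolation. The sketch below implements the textbook cumulative-distribution mapping on a tiny grayscale patch; CoviNet's adaptive median filter and CNN stages are omitted here.

```python
def equalize(img, levels=256):
    """Histogram equalization via the normalized cumulative distribution."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    if n == cdf_min:                 # constant image: nothing to spread
        return [row[:] for row in img]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast patch is stretched across the full intensity range.
patch = [[100, 101], [102, 103]]
print(equalize(patch))  # -> [[0, 85], [170, 255]]
```

Boosting contrast this way helps a downstream CNN see lung-field detail that raw, low-contrast X-rays can hide.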
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357293
Yacine Oubelkacem, A. Bakkali, S. A. Lyazidi, M. Haddad, T. Lamhasni, A. Ben-Ncer
Two Islamic parchments dating back to the 9th century, along with a third, Jewish parchment of unknown age, were investigated by means of a completely non-invasive multi-technique analysis combining elemental XRF with structural Raman, ATR-FTIR, and FOR spectroscopies. The materials initially used in the preparation of the writing supports were identified; while the Islamic parchments appear to have been pretreated with condensed tannins, hydrolysable tannins and lead white were found in the Jewish one. Collagen gelatinization with molecular helix disorder was observed in all parchments; degradation products, gypsum and calcium oxalates, were identified in the parchment supports and the black writing inks. The latter were characterized as iron gall inks, while all coloring materials were identified and characterized: gold, natural minerals, and insect extracts. In addition to constituting valuable scientific data prior to future restorations, the obtained results are highly helpful for: i) improving the available codicological data, ii) establishing the traceability of the investigated parchments, and iii) enriching the knowledge of ancient writing supports and materials, highlighting technologies and practices developed by medieval craftsmen.
{"title":"Non-invasive physicochemical investigations of ancient Moroccan Islamic and Jewish parchments","authors":"Yacine Oubelkacem, A. Bakkali, S. A. Lyazidi, M. Haddad, T. Lamhasni, A. Ben-Ncer","doi":"10.1109/CiSt49399.2021.9357293","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357293","url":null,"abstract":"Two Islamic parchments dating back to the IXth century along with a third Jewish one whose age is unknown were investigated by means of a completely non-invasive multi-techniques analysis combining all of elemental XRF and structural Raman, ATR-FTIR and FOR spectroscopies. The materials initially used in the preparation of the writing supports were identified; while the Islamic parchments seem to be condensed tannins-pretreated, hydrolysable tannins and lead white have been highlighted in the Jewish one. Collagen gelatinization with molecular helix disorders phenomena have been highlighted in all parchments; degradation products, gypsum and calcium oxalates, have been identified in parchments supports and writing black inks. These latter have been characterized as iron gall types, while all coloring materials have been identified and characterized: Gold, natural minerals and insect extracts. 
In addition to constituting valuable scientific data prior to future restorations, the obtained results are highly helpful to: i) improving the available codicological data, ii) establishing the traceability of the investigated parchments and iii) enriching the knowledge of ancient writing supports and materials, and highlighting technologies and practices developed by middle ages craftsmen.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121157614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357186
B. Vogel‐Heuser, K. Land, Fandi Bi
The digitalization of teaching due to the Covid-19 pandemic poses new challenges, yet also offers new opportunities. To assist and encourage students in their self-study of the Unified Modeling Language (UML), modeling tasks were provided; student solutions were then analyzed and discussed in web meetings. In this way, earlier and more in-depth insights into typical faults in the students' modeling solutions could be achieved. Two groups of students were considered, and it was examined whether students make fewer or different faults in modeling depending on their maturity and prior knowledge.
{"title":"Challenges for Students of Mechanical Engineering Using UML - Typical Questions and Faults","authors":"B. Vogel‐Heuser, K. Land, Fandi Bi","doi":"10.1109/CiSt49399.2021.9357186","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357186","url":null,"abstract":"The digitalization of teaching due to the Covid-19 pandemic offers new challenges, yet also new opportunities. To assist and encourage students in their self-study of the unified modeling language (UML), modeling tasks were provided; then student solutions were analyzed and discussed in web meetings. This way, earlier and more in-depth insights into typical faults in the students' modeling solutions could be achieved. Two groups of students were considered, and it was examined whether students make fewer or different faults in modeling depending on their maturity and pre-knowledge.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116821918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-05DOI: 10.1109/cist49399.2021.9357242
{"title":"6th International Congress on Information Science and Technology","authors":"","doi":"10.1109/cist49399.2021.9357242","DOIUrl":"https://doi.org/10.1109/cist49399.2021.9357242","url":null,"abstract":"","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126337810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357170
Marouane Yassine, David Beauchemin, François Laviolette, Luc Lamontagne
Address parsing consists of identifying the segments that make up an address, such as a street name or a postal code. Because of its importance for tasks like record linkage, address parsing has been approached with many techniques. Neural network methods have defined a new state of the art for address parsing. While this approach yielded notable results, previous work has only applied neural networks to parsing addresses from a single source country. We propose an approach in which we employ subword embeddings and a recurrent neural network architecture to build a single model capable of learning to parse addresses from multiple countries at the same time, while taking into account differences in languages and address formatting systems. We achieved accuracies of around 99% on the countries used for training, with no pre-processing or post-processing needed. We also explore the possibility of transferring the address parsing knowledge obtained by training on some countries' addresses to others, with no further training, in a zero-shot transfer learning setting. We achieve good results for 80% of the countries (33 out of 41), almost 50% of which (20 out of 41) are near state-of-the-art performance. In addition, we provide an open-source Python implementation of our trained models: https://github.com/GRAAL-Research/deepparse.
{"title":"Leveraging Subword Embeddings for Multinational Address Parsing","authors":"Marouane Yassine, David Beauchemin, François Laviolette, Luc Lamontagne","doi":"10.1109/CiSt49399.2021.9357170","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357170","url":null,"abstract":"Address parsing consists of identifying the segments that make up an address such as a street name or a postal code. Because of its importance for tasks like record linkage, address parsing has been approached with many techniques. Neural network methods defined a new state-of-the-art for address parsing. While this approach yielded notable results, previous work has only focused on applying neural networks to achieve address parsing of addresses from one source country. We propose an approach in which we employ subword embeddings and a Recurrent Neural Network architecture to build a single model capable of learning to parse addresses from multiple countries at the same time while taking into account the difference in languages and address formatting systems. We achieved accuracies around 99% on the countries used for training with no pre-processing nor post-processing needed. We explore the possibility of transferring the address parsing knowledge obtained by training on some countries' addresses to others with no further training in a zero-shot transfer learning setting. We achieve good results for 80% of the countries (33 out of 41), almost 50% of which (20 out of 41) is near state-of-the-art performance. 
In addition, we propose an open-source Python implementation of our trained models: https://github.com/GRAAL-Research/deepparse.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128660712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
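The subword idea behind the model can be illustrated with fastText-style character n-grams: rare or unseen address tokens still share n-grams with tokens seen in training. This extraction step is a simplification; the paper uses learned subword embeddings feeding a recurrent network.

```python
def char_ngrams(token, n=3):
    """fastText-style character n-grams with boundary markers."""
    padded = f"<{token}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# Street-name tokens from different countries still overlap in n-gram space.
print(char_ngrams("Lilas"))    # -> ['<Li', 'Lil', 'ila', 'las', 'as>']
print(char_ngrams("Lilastr"))  # shares '<Li', 'Lil', 'ila', 'las' with the above
```

Averaging the embeddings of these n-grams yields a token vector even for words never seen during training, which is what makes zero-shot transfer across countries plausible.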
Pub Date : 2020-06-05DOI: 10.1109/CiSt49399.2021.9357235
I. Šimonová, Ludmila Faltýnková, K. Kostolányová
The paper presents the results of research in which the potential increase in learners' knowledge is considered from the perspective of four motivation types (Accurators, Coordinators, Directors, Explorers) within a process of smart instruction applied to two topics (Career Development, Healthy Living) of an English for Specific Purposes course. The main research objective is to find out whether learners of all motivation types can succeed in this process. In total, 119 students, prospective teachers from the Faculty of Education and the Faculty of Science, participated in the research. The SAMR (Substitution, Augmentation, Modification, Redefinition) model was applied within the smart instruction, using smart devices to access electronic sources and smart methods to acquire the learning content. The smart instruction was conducted for 12 weeks (one semester). Two hypotheses were set, and the quasi-experiment and ex-post-facto methods were applied. Data on learners' motivation types were collected through the standardized Motivation Type Inventory (MTI) by Plaminek. The increase in learners' knowledge was calculated as the difference between entrance and final didactic test scores. The results did not show a statistically significant difference between motivation types for the Career Development topic. However, for Healthy Living, a difference was found between the Coordinators and the other three types.
{"title":"Learners'Motivation Types in the Smart Instruction of English for Specific Purposes","authors":"I. Šimonová, Ludmila Faltýnková, K. Kostolányová","doi":"10.1109/CiSt49399.2021.9357235","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357235","url":null,"abstract":"The paper introduces results of research in which potential increase in learner's knowledge is considered from the view of four motivation types (Accurators, Coordinators, Directors, Explorers) within the process of smart instruction applied at two topics (Career Development, Healthy Living) of the English for Specific Purposes course. The main research objective is to find out whether learners of all motivation types can succeed in this process. Totally, 119 students, prospective teachers from the Faculty of Education and Faculty of Science, participated in the research. The SAMR (Substitution, Augmentation, Modification, Redefinition) model was applied within the smart instruction using smart devices to approach electronic sources and smart methods towards acquiring the learning content. The smart instruction was conducted for 12 weeks (one semester). Two hypotheses were set, and the quasi-experiment and ex-post-facto method were applied. Data referring to learners' motivation types were collected through the standardized Motivation Type Inventory (MTI) by Plaminek. The increase in learners' knowledge was calculated as the difference between entrance and final didactic tests scores. The results did not show statistically significant difference between single motivation types in the topic of Career Development. 
However, in Healthy Living, the difference was discovered in the group of Coordinators compared to other three types.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126868633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}