Creating Arabic Lexical Resources in TEI: A Schema for Discontinuous Morphology Encoding
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357273
Ouafae Nahli, A. D. Grosso
An Arabic word can be described according to its lexical and morphological information. Lexical analysis consists of gathering both semantic information (meaning and translation) and syntactic properties (parts of speech). Morphological analysis, instead, identifies word patterns that group words with the same syntactic, inflectional and semantic behaviour. Such descriptions constitute two different but complementary levels of study. This paper illustrates our work, aimed at creating an exhaustive resource consisting of two levels: lexical and morphological. The lexical level collects information extracted from the dictionary al-Qāmūs al-Muḥīṭ. The morphological level describes the word patterns. The two levels are autonomous but complementary: each word described at the lexical level is linked to its corresponding pattern. The formalization of the word pattern makes it possible to enrich word descriptions with additional morphosyntactic and inflectional information. To obtain a systematic digital resource, we followed the guidelines provided by the Text Encoding Initiative (TEI). We adopted the TEI module devoted to encoding digital dictionaries and lexicons in order to formally represent the medieval primary source al-Qāmūs al-Muḥīṭ. We also used the TEI interpretation approach to encode the morphological word patterns, keeping the two levels separate while still allowing them to be linked.
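As a rough illustration of this two-level design (not the authors' actual schema), the sketch below builds a TEI-style dictionary entry at the lexical level and links it via a corresp attribute to a word pattern encoded separately. Element names follow the TEI Dictionaries module; the example word, the ids and the attribute choices are illustrative assumptions, and the TEI namespace and header are omitted for brevity.

```python
# Minimal sketch, assuming a TEI Dictionaries-style encoding: one lexical entry
# linked to an autonomous morphological pattern. Ids and values are invented.
import xml.etree.ElementTree as ET

# Lexical level: an entry extracted from the dictionary.
entry = ET.Element("entry", {"xml:id": "kataba"})
form = ET.SubElement(entry, "form", {"type": "lemma", "corresp": "#pat_fa3ala"})
ET.SubElement(form, "orth").text = "kataba"
gram = ET.SubElement(entry, "gramGrp")
ET.SubElement(gram, "pos").text = "verb"
ET.SubElement(ET.SubElement(entry, "sense"), "def").text = "to write"

# Morphological level: the word pattern, kept separate but addressable by id.
pattern = ET.Element("interp", {"xml:id": "pat_fa3ala", "type": "verbalPattern"})
pattern.text = "fa'ala"

print(ET.tostring(entry, encoding="unicode"))
print(ET.tostring(pattern, encoding="unicode"))
```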
{"title":"Creating Arabic Lexical Resources in TEI: A Schema for Discontinuous Morphology Encoding","authors":"Ouafae Nahli, A. D. Grosso","doi":"10.1109/CiSt49399.2021.9357273","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357273","url":null,"abstract":"An Arabic word can be described according to its lexical and morphological information. Lexical analysis consists in gathering both semantic information (meaning and translation) and syntactic properties (parts of speech). Morphological analysis, instead, identifies word patterns that group the words having the same syntactic, inflectional and semantic behaviour. Such descriptions constitute two different but complementary levels of study. This paper illustrates our work, aimed at creating an exhaustive resource consisting of two levels: lexical and morphological. The lexical level collects information extracted from the dictionary $al=qbar{a}mbar{u}s al=munderset{.}{h}bar{imath}underset{.}{t}$. The morphological level describes the word patterns. The two levels are autonomous but complementary. Each word described at the lexical level is linked to its corresponding pattern. The formalization of the word pattern makes it possible to enrich word descriptions with additional morphosyntactic and inflectional information. To obtain a digital systematic resource, we followed the guidelines provided by the Text Encoding Initiative (TEI). We adopted the TEI module devoted to encoding digital dictionaries and lexicons in order to formally represent the medieval primary source $al=qbar{a}mbar{u}s al=muunderset{.}{h}bar{imath}underset{.}{t}$. We also used the TEI interpretation approach to encode the morphological word patterns keeping the two levels separate but at the same time allowing them to be linked.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130378305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology against COVID-19: A Blockchain-based framework for Data Quality
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357200
Imane Ezzine, Laila Benhlima
The effects of COVID-19 have quickly spread around the world, testing the limits of the population and the public health sector. High demand for medical services is compounded by disruptions in daily operations as hospitals struggle to function in the face of overcapacity, understaffing and information gaps. Faced with these problems, new technologies are being deployed to fight the pandemic and to help medical staff and governments reduce its spread. Among these technologies are blockchain and Big Data, which have been used in tracking, prediction and other applications. However, despite the help that these new technologies provide, they remain limited if the data they are fed are not of good quality. In this paper, we highlight some benefits of using Big Data and blockchain to deal with this pandemic, as well as some data quality issues that still present challenges to decision making. Finally, we present a general blockchain-based framework for data governance that aims to ensure a high level of data trust, security, and privacy.
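To make the tamper-evidence argument concrete, the sketch below shows a minimal hash-chained log with a simple data-quality gate, the kind of guarantee a blockchain layer offers over shared health records. This is an illustration only, not the paper's framework; the record fields and the quality rule are assumptions.

```python
# Illustrative sketch, assuming a simple hash-chained ledger with a quality gate.
import hashlib
import json
import time

def sha256(data: dict) -> str:
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "record": "genesis"}]

    def add_record(self, record: dict):
        # Basic data-quality check before anything is committed to the chain.
        if not record.get("patient_id") or record.get("test_result") not in {"positive", "negative"}:
            raise ValueError("rejected: incomplete or invalid record")
        self.chain.append({"index": len(self.chain),
                           "prev": sha256(self.chain[-1]),
                           "timestamp": time.time(),
                           "record": record})

    def verify(self) -> bool:
        # Any modification of an earlier block breaks the hash links.
        return all(self.chain[i]["prev"] == sha256(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add_record({"patient_id": "p-001", "test_result": "positive"})
print(ledger.verify())  # True unless the chain was tampered with
```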
{"title":"Technology against COVID-19 A Blockchain-based framework for Data Quality","authors":"Imane Ezzine, Laila Benhlima","doi":"10.1109/CiSt49399.2021.9357200","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357200","url":null,"abstract":"The effects of COVID-19 have quickly spread around the world, testing the limits of the population and the public health sector. High demand on medical services are offset by disruptions in daily operations as hospitals struggle to function in the face of overcapacity, understaffing and information gaps. Faced with these problems, new technologies are being deployed to fight this pandemic and help medical staff governments to reduce its spread. Among these technologies, we find blockchains and Big Data which have been used in tracking, prediction applications and others. However, despite the help that these new technologies have provided, they remain limited if the data with which they are fed are not of good quality. In this paper, we highlight some benefits of using BIG Data and Blockchain to deal with this pandemic and some data quality issues that still present challenges to decision making. Finally we present a general Blockchain-based framework for data governance that aims to ensure a high level of data trust, security, and privacy.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123086624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variational Autoencoding Dialogue Sub-Structures Using a Novel Hierarchical Annotation Schema
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357245
Maitreyee Tewari, Michele Persiani
This work presents a novel method to extract sub-structures in dialogues for the following genres: human-human task-driven, human-human chit-chat, human-machine task-driven, and human-machine chit-chat dialogues. The model consists of a novel semi-supervised annotation schema covering syntactic features, communicative functions, dialogue policy, sequence expansion and sender information. These labels are then transformed into tuples of three, four and five segments; the tuples are used as features to learn sub-structures in the above-mentioned genres of dialogue with sequence-to-sequence variational autoencoders. The results analyse the latent space of generic sub-structures, decomposed by PCA and ICA, and show an increase in silhouette scores when clustering the latent space.
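The analysis step described at the end can be sketched as follows: decompose the latent codes with PCA and ICA, cluster the reduced space, and compare silhouette scores. Random vectors stand in for the VAE's latent codes here, and the component counts and number of clusters are assumptions.

```python
# Sketch of the latent-space analysis, assuming stand-in latent vectors.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 64))  # placeholder for seq2seq-VAE latent codes

for name, reducer in [("PCA", PCA(n_components=10)),
                      ("ICA", FastICA(n_components=10, random_state=0))]:
    reduced = reducer.fit_transform(latent)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(reduced)
    print(name, "silhouette:", round(silhouette_score(reduced, labels), 3))
```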
{"title":"Variational Autoencoding Dialogue Sub-Structures Using a Novel Hierarchical Annotation Schema","authors":"Maitreyee Tewari, Michele Persiani","doi":"10.1109/CiSt49399.2021.9357245","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357245","url":null,"abstract":"This work presents a novel method to extract sub-structures in dialogues for the following genres: human-human task driven, human-human chit-chat, human-machine task driven, and human-machine chit-chat dialogues. The model consists of a novel semi-supervised annotation schema of syntactic features, communicative functions, dialogue policy, sequence expansion and sender information. These labels are then transformed into tuples of three, four and five segments, the tuples are used as features and modelled to learn sub-structures in above mentioned genres of dialogues with sequence-to-sequence variational autoencoders. The results analyse the latent space of generic sub-structures decomposed by PCA and ICA, showing an increase in silhouette scores for clustering of the latent space.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123875666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CiSt Main Tracks - Focused Conferences
Pub Date: 2020-06-05 | DOI: 10.1109/cist49399.2021.9357204
{"title":"CiSt Main Tracks - Focused Conferences","authors":"","doi":"10.1109/cist49399.2021.9357204","DOIUrl":"https://doi.org/10.1109/cist49399.2021.9357204","url":null,"abstract":"","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125441055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognizing semantic relation in sentence pairs using Tree-RNNs and Typed dependencies
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357187
Jeena Kleenankandy, Abdul Nazeer
Recursive neural networks (Tree-RNNs) based on dependency trees are ubiquitous in modeling sentence meaning, as they effectively capture semantic relationships between non-adjacent words. However, recognizing semantically dissimilar sentences with the same words and syntax is still a challenge for Tree-RNNs. This work proposes an improvement to the Dependency Tree-RNN (DT-RNN) that uses the grammatical relationship type identified in the dependency parse. Our experiments on semantic relatedness scoring (SRS) and recognizing textual entailment (RTE) in sentence pairs using the SICK (Sentences Involving Compositional Knowledge) dataset show encouraging results. The model achieved a 2% improvement in classification accuracy on the RTE task over the DT-RNN model. The results also show that Pearson's and Spearman's correlation measures between the model's predicted similarity scores and human ratings are higher than those of standard DT-RNNs.
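The core idea, conditioning the composition of a node on the type of each child's dependency relation, can be sketched schematically as below. This is not the authors' exact DT-RNN; the dimensions, the relation-embedding gate and the composition function are assumptions chosen only to show where typed dependencies enter the computation.

```python
# Schematic sketch, assuming a relation-type embedding that gates each child.
import torch
import torch.nn as nn

class TypedDepComposer(nn.Module):
    def __init__(self, dim=50, n_relations=40):
        super().__init__()
        self.rel_emb = nn.Embedding(n_relations, dim)  # one vector per typed dependency
        self.w_word = nn.Linear(dim, dim)
        self.w_child = nn.Linear(dim, dim)

    def forward(self, word_vec, children, relations):
        # children: child hidden states; relations: their typed-dependency ids
        h = self.w_word(word_vec)
        for child, rel in zip(children, relations):
            gate = torch.sigmoid(self.rel_emb(torch.tensor(rel)))
            h = h + self.w_child(gate * child)  # relation-aware child contribution
        return torch.tanh(h)

composer = TypedDepComposer()
root = composer(torch.randn(50), [torch.randn(50), torch.randn(50)], [3, 17])
print(root.shape)  # torch.Size([50])
```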
End-to-End Neural Network for Vehicle Dynamics Modeling
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357196
Leonhard Hermansdorfer, Rainer Trauth, Johannes Betz, M. Lienkamp
Autonomous vehicles have to meet high safety standards in order to be commercially viable. Before real-world testing of an autonomous vehicle, extensive simulation is required to verify software functionality and to detect unexpected behavior. This creates the need for accurate models that match real system behavior as closely as possible. During driving, planning and control algorithms also need an accurate estimation of the vehicle dynamics in order to handle the vehicle safely. Until now, vehicle dynamics estimation has mostly been performed with physics-based models. While these models allow specific effects to be represented explicitly, accurate models need a variety of parameters, and their identification requires costly resources, e.g., expensive test facilities. Machine learning models enable new approaches to these modeling tasks without the need to identify parameters: neural networks can be trained on recorded vehicle data to represent the vehicle's dynamic behavior. We present a neural network architecture that has advantages over a physics-based model in terms of accuracy. We compare both models against real-world test data from an autonomous racing vehicle, recorded on different race tracks under high- and low-grip conditions. The developed neural network architecture is able to replace a single-track model for vehicle dynamics modeling.
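For orientation, a learned dynamics model of this kind can be as simple as a feedforward network mapping the current vehicle state and control inputs to accelerations. The sketch below is a minimal stand-in, not the architecture from the paper; the choice of input features, layer sizes and outputs are assumptions.

```python
# Minimal sketch, assuming a feedforward state-to-acceleration mapping.
import torch
import torch.nn as nn

class VehicleDynamicsNet(nn.Module):
    def __init__(self):
        super().__init__()
        # inputs: vx, vy, yaw rate, steering angle, throttle, brake (assumed)
        self.net = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),  # outputs: ax, ay, yaw acceleration (assumed)
        )

    def forward(self, state_and_inputs):
        return self.net(state_and_inputs)

model = VehicleDynamicsNet()
batch = torch.randn(32, 6)  # stand-in for recorded driving data
print(model(batch).shape)   # torch.Size([32, 3])
```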
{"title":"End-to-End Neural Network for Vehicle Dynamics Modeling","authors":"Leonhard Hermansdorfer, Rainer Trauth, Johannes Betz, M. Lienkamp","doi":"10.1109/CiSt49399.2021.9357196","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357196","url":null,"abstract":"Autonomous vehicles have to meet high safety standards in order to be commercially viable. Before real-world testing of an autonomous vehicle, extensive simulation is required to verify software functionality and to detect unexpected behavior. This incites the need for accurate models to match real system behavior as closely as possible. During driving, planing and control algorithms also need an accurate estimation of the vehicle dynamics in order to handle the vehicle safely. Until now, vehicle dynamics estimation has mostly been performed with physics-based models. Whereas these models allow specific effects to be implemented, accurate models need a variety of parameters. Their identification requires costly resources, e.g., expensive test facilities. Machine learning models enable new approaches to perform these modeling tasks without the necessity of identifying parameters. Neural networks can be trained with recorded vehicle data to represent the vehicle's dynamic behavior. We present a neural network architecture that has advantages over a physics-based model in terms of accuracy. We compare both models to real-world test data from an autonomous racing vehicle, which was recorded on different race tracks with high- and low-grip conditions. The developed neural network architecture is able to replace a single-track model for vehicle dynamics modeling.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124398669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performing of users' road safety at intelligent transportation systems
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357169
Soumaya Amri, Mohamed Naoum, M. Lazaar, Mohammed Al Achhab
Using smart city technologies and technical advancements in Intelligent Transport Systems, this work aims to improve the safety of road users in different road environments. A new architecture for an intelligent transport system is proposed in order to ensure road safety in real time. The proposed Intelligent and Safe Transportation System (ISTS) consists of two components: an intelligent safe traffic management system (ISTMS) and a safest-route recommendation system (SRRS). The ISTMS uses road users' profile information and road environment data to generate and optimize a database of historical risk matrices. Security measures are also taken into account and optimized by the ISTMS in order to transform the studied areas into safe ones. The SRRS uses the user's profile information to recommend the safest itinerary and the most secure mode of transportation, ensuring the user's safety.
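One way to picture the SRRS step is as a minimum-risk path search over a road graph whose edge weights come from the historical risk data. The sketch below is an illustration only, not the proposed system; the toy graph and risk scores are invented.

```python
# Illustrative sketch, assuming risk scores attached to road-graph edges.
import networkx as nx

road_graph = nx.Graph()
road_graph.add_weighted_edges_from([
    ("home", "A", 0.8), ("home", "B", 0.2),   # weight = estimated risk score
    ("A", "work", 0.1), ("B", "work", 0.3),
], weight="risk")

safest = nx.shortest_path(road_graph, "home", "work", weight="risk")
total_risk = nx.path_weight(road_graph, safest, weight="risk")
print(safest, total_risk)  # ['home', 'B', 'work'] 0.5
```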
{"title":"Performing of users' road safety at intelligent transportation systems","authors":"Soumaya Amri, Mohamed Naoum, M. Lazaar, Mohammed Al Achhab","doi":"10.1109/CiSt49399.2021.9357169","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357169","url":null,"abstract":"Using smart city technologies and technical advancements in Intelligent Transport Systems, this work aims to improve the safety of road users in different road environments. A new architecture of an intelligent transport system has been proposed in order to ensure the road safety in real time. The proposed Intelligent and Safe Transportation System (ISTS) consists of two components. The first is an intelligent safe traffic management system (ISTMS), the second is a safest route recommendation system (SRRS). The ISTMS uses road user's profile information and road environment data to generate and optimize a database of historical risk matrix. Security measures are also taken into account and optimized by the ISTMS in order to transform studied areas into safe ones. The SRRS uses user's profile information to recommend the safest itinerary and the most secure mode of transportation ensuring the user's safety.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127552135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertainty Quantification in Deep Learning Context: Application to Insurance
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357201
Mouad Ablad, B. Frikh, B. Ouhbi
Nowadays, deep learning models have become the most powerful black-box predictors, achieving high performance in many fields such as insurance, especially in fraud detection, claims management and pricing. Despite these achievements, classic deep learning networks focus only on improving the accuracy of the model without assessing the quality of the outputs. In other words, they do not incorporate uncertainty information and return only a point prediction. Knowing how much confidence there is in a prediction is essential for gaining insurers' trust in the technology. In this work, we propose a solution to detect automobile insurance fraud with quantified uncertainty. Our model uses two methods to quantify uncertainty. The first is the Monte Carlo Dropout method, which can be interpreted as approximate Bayesian inference in deep Gaussian processes. The second is the Deep Ensembles method. These two methods mitigate the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We found that our proposed method gives good results in comparison with existing methods on the automobile insurance data set “carclaims.txt”.
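The Monte Carlo Dropout idea can be sketched in a few lines: keep dropout active at test time, run several stochastic forward passes, and read the spread of the predicted fraud probabilities as the uncertainty estimate. This is a generic sketch, not the paper's model; the network size, dropout rate and number of passes are assumptions.

```python
# Minimal sketch of MC Dropout inference, assuming a small fraud classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(30, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 1), nn.Sigmoid(),          # probability of fraud
)

def mc_dropout_predict(model, x, n_passes=50):
    model.train()                            # keeps Dropout stochastic at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_passes)])
    return preds.mean(dim=0), preds.std(dim=0)   # prediction and its uncertainty

claims = torch.randn(8, 30)                  # stand-in for encoded claim features
mean, std = mc_dropout_predict(model, claims)
print(mean.squeeze(), std.squeeze())
```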
{"title":"Uncertainty Quantification in Deep Learning Context: Application to Insurance","authors":"Mouad Ablad, B. Frikh, B. Ouhbi","doi":"10.1109/CiSt49399.2021.9357201","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357201","url":null,"abstract":"Nowadays, Deep learning becomes the most powerful black box predictors, which has achieved a high performance in many fields such as insurance especially in fraud detection, claims management, pricing, etc. Despite these achievements, the main interest of these classic deep learning networks is to focus only on improving the accuracy of the model without assessing the quality of the outputs. In other words, classic deep learning networks do not incorporate uncertainty information but it consists only in returning a point prediction. Knowing how much confidence there is in a prediction is essential for gaining insurers' trust in technology. In this work, we propose a solution to detect automobile insurance fraud with quantified uncertainty, our model uses two methods to quantify uncertainty. The first one is called Monte Carlo Dropout method, which is considered as an approximate Bayesian inference in deep Gaussian processes. The second is named Deep Ensembles method. These two methods mitigate the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We found that our proposed method gives good results in comparison to the existing methods on the automobile insurance data set “carclaims.txt”.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128949151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Proposed Data Preprocessing Method for an Industrial Prediction Process
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357269
Ilham Battas, Ridouane Oulhiq, Hicham Behja, L. Deshayes
The studied mining production chain is generally divided into three principal units: destoning, screening and loading. The role of the screening unit is to screen the phosphate produced by the destoning unit before it is loaded onto trains. Its efficiency depends on several parameters, which makes analysis and decision making for its improvement very complicated. The purpose of this paper is to propose a prediction system to evaluate and monitor in advance the efficiency of the screening unit. This system is based on Knowledge Discovery in Databases, which generally comprises three steps: data pre-processing, development of prediction models, and finally validation and verification of the proposed models. The first step consists in acquiring in-depth information and knowledge about the application domain in order to determine the set of parameters influencing the efficiency, and in pre-processing the data for these parameters so as to improve their quality before they are used in the second step, which develops predictive models that are then validated and verified with different evaluation criteria during the last step. This work focuses on the first level of development of the system, data pre-processing, applied to the mine's screening unit facilities; the results of this case study are also presented.
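A typical pre-processing step of this kind can be sketched as an imputation-plus-scaling pipeline over the sensor parameters before any predictive modelling. The sketch below is not the plant's actual pipeline; the column names and values are illustrative assumptions.

```python
# Sketch of a pre-processing pipeline, assuming made-up screening-unit sensors.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

raw = pd.DataFrame({
    "feed_rate_tph":    [950, 1010, None, 980],
    "screen_speed_rpm": [720, None, 705, 715],
    "moisture_pct":     [2.1, 2.4, 2.2, None],
})

preprocess = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # fill gaps in sensor data
    ("scale", StandardScaler()),                    # comparable ranges for modelling
])

clean = preprocess.fit_transform(raw)
print(pd.DataFrame(clean, columns=raw.columns).round(2))
```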
{"title":"A Proposed Data Preprocessing Method for an Industrial Prediction Process","authors":"Ilham Battas, Ridouane Oulhiq, Hicham Behja, L. Deshayes","doi":"10.1109/CiSt49399.2021.9357269","DOIUrl":"https://doi.org/10.1109/CiSt49399.2021.9357269","url":null,"abstract":"The studied mining production chain generally divided into three principal units: Destoning, screening and loading. The role of the screening unit is to screen the phosphate produced by the destoning unit before the loading to trains. Its efficiency depends on several parameters, which makes analysis and decision making for its improvement very complicated. The purpose of this paper is to propose a prediction system to evaluate and monitor in advance the efficiency of the screening unit. This system is based on Knowledge discovery in databases that comprises generally three steps: data pre-processing, development of prediction models and finally validation and verification of the proposed models. The first consists in having in-depth information and knowledge about the application domain, in order to determine the set of parameters influencing the efficiency and to pre-process the data of the these parameters, in order to improve their quality before being used by the second step, which aims to develop predictive models that will be validated and verified with different evaluation criteria during the last step. This work focuses on the first level of development of the system in question, data pre-processing, which has been applied to the mine's screening unit facilities, and the results of this case study are also presented.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133891775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Game-Based Learning Using the Example of Finanzmars
Pub Date: 2020-06-05 | DOI: 10.1109/CiSt49399.2021.9357296
Simon Josiek, Sebastian Schleier, Tobias Steindorf, R. Wittrin, Manuel Heinzig, Christian Roschke, Volker Tolkmitt, M. Ritter
The shift towards digital teaching is leading to an increased need for interactive teaching methods. Game-Based Learning combines teaching content with a motivating application. The learning simulation Finanzmars uses elements from Game-Based Learning to turn the contents of a classical lecture in the field of economics into a learning game. The simulation is aimed at students of business administration or similar courses of study. The contents can be edited and extended directly by the teaching staff using an external configuration tool. The knowledge transfer is supported by Micro Learning, systematic introductions to increasingly complex subjects, interaction with game elements, and increased motivation through in-game successes such as a premium currency. The player is placed in an economic simulation with the goal of exploiting the resources of Mars in a profitable and optimized way. The game offers the construction, upgrading and repair of buildings, the development of future technology through research, the export of resources, and expansion to other celestial bodies. These functionalities are based on possible courses of action in the real world, whose mechanisms play a central role in the lecture on the topic of finance. By means of logging and the evaluation of in-game objectives, teachers can track the progress of the students. Our evaluation with 12 Master's students in Business Administration shows an increase in economic expertise. Furthermore, the evaluation of the associated standardized AttrakDiff questionnaire according to DIN EN ISO 9241-11 certifies that the application is action-oriented and user-friendly. The description of the environment with regard to the course of studies and the teaching concept was largely given in the publication “Finanzmars im Kosmos von Blended Learning” by Marc Ritter, Christian Roschke and Volker Tolkmitt, which received the Best Paper Award in Teaching at the CARF Lucerne Conference in 2019. The present publication focuses on the systematic elaboration of game design and implementation with a view to increasing learning success.