Modeling the resilience of interdependent critical infrastructures has recently become an important research topic. Protecting these critical infrastructures (CI) is fundamental to the efficient functioning of society in particular and of nations in general. A CI is composed of several entities that are highly interconnected and collaborate with other organizations to deliver services according to mutual interdependencies and interconnections. These interdependencies can make the systems more resilient in the presence of vulnerabilities and failures. Sophisticated management is therefore required to improve the protection of these infrastructures and to ensure their safety and continuous operation when services fail or are disrupted. In this paper, we propose a new resilience approach to manage disruptions and failures in critical infrastructures using multi-level interdependencies. The aim of this paper is threefold: first, we propose a road map for modeling multi-level interdependencies. Second, we introduce a new paradigm for node classification using a convolutional neural network. Third, we assign an agent to each CI so that it has a global view of the resilience policies of the others through multi-level interdependencies. Simulation results show that adopting the proposed multi-level interdependency approach improves resilience management in these CIs.
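The abstract does not specify how the multi-level interdependencies are encoded; purely as an illustrative sketch (not the authors' model), the cascading impact such interdependencies create can be explored with a directed dependency graph, where an edge (u, v) means v depends on u and a failure propagates to everything downstream. The node names and levels below are hypothetical.

```python
# Illustrative sketch only: interdependent CIs as a directed dependency graph,
# with a simple breadth-first propagation of a failure to all dependents.
import networkx as nx

# Hypothetical multi-level dependency graph: an edge (u, v) means "v depends on u".
g = nx.DiGraph()
g.add_edges_from([
    ("power_substation", "water_pump"),       # level 1: physical dependency
    ("power_substation", "telecom_tower"),
    ("telecom_tower", "scada_controller"),    # level 2: cyber dependency
    ("scada_controller", "water_pump"),       # level 3: logical dependency
])

def cascade(graph, failed_node):
    """Return the set of nodes affected when `failed_node` goes down."""
    affected, frontier = {failed_node}, [failed_node]
    while frontier:
        node = frontier.pop()
        for dependent in graph.successors(node):   # nodes that rely on `node`
            if dependent not in affected:
                affected.add(dependent)
                frontier.append(dependent)
    return affected

print(cascade(g, "power_substation"))   # in this toy graph, all four nodes are affected
```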
{"title":"Multi Level interdependencies Management for resilience in Critical Infrastructures","authors":"Ouafae Kasmi, Amine Baïna, M. Bellafkih","doi":"10.1145/3419604.3419791","url":"https://doi.org/10.1145/3419604.3419791","publicationDate":"2020-09-23"}
Providing high-quality configurable process models is a primary objective for deriving process variants with better accuracy and facilitating process model reuse. For this purpose, many research works have focused on configurable process mining techniques to discover and configure processes from event logs. Moreover, to exploit the knowledge captured in event logs when mining processes, the concept of semantic process mining has been introduced; it combines semantic technologies with process mining. Despite the diversity of work on mining and customizing configurable process models, these techniques still make limited use of semantics to reduce the complexity of discovered processes. It therefore seems pertinent to discover semantically enriched configurable process models directly from event logs, which would facilitate the use of semantics in configuring, checking the conformance of, or enhancing discovered configurable processes. In this paper, we present a comparative study of existing works on mining configurable process models with respect to semantic technologies. Our aim is to propose a new framework to automatically discover semantically enriched configurable processes.
{"title":"Towards Mining Semantically Enriched Configurable Process Models","authors":"Aicha Khannat, Hanae Sbaï, L. Kjiri","doi":"10.1145/3419604.3419797","url":"https://doi.org/10.1145/3419604.3419797","publicationDate":"2020-09-23"}
This paper addresses the problem of automatic meter identification and error management in Arabic poetry. Many approaches use high-level abstractions of poems in their prosodic forms: foot patterns, cord and peg forms, or syllables. Our algorithm directly manipulates binary representations of meters and of the prosodic forms of verses, and computes distances between these representations. It does not need to handle foot-pattern relaxations or defects explicitly, which makes it simpler than the studied works (no symbolic combinatorial analysis rules for exception patterns) and efficient in time and memory thanks to dynamic programming of the Levenshtein distance.
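As a minimal sketch of the distance computation the abstract refers to, the snippet below compares a verse's binary prosodic form against candidate meter patterns using a dynamic-programming Levenshtein distance; the binary encoding convention and the meter strings shown here are hypothetical placeholders, not the authors' tables.

```python
# Minimal sketch, assuming a verse and a meter can both be written as binary strings
# (e.g. '1' = moving letter, '0' = quiescent letter); the exact encoding is not given
# in the abstract.

def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming, O(len(a) * len(b)) time, O(len(b)) memory."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical binary forms of a verse and two candidate meters:
verse = "11010101101010"
meters = {"meter_a": "11010101101010", "meter_b": "11101011101010"}
best = min(meters, key=lambda m: levenshtein(verse, meters[m]))
print(best, levenshtein(verse, meters[best]))   # closest meter and its distance
```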
{"title":"An Efficient Lightweight Algorithm for Automatic Meters Identification and Error Management in Arabic Poetry","authors":"Karim Baïna, Hamza Moutassaref","doi":"10.1145/3419604.3419781","url":"https://doi.org/10.1145/3419604.3419781","publicationDate":"2020-09-23"}
Nowadays, privacy remains one of the most important challenges for enterprises that handle personal data. Many mechanisms are widely used to tackle this challenge and make the use of the Internet more secure and more respectful of privacy. Anonymizing data in databases, using reversible or irreversible techniques, is one such mechanism. A variety of implementations of these techniques are available; however, choosing the suitable category and technique for a specific context is not an easy task. In this paper we focus on the irreversible anonymization category in databases and propose an approach that helps make this choice easier, based on a classification according to criteria. Some of these criteria are well known in the research literature, and we define others related to the application context and the nature of the data. As a result, the security officer can identify the most suitable technique to preserve privacy.
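For illustration only (the paper's classification criteria are not reproduced here), the sketch below applies two widely known irreversible techniques, suppression and generalization, to a hypothetical personal-data table.

```python
# Illustrative sketch of two common irreversible anonymization techniques
# (suppression and generalization) on a hypothetical table; not the paper's taxonomy.
import pandas as pd

df = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "age":  [34, 41, 29],
    "city": ["Rabat", "Casablanca", "Fes"],
    "diag": ["flu", "asthma", "flu"],
})

anonymized = df.copy()
anonymized["name"] = "*"                          # suppression: drop the direct identifier
anonymized["age"] = pd.cut(anonymized["age"],     # generalization: exact age -> age range
                           bins=[0, 30, 40, 120],
                           labels=["<=30", "31-40", ">40"])
print(anonymized)
```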
{"title":"A Qualitative-Driven Study of Irreversible Data Anonymizing Techniques in Databases","authors":"Siham Arfaoui, A. Belmekki, Abdellatif Mezrioui","doi":"10.1145/3419604.3419788","url":"https://doi.org/10.1145/3419604.3419788","publicationDate":"2020-09-23"}
Phosphorus is an important and finite resource that is used mainly to produce the phosphate fertilizers that support crop production. Turning phosphate ore into phosphate requires a beneficiation process that removes the unwanted minerals contained in the ore and increases the grade of the mining product. The screening unit is a very important and critical step in this process. However, during this stage many dysfunctions and anomalies can occur that affect the yield and quality of the product, so the unit must be monitored for real-time quality control. The purpose of this work is to automate surveillance and anomaly detection on the screening unit using artificial vision techniques. A classical supervised image classification approach is used, based on three handcrafted descriptors, HOG, SIFT, and LBP, each combined with a support vector machine classifier. The evaluation of the three combinations shows that the HOG-SVM combination offers the best trade-off between accuracy and runtime.
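A minimal sketch of the HOG + SVM combination reported as the best trade-off; the images, HOG parameters, and SVM settings below are placeholders rather than the authors' experimental setup.

```python
# Sketch of a HOG feature extractor feeding an SVM classifier on hypothetical image patches.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def hog_features(images):
    # images: iterable of 2-D grayscale arrays of identical size
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Hypothetical data: 64x64 grayscale patches labelled normal (0) / anomalous (1)
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(
    hog_features(images), labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```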
{"title":"Evaluation of Classical Descriptors coupled to Support Vector Machine Classifier for Phosphate ore Screening monitoring","authors":"Laila El hiouile, A. Errami, N. Azami, R. Majdoul, L. Deshayes","doi":"10.1145/3419604.3419785","url":"https://doi.org/10.1145/3419604.3419785","publicationDate":"2020-09-23"}
The adoption of complex machine learning (ML) models in recent years has brought a new challenge: how to interpret, understand, and explain the reasoning behind these models' predictions. Treating complex ML systems as trustworthy black boxes without domain-knowledge checking has led to some disastrous outcomes. In this context, interpretability and explainability are often used interchangeably, while fairness has recently become popular due to discrimination problems in ML. Although closely related, interpretability and explainability denote different aspects of prediction. In this light, the aim of this paper is to give an overview of the interpretability, explainability, and fairness concepts in the literature and to evaluate the performance of the Patient Rule Induction Method (PRIM) with respect to these aspects.
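For readers unfamiliar with PRIM, the sketch below implements only its peeling phase (no pasting or covering) under simplified stopping rules, on hypothetical data; it is meant as an illustration of the method being evaluated, not the authors' implementation.

```python
# Simplified PRIM-style peeling: shrink a box so that the mean of the target inside it rises.
import numpy as np

def prim_peel(X, y, alpha=0.1, min_support=0.1):
    """Greedy peeling: repeatedly trim a fraction `alpha` of the points along the
    single box boundary that most increases the mean of y inside the box."""
    inside = np.ones(len(y), dtype=bool)
    box = [(X[:, j].min(), X[:, j].max()) for j in range(X.shape[1])]
    while inside.mean() > min_support:
        best = None
        for j in range(X.shape[1]):
            lo_cut = np.quantile(X[inside, j], alpha)
            hi_cut = np.quantile(X[inside, j], 1 - alpha)
            for side, keep in (("lo", X[:, j] >= lo_cut), ("hi", X[:, j] <= hi_cut)):
                cand = inside & keep
                if cand.any() and (best is None or y[cand].mean() > best[0]):
                    best = (y[cand].mean(), j, side, lo_cut if side == "lo" else hi_cut, cand)
        if best is None or best[0] <= y[inside].mean():
            break   # no peel improves the box mean: stop
        _, j, side, cut, cand = best
        box[j] = (cut, box[j][1]) if side == "lo" else (box[j][0], cut)
        inside = cand
    return box, inside

# Hypothetical data: the target is high only where both features exceed 0.6
rng = np.random.default_rng(1)
X = rng.random((500, 2))
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.6)).astype(float)
box, inside = prim_peel(X, y)
print(box, round(y[inside].mean(), 2), round(inside.mean(), 2))
```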
{"title":"State of the art of Fairness, Interpretability and Explainability in Machine Learning: Case of PRIM","authors":"Rym Nassih, A. Berrado","doi":"10.1145/3419604.3419776","url":"https://doi.org/10.1145/3419604.3419776","publicationDate":"2020-09-23"}
In the big data era, text classification is considered one of the most important machine learning application domains. To build an efficient classification algorithm, however, feature selection is a fundamental step to reduce dimensionality, achieve better accuracy, and shorten execution time. In the literature, most feature ranking techniques are document-based. The major weakness of this approach is that it favours terms occurring frequently in the documents and neglects the correlation between terms and categories. In this work, unlike traditional approaches that handle documents individually, we use the MapReduce paradigm to process the documents of each category as a single document. We then introduce a parallel frequency-category feature selection method, independent of any classifier, to select the most relevant features. Experimental results on the 20-Newsgroups dataset show that our approach improves classification accuracy to 90.3% while keeping the system simple and the execution time low.
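The exact frequency-category score and its MapReduce implementation are not given in the abstract; as a single-machine illustration of the general idea, the sketch below merges the documents of each category into one "big document" and ranks terms by how concentrated their frequency is in a single category (the corpus and the scoring formula are assumptions).

```python
# Illustrative single-machine analogue of a frequency-category ranking (not the paper's metric).
from collections import Counter

corpus = {   # hypothetical tiny corpus: category -> list of documents
    "sport":    ["match score team", "team win score"],
    "politics": ["vote law team", "law vote debate"],
}

cat_tf = {}
for cat, docs in corpus.items():
    cat_tf[cat] = Counter(" ".join(docs).split())   # one "big document" per category

def category_score(term):
    freqs = [tf[term] for tf in cat_tf.values()]
    return max(freqs) / sum(freqs)   # 1.0 = the term appears in a single category only

vocab = set().union(*cat_tf.values())
ranked = sorted(vocab, key=category_score, reverse=True)
print(ranked[:5])   # most category-discriminative terms first
```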
{"title":"A Frequency-Category Based Feature Selection in Big Data for Text Classification","authors":"Houda Amazal, M. Ramdani, M. Kissi","doi":"10.1145/3419604.3419620","url":"https://doi.org/10.1145/3419604.3419620","publicationDate":"2020-09-23"}
The Internet of Things is an innovative technology that connects physical things with the digital world through heterogeneous networks and communication technologies. The Routing Protocol for Low-Power and Lossy Networks (RPL) is the standardized routing protocol for LLNs. However, a growing body of experimental results shows that RPL performs poorly in throughput and in adaptability to network dynamics. In this study, we apply properties of arrangement graphs to design a new structured routing protocol, an extension of RPL named the Arrangement Graph based Adaptive routing protocol (ARG-RPL), which improves RPL's support for high throughput, adaptivity, and mobility without any modification of or assumption about the objective functions (OFs). In this protocol, the IDs of two adjacent nodes differ in only one digit, so self-configuration and self-optimization in LLNs are easy while maintenance costs remain low. Distributed algorithms have been developed, consisting of two stages: an initialization stage and a reactive route discovery stage. We implement ARG-RPL on the Contiki operating system and conduct an extensive evaluation using large-scale simulations in Cooja. Analysis of the experimental results shows that network establishment and routing achieve better performance than with RPL and ER-RPL.
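As an illustration of the arrangement-graph structure behind this addressing scheme, the sketch below builds A(n, k), whose node IDs are k-permutations of n symbols and whose neighbours differ in exactly one digit; the values of n and k are arbitrary, and the mapping to actual ARG-RPL addresses is not detailed in the abstract.

```python
# Illustrative construction of the arrangement graph A(n, k):
# nodes are k-permutations of n symbols, adjacent iff their IDs differ in exactly one digit.
from itertools import permutations

def arrangement_graph(n, k):
    nodes = list(permutations(range(1, n + 1), k))
    def adjacent(u, v):
        return sum(a != b for a, b in zip(u, v)) == 1
    edges = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:] if adjacent(u, v)]
    return nodes, edges

nodes, edges = arrangement_graph(n=4, k=2)
# A(4, 2) has 4!/2! = 12 nodes, each of degree k*(n-k) = 4, hence 24 edges.
print(len(nodes), "nodes,", len(edges), "edges")
```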
{"title":"ARG-RPL: Arrangement Graph-, Region-Based Routing Protocol for Internet of Things","authors":"Abdellatif Serhani, N. Naja, A. Jamali","doi":"10.1145/3419604.3419761","url":"https://doi.org/10.1145/3419604.3419761","publicationDate":"2020-09-23"}
Customer segmentation has been a topic of interest for many industry, academic, and marketing leaders. The potential value of a customer to a company can be a core ingredient in decision-making. One of the big challenges in customer-centric organizations is understanding customers, the differences between them, and how to score them. With the capabilities now available, using new technologies such as machine learning algorithms and data processing, we can create a powerful framework that allows us to better understand customers' needs and behaviors and to act appropriately to satisfy them. In this paper, we propose a new model based on the RFM (Recency, Frequency, Monetary) model and the k-means algorithm to address these challenges. This model uses clustering, scoring, and distribution analysis to give a clear idea of which actions should be taken to improve customer satisfaction.
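A minimal sketch of the RFM-plus-k-means idea on a hypothetical transactions table; the column names, the reference date, and the number of clusters are assumptions, not the paper's settings.

```python
# RFM scoring per customer followed by k-means clustering into segments.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

tx = pd.DataFrame({                       # hypothetical purchase history
    "customer_id": [1, 1, 2, 3, 3, 3],
    "date": pd.to_datetime(["2020-01-05", "2020-03-01", "2020-02-10",
                            "2020-01-20", "2020-02-15", "2020-03-10"]),
    "amount": [120.0, 80.0, 40.0, 200.0, 150.0, 90.0],
})

now = tx["date"].max()
rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (now - d.max()).days),   # days since last purchase
    frequency=("date", "count"),                         # number of purchases
    monetary=("amount", "sum"),                          # total spend
)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
rfm["segment"] = kmeans.fit_predict(StandardScaler().fit_transform(rfm))
print(rfm)
```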
{"title":"Customer Segmentation With Machine Learning: New Strategy For Targeted Actions","authors":"Lahcen Abidar, Dounia Zaidouni, Abdeslam Ennouaary","doi":"10.1145/3419604.3419794","url":"https://doi.org/10.1145/3419604.3419794","publicationDate":"2020-09-23"}
Cardiovascular diseases (CVD) are the principal cause of death globally. Electrocardiography (ECG) is a widely adopted tool for quantifying heart activity and detecting heart abnormalities. Arrhythmia is one of these CVDs, and its detection relies heavily on continuous ECG recordings to identify and predict irregularities in heart rhythms. Various deep learning (DL) approaches have been used extensively to classify and predict different heart rhythms. However, most of the proposed works do not consider hyperparameter optimization and tuning to exploit the full potential of the DL model and achieve higher accuracy. Besides, very few works implement the full monitoring cycle and close the loop by proposing clinical and non-clinical recommendations. Therefore, in this paper, we adopt a Convolutional Neural Network (CNN) model and apply various hyperparameter optimizations that capture properties of the data, the training, and the model. We also close the monitoring loop and suggest tailored recommendations for each category of arrhythmia that go beyond simple screening to deeper diagnosis, using the Global Registry of Acute Coronary Events (GRACE) and the European Guidelines on CVD prevention in clinical practice (ESC/EAS 2016). We conducted a set of experiments to evaluate our model and the hyperparameter optimizations we explored; the results show a significant improvement in prediction accuracy after a couple of optimization iterations.
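A minimal sketch of a 1-D CNN beat classifier with a tiny incremental grid over two hyperparameters; the architecture, the 187-sample input length, and the search space are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: small 1-D CNN for beat classification plus an incremental sweep over
# the number of filters and the learning rate, keeping the best validation accuracy.
import numpy as np
import tensorflow as tf

def build_model(filters, learning_rate, n_classes=5, length=187):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(length, 1)),
        tf.keras.layers.Conv1D(filters, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(filters * 2, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical data standing in for segmented ECG beats and their arrhythmia labels
x = np.random.rand(256, 187, 1).astype("float32")
y = np.random.randint(0, 5, 256)

best = None
for filters in (16, 32):        # incremental tuning: widen the network ...
    for lr in (1e-3, 1e-4):     # ... and adjust the learning rate
        model = build_model(filters, lr)
        hist = model.fit(x, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
        acc = hist.history["val_accuracy"][-1]
        if best is None or acc > best[0]:
            best = (acc, filters, lr)
print("best validation accuracy %.3f with filters=%d, lr=%g" % best)
```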
{"title":"ECG-based Arrhythmia Classification & Clinical Suggestions: An Incremental Approach of Hyperparameter Tuning","authors":"M. Serhani, A. Navaz, Hany Al Ashwal, N. Al-Qirim","doi":"10.1145/3419604.3419787","url":"https://doi.org/10.1145/3419604.3419787","publicationDate":"2020-09-23"}