Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204111
Hajar Chouhayebi, J. Riffi, Mohamed Adnane Mahraz, Ali Yahyaouy, H. Tairi, Nawal Alioua
The goal of facial expression recognition (FER) is to detect human emotion from facial images. The biggest challenge in recognizing facial expressions is extracting distinctive characteristics from images of the human face that differentiate diverse emotions. To tackle this challenge, we propose a FER algorithm based on geometric features. First, facial landmarks are detected in the input video sequence using the Dlib library, and geometric features are extracted from the spatial positions between landmarks. These feature vectors are then fed to a Support Vector Machine (SVM) classifier to classify facial expressions. Experimental results demonstrate that our method, applied to a fusion of two databases (a personal database and BUHMAP), achieves 94.5% accuracy.
Title: Facial expression recognition based on geometric features
Venue: 2020 International Conference on Intelligent Systems and Computer Vision (ISCV)
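As a rough illustration of the geometric-feature step described above, the sketch below computes scale-normalised pairwise distances between landmark points. It is a minimal sketch under stated assumptions: the Dlib landmark detection and the SVM classifier are omitted, and the landmark coordinates are invented for the example.

```python
import math

def geometric_features(landmarks, left_eye=0, right_eye=1):
    """Pairwise distances between facial landmarks, normalised by the
    inter-ocular distance so the feature vector is scale-invariant."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    scale = dist(landmarks[left_eye], landmarks[right_eye])
    feats = []
    n = len(landmarks)
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(dist(landmarks[i], landmarks[j]) / scale)
    return feats

# Toy landmarks (eye corners, nose tip, mouth corners) -- not Dlib output.
points = [(30, 40), (70, 40), (50, 60), (35, 80), (65, 80)]
vec = geometric_features(points)
print(len(vec))  # 5 landmarks -> 10 pairwise distances
```

In a real pipeline the 68 Dlib landmarks would replace the toy points and the resulting vector would be passed to an SVM.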
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204110
F. Rahmani, N. Touhami, N. Taher, A. B. Kchairi
This work presents a reconfigurable radiation-pattern antenna for WLAN wireless systems. The antenna is fed by a coaxial cable and consists of a star patch, eight hexagon-shaped radiation cells and a circular planar ground. The proposed pattern-reconfigurable antenna achieves eight reconfigurable states by switching eight PIN diodes ON and OFF to change the beam direction. The state change is performed by PIN diodes connected in the gap between the central star patch and the radiation cells. The antenna has a bandwidth of 360 MHz and can steer the beam in the 20° to 340° range in the azimuth plane. The main beam can be switched among eight directions in the elevation plane. Simulated results are also presented and investigated.
Title: Reconfigurable Radiation Pattern Antenna with eight Switchable Beams in Azimuth Plane for WLAN Wireless System
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204197
Fadwa Benjelloun, Imane El Manaa, M. A. Sabri, Ali Yahyaouy, A. Aarab
Image segmentation, object detection and classification are three closely related tasks that can be greatly improved when they are solved jointly by feeding information from one task to another. Researchers have proposed different methods, some of which give good results while others fail in certain circumstances. In this paper, we compare two techniques for recognizing moving objects in a video scene. The first approach is based on deep learning: we implemented the Fast YOLO model to detect objects. The second approach is based on object segmentation: we used the Delaunay triangulation method to recover homogeneous regions, then combined the HOG, color-histogram and GLCM features associated with each object. The classification phase is carried out by AlexNet for both approaches. The experiment was carried out on several video clips of highways and local roads with different traffic and lighting conditions.
Title: The comparison between two methods of object detection: Fast Yolo model and Delaunay Triangulation
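The feature-fusion step in the second approach above (colour histogram plus GLCM texture, concatenated per object) can be sketched as follows. This is an illustrative stdlib-only sketch, not the authors' implementation: it works on a tiny made-up grayscale image, the HOG descriptor and the AlexNet classifier are omitted, and the bin counts are arbitrary.

```python
def gray_histogram(img, bins=4, levels=256):
    """Normalised grey-level histogram of a 2-D image (list of rows)."""
    hist = [0] * bins
    total = 0
    for row in img:
        for v in row:
            hist[v * bins // levels] += 1
            total += 1
    return [h / total for h in hist]

def glcm_contrast(img, bins=4, levels=256):
    """Contrast of the horizontal grey-level co-occurrence matrix
    (distance 1, angle 0) on a quantised image."""
    glcm = [[0] * bins for _ in range(bins)]
    pairs = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            glcm[a * bins // levels][b * bins // levels] += 1
            pairs += 1
    return sum(glcm[i][j] * (i - j) ** 2
               for i in range(bins) for j in range(bins)) / pairs

# Tiny made-up grayscale patch standing in for one segmented object.
img = [[0, 0, 128, 255], [0, 64, 128, 255], [64, 64, 192, 255]]
feature_vector = gray_histogram(img) + [glcm_contrast(img)]
print(feature_vector)
```

In the paper's setting, a HOG descriptor would be concatenated as well, and the fused vector would describe each region recovered by the Delaunay triangulation.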
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204253
Mohammed Kadri, H. Khalloufi, Ahmed Azough
Cultural tourism is a growing sector and one of the best ways to discover the cultural heritage and way of life of a region and its people. However, without modernization and digital mediation, this sector can deteriorate rapidly. In this paper, a novel, playful and informative cultural tourism experience is presented. It consists of a virtual Space Door, accessible through augmented reality, leading to a virtual museum built using virtual reality. The prototype was evaluated in a real environment to confirm its usability, ease of use and appeal.
Title: V-Museum: A Virtual Museum Based on Augmented and Virtual Realities for Cultural Heritage Mediation
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204044
Leila Amdah, A. Anwar
Business process modeling is now essential to building businesses, and there is a wide choice of modeling languages on the market, the most popular being UML and BPMN. However, when it comes to modeling a specific domain, these languages remain limited. A DSL, by contrast, allows a concise representation of the semantics of a particular business field, which enables the development of coherent and expressive business process models. These models can thus be used not only for modeling a system but also for generating executable applications. Collaborative business processes are increasingly present in practice. Their modeling, integration and execution become more and more complex because they involve an exchange of resources and data between several partners, and current modeling languages such as BPMN do not allow detailed modeling of these environments. In this paper, we therefore propose a DSL to model business processes in a collaborative environment. Creating this DSL involves three stages: (a) defining the abstract syntax of our language, which consists in creating our own metamodel; (b) defining the semantics, which describe the behavior of each element of our language; (c) defining a graphical concrete syntax that allows clear visualization for modeling.
Title: A DSL for collaborative Business Process
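Stage (a) above, the abstract syntax, can be illustrated with a toy metamodel fragment. All class and field names below (Task, MessageFlow, CollaborativeProcess) are hypothetical, since the abstract does not give the actual metamodel, but the sketch shows the defining constraint of a collaborative process: message flows exchange data across partner boundaries.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    partner: str  # the partner (pool) that owns this task

@dataclass
class MessageFlow:
    source: "Task"  # resource/data exchange between two partners
    target: "Task"

@dataclass
class CollaborativeProcess:
    partners: List[str] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)
    flows: List[MessageFlow] = field(default_factory=list)

    def add_flow(self, src: Task, dst: Task) -> MessageFlow:
        # A well-formedness rule the metamodel can enforce:
        # message flows must cross partner boundaries.
        if src.partner == dst.partner:
            raise ValueError("message flows must connect different partners")
        flow = MessageFlow(src, dst)
        self.flows.append(flow)
        return flow

proc = CollaborativeProcess(partners=["Supplier", "Retailer"])
order = Task("Send order", "Retailer")
ship = Task("Ship goods", "Supplier")
proc.tasks += [order, ship]
proc.add_flow(order, ship)
print(len(proc.flows))  # 1
```

Stages (b) and (c), the semantics and the graphical syntax, would be layered on top of such a metamodel.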
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204282
M. Qjidaa, A. Ben-fares, Y. Mechbal, H. Amakdouf, M. Maaroufi, B. Alami, H. Qjidaa
To control the spread of the COVID-19 virus and gain critical time in containing the disease, rapid and accurate diagnostic methods based on artificial intelligence are urgently needed. In this article, we propose a clinical decision support system for the early detection of COVID-19 using deep learning on chest radiographic images. We develop a deep learning method that extracts the graphical characteristics of COVID-19 in order to provide a clinical diagnosis ahead of the pathogen test. To this end, we collected 100 images of COVID-19 cases confirmed by pathogen testing, 100 images diagnosed with typical viral pneumonia and 100 images of normal cases. The proposed model first preprocesses the input images and applies data augmentation, then extracts features and learns from them, and finally performs classification and prediction with a fully connected network composed of several classifiers. Deep learning and classification were carried out using the VGG convolutional neural network. The proposed model achieved an accuracy of 92.5% in internal validation and 87.5% in external validation, an AUC of 97% in internal validation and 95% in external validation, and a sensitivity of 92% in internal validation and 87% in external validation. These test-phase results show that our model is very effective in detecting COVID-19 and can be offered to health communities as a precise, rapid and effective clinical decision support system for COVID-19 detection.
Title: Development of a clinical decision support system for the early detection of COVID-19 using deep learning based on chest radiographic images
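The accuracy and sensitivity figures reported above can be illustrated with a short sketch of how these metrics are computed from a model's predictions. The labels below are made up for the example; the VGG model and the AUC computation are omitted.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def sensitivity(y_true, y_pred, positive="covid"):
    """True-positive rate: detected positives / actual positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    actual_pos = sum(1 for t in y_true if t == positive)
    return tp / actual_pos

# Invented ground truth and predictions over the three classes.
y_true = ["covid", "covid", "pneumonia", "normal", "covid", "normal"]
y_pred = ["covid", "pneumonia", "pneumonia", "normal", "covid", "normal"]
print(accuracy(y_true, y_pred))     # 5/6
print(sensitivity(y_true, y_pred))  # 2/3
```

For a screening task, sensitivity is the more safety-critical number, since a missed COVID-19 case is costlier than a false alarm.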
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204277
I. Tikito, N. Souissi
With the digital transformation, businesses and public administrations must change the place of data in the value chain to serve all areas of the business and open up information systems. The value of the knowledge extracted from data is directly linked to the quality of data collection. Mobile devices are particularly suitable for collecting data: they are widespread, convenient and usable at any time. Using mobile devices for data collection therefore represents a paradigm shift rather than simply one more tool added to the existing panoply. The explosion of information and data that these devices generate daily is stored mostly on cloud servers. To reduce the volume of data transferred from mobile devices to the cloud, edge computing processes data at the network edge where it is generated, directly reducing certain characteristics of Big Data. Big Data involves collecting complex data along the "V" dimensions, which describe the quantity and type of data collected as well as their importance and relevance to the requester's challenges. Smart data goes a step further and consists of extracting from the collected data only the information most relevant to the client in order to make predictions. Our results show that using an intelligent data collection process in mobile computing could generate savings in data storage and analysis at the cloud level.
Title: Smart Data Collection in Mobile Edge Computing Environment
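A minimal sketch of the smart-data idea above, filtering at the edge so that only relevant readings reach the cloud, might look like the following. The deviation-from-running-mean rule and the threshold are invented for illustration and are not taken from the paper.

```python
def smart_collect(readings, threshold=2.0):
    """Keep only readings that deviate from the running mean by more
    than `threshold`; everything else never leaves the edge node."""
    kept, total, count = [], 0.0, 0
    for r in readings:
        if count and abs(r - total / count) > threshold:
            kept.append(r)
        total += r
        count += 1
    return kept

# Simulated temperature readings from a mobile sensor.
readings = [20.0, 20.1, 19.9, 27.5, 20.2, 20.0]
to_cloud = smart_collect(readings)
print(to_cloud, f"{1 - len(to_cloud) / len(readings):.0%} less data sent")
```

Here only the anomalous reading is forwarded, which is exactly the storage and analysis saving at the cloud level the abstract describes.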
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204318
Melhaoui Maryem, El Ougli Abdelghani, Tidhaf Belkassem
A wireless sensor network (WSN) is a large set of distributed sensor nodes intended to sense physical data from their surroundings and transmit it to a base station (BS). These sensors collect, aggregate, analyze and communicate physical data that is transformed into useful information. WSNs are usually deployed in critical areas and sensor nodes are non-rechargeable, so they are limited in power supply, which raises numerous challenges in optimizing energy consumption. Since a large share of a sensor's energy is consumed by the transmission unit, it is pivotal to improve how data is transmitted across the network. Optimizing routing protocols in WSNs has therefore become a key axis of interest for enhancing energy conservation and consequently extending the network lifetime. In this regard, this paper first describes the most famous classical clustering-based routing protocol, then discusses its improvements based on intelligent algorithms. Subsequently, the article surveys many different fuzzy-logic-based routing protocols and summarizes their advantages and limitations.
Title: Routing in Wireless Sensor Networks using Fuzzy Logic: A survey
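As a toy illustration of fuzzy-logic cluster-head selection of the kind the surveyed protocols apply, the sketch below combines two membership degrees ("energy is high", "base station is close") with a fuzzy AND (min). The membership functions, thresholds and node values are all invented; no specific surveyed protocol is reproduced.

```python
def ramp(x, lo, hi):
    """Linear membership degree rising from 0 at `lo` to 1 at `hi`."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def chance(residual_energy, dist_to_bs, max_dist=100.0):
    """Cluster-head eligibility: fuzzy AND (min) of 'energy is high'
    and 'base station is close'."""
    energy_high = ramp(residual_energy, 0.2, 0.8)     # energy in [0, 1]
    bs_close = ramp(max_dist - dist_to_bs, 0.0, max_dist)
    return min(energy_high, bs_close)

# (residual energy, distance to BS) per candidate node -- made up.
nodes = {"A": (0.9, 30.0), "B": (0.5, 10.0), "C": (0.9, 90.0)}
head = max(nodes, key=lambda n: chance(*nodes[n]))
print(head)
```

Real protocols in this family typically defuzzify a full Mamdani rule base rather than a single min, but the energy-vs-position trade-off is the same.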
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204052
Wafae Mrabti, B. Bellach, F. Morain-Nicolier, H. Tairi
Tracking human beings in real scenes has attracted great interest in the computer vision community. In this paper, we provide a visual tracking system based on a dissimilarity measure. The proposed method combines the gray Local Dissimilarity Map with the Kalman filter. Experimental results on several image sequences illustrate that the proposed method performs well under several challenging aspects of real-world scenes.
Title: Tracking a human being via the gray local dissimilarity map
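The Kalman-filter half of the tracker above can be sketched as a 1-D constant-velocity filter with dt = 1. This is a generic textbook filter, not the paper's implementation: the Local Dissimilarity Map measurement stage is omitted, and the process/measurement noise values q and r are made up.

```python
def kalman_1d(zs, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over 1-D position measurements.
    State x = [position, velocity]; returns the filtered positions."""
    x = [zs[0], 0.0]                      # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in zs[1:]:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, 1], [0, 1]].
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[1][0] + P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update with measurement z, observation H = [1, 0].
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out

# Target moving at roughly 1 unit/frame with noisy position measurements.
measured = [0.0, 1.2, 1.9, 3.1, 4.0, 4.8]
print(kalman_1d(measured))
```

In the paper's pipeline, the measurement z for each frame would come from localizing the target with the gray Local Dissimilarity Map instead of being given directly.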
Pub Date: 2020-06-01 | DOI: 10.1109/ISCV49265.2020.9204024
Abderrahim Zannou, Abdelhak Boulaalam, E. Nfaoui
The Internet of Things (IoT) is an advanced paradigm of the Internet that allows everything and everyone to be connected and to interact from anywhere, at any time, over any path and network. This paradigm is characterized by constrained nodes and lossy networks, where available resources are limited and the network structure is unstable. Random execution of requests can cause some nodes to fail and, as a consequence, reduce the network lifetime. In this paper, we propose a new strategy that uses a neural network to classify nodes into three levels based on their capabilities. By predicting the capability of a node that joins the lossy network, the classification makes nodes aware of the best nodes to execute or process a given service or task. Simulation results show that our model predicts node capabilities with high accuracy and prolongs the network lifetime.
Title: A Node Capability Classification in Internet of Things
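The three-level classification above can be illustrated with a deliberately simplified stand-in: the paper trains a neural network, but the sketch below uses a hand-weighted capability score with two cut points just to show the idea of mapping a node's resources to three classes. The features, weights and cut points are all invented.

```python
def capability_level(cpu, memory, energy,
                     weights=(0.4, 0.3, 0.3), cuts=(0.33, 0.66)):
    """Map normalised resource readings in [0, 1] to one of three
    capability levels, mimicking the three output classes the paper's
    neural network predicts."""
    score = cpu * weights[0] + memory * weights[1] + energy * weights[2]
    if score < cuts[0]:
        return "low"
    if score < cuts[1]:
        return "medium"
    return "high"

print(capability_level(0.9, 0.8, 0.9))  # high
print(capability_level(0.1, 0.2, 0.1))  # low
```

With such a classification in place, a requesting node can route a heavy task to a "high" neighbour and spare constrained nodes, which is what prolongs the network lifetime.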