Pub Date: 2023-04-07, DOI: 10.26599/BDMA.2022.9020044
Zouhaier Brahmia;Fabio Grandi;Rafik Bouaziz
Temporal ontologies make it possible to represent not only concepts, their properties, and their relationships, but also time-varying information, through explicit versioning of definitions or through the four-dimensional perdurantist view. They are widely used to formally represent temporal data semantics in applications belonging to different fields (e.g., Semantic Web, expert systems, knowledge bases, big data, and artificial intelligence), and they facilitate temporal knowledge representation and discovery with support for temporal data querying and reasoning. However, there is no standard or consensual temporal ontology query language. In previous work, we proposed an approach named τJOWL (temporal OWL 2 from temporal JSON, where OWL 2 stands for “OWL 2 Web Ontology Language” and JSON stands for “JavaScript Object Notation”). τJOWL (1) automatically builds a temporal OWL 2 ontology of data, following the Closed World Assumption (CWA), from temporal JSON-based big data, and (2) manages its incremental maintenance, accommodating the evolution of these data in a temporal and multi-schema-version environment. In this paper, we propose a temporal ontology query language for τJOWL, named τSQWRL (temporal SQWRL), designed as a temporal extension of the ontology query language Semantic Query-enhanced Web Rule Language (SQWRL). The new language is inspired by the features of the consensual temporal query language TSQL2 (Temporal SQL2), well known in the temporal (relational) database community. The aim of the proposal is to enable and simplify the task of retrieving any desired ontology version or of specifying any (complex) temporal query on time-varying ontologies generated from time-varying big data. Examples in the Internet of Healthcare Things (IoHT) domain are provided to motivate and illustrate our proposal.
{"title":"τSQWRL: A TSQL2-Like Query Language for Temporal Ontologies Generated from JSON Big Data","authors":"Zouhaier Brahmia;Fabio Grandi;Rafik Bouaziz","doi":"10.26599/BDMA.2022.9020044","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020044","url":null,"abstract":"Temporal ontologies allow to represent not only concepts, their properties, and their relationships, but also time-varying information through explicit versioning of definitions or through the four-dimensional perdurantist view. They are widely used to formally represent temporal data semantics in several applications belonging to different fields (e.g., Semantic Web, expert systems, knowledge bases, big data, and artificial intelligence). They facilitate temporal knowledge representation and discovery, with the support of temporal data querying and reasoning. However, there is no standard or consensual temporal ontology query language. In a previous work, we have proposed an approach named τJOWL (temporal OWL 2 from temporal JSON, where OWL 2 stands for “OWL 2 Web Ontology Language” and JSON stands for “JavaScript Object Notation”). τJOWL allows (1) to automatically build a temporal OWL 2 ontology of data, following the Closed World Assumption (CWA), from temporal JSON-based big data, and (2) to manage its incremental maintenance accommodating their evolution, in a temporal and multi-schema-version environment. In this paper, we propose a temporal ontology query language for rJOWL, named rSQWRL (temporal SQWRL), designed as a temporal extension of the ontology query language-Semantic Query-enhanced Web Rule Language (SQWRL). The new language has been inspired by the features of the consensual temporal query language TSQL2 (Temporal SQL2), well known in the temporal (relational) database community. The aim of the proposal is to enable and simplify the task of retrieving any desired ontology version or of specifying any (complex) temporal query on time-varying ontologies generated from time-varying big data. Some examples, in the Internet of Healthcare Things (IoHT) domain, are provided to motivate and illustrate our proposal.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"288-300"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097652.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67837480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-07, DOI: 10.26599/BDMA.2022.9020053
{"title":"Call for Papers: Special Issue on Intelligent Network Video Advances Based on Transformers","authors":"","doi":"10.26599/BDMA.2022.9020053","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020053","url":null,"abstract":"","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"390-390"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097663.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67838274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Action Recognition (HAR) attempts to recognize human actions from images and videos. The major challenge in HAR is the design of an action descriptor that makes the HAR system robust across different environments. A novel action descriptor is proposed in this study, based on two independent spatial and spectral filters. The proposed descriptor uses a Difference of Gaussian (DoG) filter to extract scale-invariant features and a Difference of Wavelet (DoW) filter to extract spectral information. To create a composite feature vector for a particular test action image, the DoG and DoW features are combined. Linear Discriminant Analysis (LDA), a widely used dimensionality reduction technique, is also applied to eliminate redundant information. Finally, a nearest neighbor method is used for classification. Extensive simulations of the proposed strategy were run on the Weizmann and UCF 11 datasets. Under five-fold cross validation on the Weizmann dataset, the average accuracy of DoG + DoW is 83.6635%, while the average accuracies of DoG and DoW alone are 80.2312% and 77.4215%, respectively. Under five-fold cross validation on the UCF 11 action dataset, the average accuracy of DoG + DoW is 62.5231%, while the average accuracies of DoG and DoW alone are 60.3214% and 58.1247%, respectively. The accuracy on Weizmann is higher than on UCF 11, and in both cases the combined descriptor improves recognition accuracy over either filter alone.
{"title":"Human Action Recognition Using Difference of Gaussian and Difference of Wavelet","authors":"Gopampallikar Vinoda Reddy;Kongara Deepika;Lakshmanan Malliga;Duraivelu Hemanand;Chinnadurai Senthilkumar;Subburayalu Gopalakrishnan;Yousef Farhaoui","doi":"10.26599/BDMA.2022.9020040","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020040","url":null,"abstract":"Human Action Recognition (HAR) attempts to recognize the human action from images and videos. The major challenge in HAR is the design of an action descriptor that makes the HAR system robust for different environments. A novel action descriptor is proposed in this study, based on two independent spatial and spectral filters. The proposed descriptor uses a Difference of Gaussian (DoG) filter to extract scale-invariant features and a Difference of Wavelet (DoW) filter to extract spectral information. To create a composite feature vector for a particular test action picture, the Discriminant of Guassian (DoG) and Difference of Wavelet (DoW) features are combined. Linear Discriminant Analysis (LDA), a widely used dimensionality reduction technique, is also used to eliminate duplicate data. Finally, a closest neighbor method is used to classify the dataset. Weizmann and UCF 11 datasets were used to run extensive simulations of the suggested strategy, and the accuracy assessed after the simulations were run on Weizmann datasets for five-fold cross validation is shown to perform well. The average accuracy of DoG + DoW is observed as 83.6635% while the average accuracy of Discrinanat of Guassian (DoG) and Difference of Wavelet (DoW) is observed as 80.2312% and 77.4215%, respectively. The average accuracy measured after the simulation of proposed methods over UCF 11 action dataset for five-fold cross validation DoG + DoW is observed as 62.5231% while the average accuracy of Difference of Guassian (DoG) and Difference of Wavelet (DoW) is observed as 60.3214% and 58.1247%, respectively. From the above accuracy observations, the accuracy of Weizmann is high compared to the accuracy of UCF 11, hence verifying the effectiveness in the improvisation of recognition accuracy.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"336-346"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097655.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67838276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-07, DOI: 10.26599/BDMA.2022.9020022
Abdelaaziz Hessane;Ahmed El Youssefi;Yousef Farhaoui;Badraddine Aghoutane;Fatima Amounas
Date palm production is critical to oasis agriculture, owing to its economic importance and nutritional advantages. Numerous diseases endanger this precious tree, putting a strain on the economy and environment. White scale (Parlatoria blanchardi) is a damaging pest that degrades the quality of dates; when an infestation reaches a certain degree, it can result in the tree's death. To counter this threat, precise detection of infected leaves and of their infestation degree is important for deciding whether chemical treatment is necessary. This decision is crucial for farmers who wish to minimize yield losses while preserving production quality. For this purpose, we propose a feature extraction and machine learning (ML) based framework for classifying the stages of infestation by white scale disease (WSD) in date palm trees from leaflet images. Eighty gray level co-occurrence matrix (GLCM) texture features and nine hue, saturation, and value (HSV) color-moment features are extracted from the grayscale and color images of the dataset. To classify WSD into its four classes (healthy, low infestation degree, medium infestation degree, and high infestation degree), two types of ML algorithms were tested: classical methods, namely support vector machine (SVM) and k-nearest neighbors (KNN), and ensemble learning methods, namely random forest (RF) and light gradient boosting machine (LightGBM). The ML models were trained and evaluated on two datasets: the first is composed of the extracted GLCM features only, and the second combines the GLCM and HSV descriptors. The results indicate that the SVM classifier performs best on the combined GLCM and HSV features, with an accuracy of 98.29%. The proposed framework could benefit the oasis agricultural community through early detection of date palm white scale disease (DPWSD) and by assisting in the adoption of preventive measures to protect both date palm trees and crop yield.
{"title":"A Machine Learning Based Framework for a Stage-Wise Classification of Date Palm White Scale Disease","authors":"Abdelaaziz Hessane;Ahmed El Youssefi;Yousef Farhaoui;Badraddine Aghoutane;Fatima Amounas","doi":"10.26599/BDMA.2022.9020022","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020022","url":null,"abstract":"Date palm production is critical to oasis agriculture, owing to its economic importance and nutritional advantages. Numerous diseases endanger this precious tree, putting a strain on the economy and environment. White scale Parlatoria blanchardi is a damaging bug that degrades the quality of dates. When an infestation reaches a specific degree, it might result in the tree's death. To counter this threat, precise detection of infected leaves and its infestation degree is important to decide if chemical treatment is necessary. This decision is crucial for farmers who wish to minimize yield losses while preserving production quality. For this purpose, we propose a feature extraction and machine learning (ML) technique based framework for classifying the stages of infestation by white scale disease (WSD) in date palm trees by investigating their leaflets images. 80 gray level co-occurrence matrix (GLCM) texture features and 9 hue, saturation, and value (HSV) color moments features are extracted from both grayscale and color images of the used dataset. To classify the WSD into its four classes (healthy, low infestation degree, medium infestation degree, and high infestation degree), two types of ML algorithms were tested; classical machine learning methods, namely, support vector machine (SVM) and k-nearest neighbors (KNN), and ensemble learning methods such as random forest (RF) and light gradient boosting machine (LightGBM). The ML models were trained and evaluated using two datasets: the first is composed of the extracted GLCM features only, and the second combines GLCM and HSV descriptors. The results indicate that SVM classifier outperformed on combined GLCM and HSV features with an accuracy of 98.29%. The proposed framework could be beneficial to the oasis agricultural community in terms of early detection of date palm white scale disease (DPWSD) and assisting in the adoption of preventive measures to protect both date palm trees and crop yield.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"263-272"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097658.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67837482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Industrial Internet of Things (IIoT) represents the expansion of the Internet of Things (IoT) into industrial sectors. It applies embedded technologies in manufacturing fields to enhance their operations. However, IIoT involves security vulnerabilities that are more damaging than those of IoT. Accordingly, Intrusion Detection Systems (IDSs) have been developed to forestall inevitable harmful intrusions; IDSs monitor the environment to identify intrusions in real time. This study designs an intrusion detection model exploiting feature engineering and machine learning for IIoT security. We combine Isolation Forest (IF) with Pearson's Correlation Coefficient (PCC) to reduce computational cost and prediction time. IF is used to detect and remove outliers from the datasets, and PCC is applied to choose the most appropriate features. PCC and IF are applied in both orders (PCCIF and IFPCC). A Random Forest (RF) classifier is then implemented to enhance IDS performance. For evaluation, we use the Bot-IoT and NF-UNSW-NB15-v2 datasets. RF-PCCIF and RF-IFPCC show noteworthy results, with 99.98% and 99.99% Accuracy (ACC) and 6.18 s and 6.25 s prediction time on Bot-IoT, respectively. The two models also score 99.30% and 99.18% ACC and 6.71 s and 6.87 s prediction time on NF-UNSW-NB15-v2, respectively. The results show that the designed model offers several advantages and higher performance than related models.
{"title":"An Ensemble Learning Based Intrusion Detection Model for Industrial IoT Security","authors":"Mouaad Mohy-Eddine;Azidine Guezzaz;Said Benkirane;Mourade Azrour;Yousef Farhaoui","doi":"10.26599/BDMA.2022.9020032","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020032","url":null,"abstract":"Industrial Internet of Things (IIoT) represents the expansion of the Internet of Things (IoT) in industrial sectors. It is designed to implicate embedded technologies in manufacturing fields to enhance their operations. However, IIoT involves some security vulnerabilities that are more damaging than those of IoT. Accordingly, Intrusion Detection Systems (IDSs) have been developed to forestall inevitable harmful intrusions. IDSs survey the environment to identify intrusions in real time. This study designs an intrusion detection model exploiting feature engineering and machine learning for IIoT security. We combine Isolation Forest (IF) with Pearson's Correlation Coefficient (PCC) to reduce computational cost and prediction time. IF is exploited to detect and remove outliers from datasets. We apply PCC to choose the most appropriate features. PCC and IF are applied exchangeably (PCCIF and IFPCC). The Random Forest (RF) classifier is implemented to enhance IDS performances. For evaluation, we use the Bot-IoT and NF-UNSW-NB15-v2 datasets. RF-PCCIF and RF-IFPCC show noteworthy results with 99.98% and 99.99% Accuracy (ACC) and 6.18s and 6.25s prediction time on Bot-IoT, respectively. The two models also score 99.30% and 99.18% ACC and 6.71 s and 6.87s prediction time on NF-UNSW-NB15-v2, respectively. Results prove that our designed model has several advantages and higher performance than related models.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"273-287"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097653.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67999317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing (CC) is a technology that has made it easier to access network and computing resources on demand, such as storage and data management services, and it aims to strengthen systems and make them more useful. Despite these advantages, cloud providers face many security limitations; in particular, the security of resources and services represents a real challenge for cloud technologies. For this reason, a set of solutions has been implemented to improve cloud security by monitoring resources, services, and networks and detecting attacks. An intrusion detection system (IDS) is a mechanism used to monitor traffic within networks and detect abnormal activities. This paper presents a cloud-based intrusion detection model based on random forest (RF) and feature engineering. Specifically, an RF classifier is integrated to enhance the accuracy (ACC) of the proposed detection model. The proposed approach has been evaluated and validated on two datasets, giving 98.3% ACC on Bot-IoT and 99.99% ACC on NSL-KDD. The obtained results show good performance in terms of ACC, precision, and recall compared with recent related works.
{"title":"Cloud-Based Intrusion Detection Approach Using Machine Learning Techniques","authors":"Hanaa Attou;Azidine Guezzaz;Said Benkirane;Mourade Azrour;Yousef Farhaoui","doi":"10.26599/BDMA.2022.9020038","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020038","url":null,"abstract":"Cloud computing (CC) is a novel technology that has made it easier to access network and computer resources on demand such as storage and data management services. In addition, it aims to strengthen systems and make them useful. Regardless of these advantages, cloud providers suffer from many security limits. Particularly, the security of resources and services represents a real challenge for cloud technologies. For this reason, a set of solutions have been implemented to improve cloud security by monitoring resources, services, and networks, then detect attacks. Actually, intrusion detection system (IDS) is an enhanced mechanism used to control traffic within networks and detect abnormal activities. This paper presents a cloud-based intrusion detection model based on random forest (RF) and feature engineering. Specifically, the RF classifier is obtained and integrated to enhance accuracy (ACC) of the proposed detection model. The proposed model approach has been evaluated and validated on two datasets and gives 98.3% ACC and 99.99% ACC using Bot-IoT and NSL-KDD datasets, respectively. Consequently, the obtained results present good performances in terms of ACC, precision, and recall when compared to the recent related works.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"311-320"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097662.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67999319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-07, DOI: 10.26599/BDMA.2022.9020013
Edeh Michael Onyema;Rijwan Khan;Nwafor Chika Eucheria;Tribhuwan Kumar
The rapid spread of Coronavirus Disease 2019 led to global lockdowns and disruptions in the academic sector. This study examined the impact of mobile technology on physics education during lockdowns. Data were collected through an online survey and evaluated using regression, frequency analysis, and analysis of variance (ANOVA). The findings revealed that the use of mobile technology had statistically significant effects on the academic activities of physics instructors and students during the coronavirus lockdown. Most participants reported that mobile technologies such as smartphones, laptops, PDAs, Zoom, and mobile apps were very useful and helpful for continuing education amid the pandemic restrictions, and that online teaching with smartphones and laptops on different platforms was effective during the lockdown. The paper brings to the limelight the growing power of mobile technology solutions in physics education.
{"title":"Impact of Mobile Technology and Use of Big Data in Physics Education During Coronavirus Lockdown","authors":"Edeh Michael Onyema;Rijwan Khan;Nwafor Chika Eucheria;Tribhuwan Kumar","doi":"10.26599/BDMA.2022.9020013","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020013","url":null,"abstract":"The speed of spread of Coronavirus Disease 2019 led to global lockdowns and disruptions in the academic sector. The study examined the impact of mobile technology on physics education during lockdowns. Data were collected through an online survey and later evaluated using regression tools, frequency, and an analysis of variance (ANOVA). The findings revealed that the usage of mobile technology had statistically significant effects on physics instructors' and students' academics during the coronavirus lockdown. Most of the participants admitted that the use of mobile technologies such as smartphones, laptops, PDAs, Zoom, mobile apps, etc. were very useful and helpful for continued education amid the pandemic restrictions. Online teaching is very effective during lock-down with smartphones and laptops on different platforms. The paper brings the limelight to the growing power of mobile technology solutions in physics education.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"381-389"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097656.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67838273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of hand gesture recognition systems has gained increasing attention in recent years, owing to its support of modern human-computer interfaces; sign language recognition in particular enables communication with hearing- and speech-impaired people. In conventional works, various image processing techniques such as segmentation, optimization, and classification are deployed for hand gesture recognition. However, these approaches suffer from inefficient handling of high-dimensional datasets, long processing times, and increased false positives, error rates, and misclassifications. Hence, this research work develops an efficient hand gesture image recognition system using advanced image processing techniques. During image segmentation, skin color detection and morphological operations are performed to accurately segment the hand gesture region. Then, the Heuristic Manta-ray Foraging Optimization (HMFO) technique is employed to optimally select features by computing the best fitness value; the reduced feature dimensionality helps increase classification accuracy with a reduced error rate. Finally, an Adaptive Extreme Learning Machine (AELM) based classification technique is employed to predict the recognition output. During results validation, various evaluation measures are used to compare the proposed model's performance with that of other classification approaches.
{"title":"An Intelligent Heuristic Manta-Ray Foraging Optimization and Adaptive Extreme Learning Machine for Hand Gesture Image Recognition","authors":"Seetharam Khetavath;Navalpur Chinnappan Sendhilkumar;Pandurangan Mukunthan;Selvaganesan Jana;Subburayalu Gopalakrishnan;Lakshmanan Malliga;Sankuru Ravi Chand;Yousef Farhaoui","doi":"10.26599/BDMA.2022.9020036","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020036","url":null,"abstract":"The development of hand gesture recognition systems has gained more attention in recent days, due to its support of modern human-computer interfaces. Moreover, sign language recognition is mainly developed for enabling communication between deaf and dumb people. In conventional works, various image processing techniques like segmentation, optimization, and classification are deployed for hand gesture recognition. Still, it limits the major problems of inefficient handling of large dimensional datasets and requires more time consumption, increased false positives, error rate, and misclassification outputs. Hence, this research work intends to develop an efficient hand gesture image recognition system by using advanced image processing techniques. During image segmentation, skin color detection and morphological operations are performed for accurately segmenting the hand gesture portion. Then, the Heuristic Manta-ray Foraging Optimization (HMFO) technique is employed for optimally selecting the features by computing the best fitness value. Moreover, the reduced dimensionality of features helps to increase the accuracy of classification with a reduced error rate. Finally, an Adaptive Extreme Learning Machine (AELM) based classification technique is employed for predicting the recognition output. During results validation, various evaluation measures have been used to compare the proposed model's performance with other classification approaches.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"321-335"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097660.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67837478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-07, DOI: 10.26599/BDMA.2022.9020035
Said Ziani;Yousef Farhaoui;Mohammed Moutaib
This paper deals with detecting fetal electrocardiogram (FECG) signals from a single-channel abdominal lead. It is based on a Convolutional Neural Network (CNN) combined with advanced mathematical methods, such as Independent Component Analysis (ICA), Singular Value Decomposition (SVD), and the dimensionality reduction technique Nonnegative Matrix Factorization (NMF). Because the fetal heart rate differs markedly from the maternal one, the time-scale representation clearly distinguishes the fetal electrical activity in terms of energy. Furthermore, the various components of the fetal ECG can be disentangled and serve as inputs to the CNN model to optimize the actual FECG signal, denoted FECGr, which is recovered using the SVD-ICA process. The findings demonstrate the efficiency of this approach, which may be deployed in real time.
{"title":"Extraction of Fetal Electrocardiogram by Combining Deep Learning and SVD-ICA-NMF Methods","authors":"Said Ziani;Yousef Farhaoui;Mohammed Moutaib","doi":"10.26599/BDMA.2022.9020035","DOIUrl":"https://doi.org/10.26599/BDMA.2022.9020035","url":null,"abstract":"This paper deals with detecting fetal electrocardiogram FECG signals from single-channel abdominal lead. It is based on the Convolutional Neural Network (CNN) combined with advanced mathematical methods, such as Independent Component Analysis (ICA), Singular Value Decomposition (SVD), and a dimension-reduction technique like Nonnegative Matrix Factorization (NMF). Due to the highly disproportionate frequency of the fetus's heart rate compared to the mother's, the time-scale representation clearly distinguishes the fetal electrical activity in terms of energy. Furthermore, we can disentangle the various components of fetal ECG, which serve as inputs to the CNN model to optimize the actual FECG signal, denoted by FECGr, which is recovered using the SVD-ICA process. The findings demonstrate the efficiency of this innovative approach, which may be deployed in real-time.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"301-310"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097661.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67837479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-07, DOI: 10.26599/BDMA.2023.9020002
Fan Yang;Mao Xu;Wenqiang Lei;Jiancheng Lv
Fluidic Catalytic Cracking (FCC) is a complex petrochemical process affected by many highly nonlinear and interrelated factors. Product yield analysis, flue gas desulfurization prediction, and abnormal condition warning are several key research directions in FCC. This paper reviews the existing research on Artificial Intelligence (AI) algorithms applied to the analysis and optimization of catalytic cracking processes, with a view to supporting follow-up research. Compared with traditional mechanistic mathematical methods, AI methods can effectively address the difficulties of FCC process modeling, such as high dimensionality, nonlinearity, strong correlation, and large delays. AI methods applied in product yield analysis build models from massive data; by fitting the functional relationship between operating variables and products, the excessive simplification of mechanistic models can be avoided, resulting in high model accuracy. AI methods applied in flue gas desulfurization can usually be divided into two stages: modeling and optimization. In the modeling stage, data-driven methods are often used to build the system model or rule base; in the optimization stage, heuristic search or reinforcement learning methods can be applied to find the optimal operating parameters based on the constructed model or rule base. AI methods, including data-driven and knowledge-driven algorithms, are widely used in abnormal condition warning. Knowledge-driven methods have advantages in interpretability and generalization but disadvantages in construction difficulty and prediction recall, while data-driven methods are just the opposite; thus, some studies combine the two methods to obtain better results.
{"title":"Artificial Intelligence Methods Applied to Catalytic Cracking Processes","authors":"Fan Yang;Mao Xu;Wenqiang Lei;Jiancheng Lv","doi":"10.26599/BDMA.2023.9020002","DOIUrl":"https://doi.org/10.26599/BDMA.2023.9020002","url":null,"abstract":"Fluidic Catalytic Cracking (FCC) is a complex petrochemical process affected by many highly non-linear and interrelated factors. Product yield analysis, flue gas desulfurization prediction, and abnormal condition warning are several key research directions in FCC. This paper will sort out the relevant research results of the existing Artificial Intelligence (AI) algorithms applied to the analysis and optimization of catalytic cracking processes, with a view to providing help for the follow-up research. Compared with the traditional mathematical mechanism method, the AI method can effectively solve the difficulties in FCC process modeling, such as high-dimensional, nonlinear, strong correlation, and large delay. AI methods applied in product yield analysis build models based on massive data. By fitting the functional relationship between operating variables and products, the excessive simplification of mechanism model can be avoided, resulting in high model accuracy. AI methods applied in flue gas desulfurization can be usually divided into two stages: modeling and optimization. In the modeling stage, data-driven methods are often used to build the system model or rule base; In the optimization stage, heuristic search or reinforcement learning methods can be applied to find the optimal operating parameters based on the constructed model or rule base. AI methods, including data-driven and knowledge-driven algorithms, are widely used in the abnormal condition warning. Knowledge-driven methods have advantages in interpretability and generalization, but disadvantages in construction difficulty and prediction recall. While the data-driven methods are just the opposite. Thus, some studies combine these two methods to obtain better results.","PeriodicalId":52355,"journal":{"name":"Big Data Mining and Analytics","volume":"6 3","pages":"361-380"},"PeriodicalIF":13.6,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8254253/10097649/10097651.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67838275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}