Emotion recognition from speech has become an active research subject in recent years, because speech is a rapid and natural way for humans to communicate. Numerous studies have addressed this topic. Drawing on these models, this paper develops an accurate method for recognizing emotions from the speech signal. A Deep Convolutional Neural Network (DCNN) model is proposed to study the multimodal fusion of speech features. Moreover, a hybrid Genetic Algorithm (GA)-Grey Wolf Optimization (GWO) algorithm, combining the features of both GA and GWO, is presented for training the network. Finally, the developed recognition model is validated and compared with existing techniques on diverse performance measures such as Accuracy, Sensitivity, Precision, Specificity, False Positive Rate (FPR), False Discovery Rate (FDR), False Negative Rate (FNR), F1-Score, and Negative Predictive Value (NPV).
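As a rough illustration of how GA operators can be interleaved with the GWO position update when searching for network weights, here is a minimal sketch on a toy objective. The 50/50 mixing rule, the operator parameters, and the sphere objective are assumptions standing in for the paper's actual training setup, not its published method.

```python
import random

def gwo_step(wolf, alpha, beta, delta, a):
    """Standard GWO move: average the pulls toward the three best wolves."""
    new = []
    for d in range(len(wolf)):
        x = 0.0
        for leader in (alpha, beta, delta):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a      # shrinks as 'a' decays -> more exploitation
            C = 2 * r2
            x += leader[d] - A * abs(C * leader[d] - wolf[d])
        new.append(x / 3.0)
    return new

def ga_step(p1, p2, rate=0.2):
    """GA operators: uniform crossover followed by Gaussian mutation."""
    child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in child]

def hybrid_ga_gwo(fitness, dim=4, pop_size=20, iters=100):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for t in range(iters):
        pop.sort(key=fitness)
        alpha, beta, delta = pop[0], pop[1], pop[2]
        a = 2 - 2 * t / iters                  # linearly decays from 2 to 0
        new_pop = pop[:3]                      # elitism: keep the leaders
        for wolf in pop[3:]:
            if random.random() < 0.5:          # half the moves follow GWO,
                new_pop.append(gwo_step(wolf, alpha, beta, delta, a))
            else:                              # the rest breed from top wolves
                new_pop.append(ga_step(alpha, random.choice(pop[:5])))
        pop = new_pop
    return min(pop, key=fitness)

random.seed(1)
best = hybrid_ga_gwo(lambda w: sum(x * x for x in w))  # minimise a sphere
```

In an actual training scenario the candidate vectors would encode the DCNN weights and the fitness would be the classification error on a validation set.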
{"title":"Emotion Recognition from Speech Signals Using DCNN with Hybrid GA-GWO Algorithm","authors":"R. V. Darekar, A. Dhande","doi":"10.46253/j.mr.v2i4.a2","DOIUrl":"https://doi.org/10.46253/j.mr.v2i4.a2","url":null,"abstract":": In recent days, from the speech signal the recognition of emotion is considered as an extensive advanced investigation subject because the speech signal is considered as the rapid and natural method to communicate with humans. Numerous examinations have been progressed related to this topic. This paper develops the emotions recognition from the speech signal in an accurate way, with the knowledge of numerous examined models. Therefore, to study the multimodal fusion of speech features, a Deep Convolutional Neural Network model is proposed. Moreover, the hybrid Genetic Algorithm (GA)-Grey Wolf Optimization (GWO) algorithm is presented that is the combination of both the GA and GWO technique features towards training the network. Finally, the developed recognition model is verified and compared with the existing techniques in correlation with diverse performance measures such as Accuracy, Sensitivity, Precision, Specificity, False Positive Rate (FPR), False Discovery Rate (FDR), False Negative Rate (FNR), F1Score, Negative Predictive Value (NPV)","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125195172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text mining generally refers to the process of extracting high-quality information from text. It is widely used in applications such as text categorization, text clustering, and text classification. Text clustering, which groups text documents, has recently become a challenging task: a few irrelevant terms and the large dimensionality of the feature space reduce clustering accuracy. In this work, semantic word processing and an Enhanced Cat Swarm Optimization (CSO) algorithm are presented for automatic text clustering. First, the input documents pass through a preprocessing step that yields the useful keywords for clustering and feature extraction. Each resulting keyword is then looked up in the WordNet ontology to discover its hyponyms and synonyms. Next, the frequency of every keyword is determined and used to build the text feature library. Since this library has a large dimension, entropy is used to select the most significant features. Finally, the proposed approach assigns class labels to generate the clusters of text documents. The experimental outcomes and performance are examined and compared with conventional algorithms such as ABC, GA, and PSO.
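One way to realize the entropy-based feature selection described above is to score each term by the Shannon entropy of its distribution across documents and keep the most concentrated (discriminative) terms. This is a minimal sketch under that assumption; the abstract does not specify the exact entropy formulation used.

```python
import math

def term_entropy(term, docs):
    """Shannon entropy of a term's counts across documents.
    Low entropy -> the term concentrates in few documents (discriminative);
    high entropy -> the term is spread evenly (poor cluster feature)."""
    counts = [doc.count(term) for doc in docs]
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def select_features(vocab, docs, k):
    """Keep the k lowest-entropy terms from the feature library."""
    return sorted(vocab, key=lambda t: term_entropy(t, docs))[:k]

# Tiny toy corpus: "pet" occurs everywhere, the other terms are concentrated.
docs = [["cat", "cat", "pet"], ["dog", "pet"], ["stock", "market", "pet"]]
feats = select_features(["cat", "dog", "stock", "pet"], docs, 2)
```

Here "pet" gets entropy log2(3) because it is spread over all three documents, so it is dropped in favour of the concentrated terms.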
{"title":"A Semantic Word Processing Using Enhanced Cat Swarm Optimization Algorithm for Automatic Text Clustering","authors":"","doi":"10.46253/j.mr.v2i4.a3","DOIUrl":"https://doi.org/10.46253/j.mr.v2i4.a3","url":null,"abstract":": Generally, Text mining indicates the process of extracting maximum-quality information from the text. Moreover, it is mostly exploited in applications such as text categorization, text clustering, and text classification and so forth. In recent times, the text clustering is considered as the facilitating and challenging task exploited to cluster the text document. Because of the few inappropriate terms and large dimension, accuracy of text clustering is reduced. In this work, the semantic word processing and Enhanced CSO algorithm are presented for automatic text clustering. At first, input documents are stated as input to the preprocessing step that provides the useful keyword for clustering and feature extraction. After that, the ensuing keyword is applied to wordnet ontology to discover the hyponyms and synonyms of every keyword. Then, the frequency is determined for every keyword used to model the text feature library. Since it comprises the larger dimension, the entropy is exploited to choose the most significant feature. Hence, the proposed approach is exploited to assign the class labels to generate different clusters of text documents. 
The experimentation outcomes and performance is examined and compared with conventional algorithms such as ABC, GA, and PSO.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123810564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of information technology, information security is a central concern in interactive environments, where messages transmitted between sender and receiver are otherwise unprotected. Image steganography addresses this by securing concealed communication and protecting the information: it hides a secret message inside a cover image so that the message is perceptible only to the transmitter and the receiver. Hence, this paper presents an image steganography algorithm that exploits sparse representation, together with an Enhanced Whale Optimization Algorithm (WOA) for effectively selecting the pixels in which to embed a secret audio signal. The Enhanced WOA-based pixel selection exploits a fitness function built on a cost function that evaluates entropy, edge strength, and pixel intensity. Experiments compare the proposed algorithm with conventional algorithms in terms of PSNR and MSE, establishing the Enhanced WOA as an effective algorithm. In related work addressing these issues, a (k, n)-threshold partially reversible Absolute Moment Block Truncation Coding (AMBTC) scheme based on the Secret Image Sharing (SIS) model with authentication and steganography was developed. Using polynomial-based SIS over GF(2^8), a secret image was partitioned into n noise-like shares. These were hidden into AMBTC cover images with parity bits using the developed embedding methods, and n meaningful stego images were modeled to manage the shares efficiently. Authentication was used so that the integrity of each stego image could be confirmed, and a sufficient number of stego images can completely reconstruct the secret.
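The pixel-selection fitness described above combines entropy, edge strength, and intensity into a cost. The paper does not give the exact weighting, so the sketch below uses illustrative, assumed weights; it only shows the general shape of such a cost function, where textured (high-entropy, high-edge) patches are preferred embedding sites.

```python
import math

def patch_cost(pixels):
    """Embedding cost of a pixel neighbourhood (lower = better hiding spot).
    Textured regions perceptually mask changes, so high entropy and strong
    edges reduce the cost. Weights 1.0 / 0.01 / 0.001 are illustrative."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    n = len(pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    edge = max(pixels) - min(pixels)        # crude edge-strength proxy
    intensity = sum(pixels) / n
    return 1.0 / (1.0 + entropy + 0.01 * edge + 0.001 * intensity)

flat = [128] * 9                                  # uniform patch
busy = [10, 200, 55, 240, 3, 180, 90, 15, 220]    # textured patch
```

An optimizer such as the Enhanced WOA would then search for the pixel positions minimizing this cost before embedding.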
{"title":"Enhanced Whale Optimization Algorithm and Wavelet Transform for Image Steganography","authors":"","doi":"10.46253/j.mr.v2i3.a3","DOIUrl":"https://doi.org/10.46253/j.mr.v2i3.a3","url":null,"abstract":": In the interactive environment, information security is considered as the main issue with the development of information technology. Here, there is no protection for the messages transmitted to and from the receiver. A method called image steganography is used, which assures security to the concealed communication and protection of the information. In some of the receiver images, image steganography conceals the secret message and transmits the secret message so that the message is noticeable only to the transmitter and the receiver. Hence, this paper presents an algorithm for image steganography by exploiting sparse representation, and a method called Enhanced Whale Optimization Algorithm (WOA) in order to effectual selection of the pixels in order to embed the secret audio signal in the image. Enhanced WOA based pixel chosen process exploits a fitness function that is on the basis of the cost function. In order to evaluate the fitness, cost function computes the entropy, edge, and pixel intensity. Experimentation has been performed and a comparison of the proposed algorithm with the conventional algorithms regarding the PSNR and MSE. Moreover, it decides the proposed Enhanced WOA, as an effectual algorithm. to resolve the aforesaid issues, a (k, n) threshold partial reversible Absolute Moment Block Truncation Coding (AMBTC) on the basis of the SIS model with authentication and steganography was developed. Using the polynomial on the basis of the SIS in GF (28), a secret image was partition into n noise-similar to shares. They were hidden into the AMBTC cover image with parity bits using the developed embedding methods, and n meaningful stego images were modeled in order to competently deal with the shares. 
Authentication was used as a result that the reliability of stego image was confirmed. Adequate stego images can completely restructure the secret.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132722172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data publishing is a center of attention in current technology and receives great interest from experts and researchers. It chiefly faces security problems: when a trusted organization releases data to a third party, personal information must not be revealed. Hence, to preserve data privacy, this work presents a method for privacy-preserving collaborative data publishing that exploits a hybrid Weed and Particle Swarm Optimization algorithm (W-PSO) together with a C-mixture parameter. The C-mixture parameter improves data privacy when the data does not satisfy privacy constraints such as l-diversity, m-privacy, and k-anonymity. The search is driven toward the least fitness value, which is based on the minimum generalized information loss and the minimum average equivalence class size; the minimum fitness value assures the utmost utility at the required privacy level. Simulation on the adult dataset shows that the proposed method is superior to the conventional algorithms, attaining minimum values of the generalized information loss and the average equivalence class metric.
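Of the privacy constraints named above, k-anonymity is the simplest to state concretely: every combination of quasi-identifier values must be shared by at least k records. A minimal check, independent of the paper's W-PSO machinery:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True iff every equivalence class (records sharing the same
    quasi-identifier values) contains at least k records."""
    classes = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(size >= k for size in classes.values())

# Toy generalized table: ages bucketed, ZIP codes masked.
table = [
    {"age": "30-40", "zip": "411**", "disease": "flu"},
    {"age": "30-40", "zip": "411**", "disease": "cold"},
    {"age": "20-30", "zip": "560**", "disease": "flu"},
]
```

The full table fails 2-anonymity because the third record forms an equivalence class of size one; generalizing further (losing information) is what the C-mixture trade-off between utility and privacy manages.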
{"title":"Hybrid Weed-Particle Swarm Optimization Algorithm and CMixture for Data Publishing","authors":"Yogesh R. Kulkarni","doi":"10.46253/j.mr.v2i3.a4","DOIUrl":"https://doi.org/10.46253/j.mr.v2i3.a4","url":null,"abstract":": From the experts and researchers, data publishing is the center of attention in the latest technology, which receives great interest. The idea of data publishing faces a large number of security problems chiefly, while any trusted organization presents data to the third party, personal information requires not to be revealed. Hence, to keep the data privacy, this work presents a method for privacy preserved collaborative data publishing by exploiting the Weed and Particle Swarm Optimization algorithm (W-PSO) for that a C-mixture parameter is utilized. The parameter of C-mixture improves data privacy if the data does not assure privacy constraints, like l -diversity, m -privacy and k -anonymity. The least fitness value is controlled which is based upon the least value of the widespread information loss and the least value of the average equivalence class size. The minimum value of the fitness assures the utmost utility and the least privacy. Simulation is performed by exploiting the adult dataset and the proposed method is superior to the conventional algorithms regarding the widespread information loss and the average equivalence class metric and attained minimum values.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122394010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Insufficient land cover data mainly impairs assessment of the consequences and effects of land cover change. Although satellite imaging and remote sensing are used for mapping at various spatial and temporal scales, their full potential has not yet been realized. Therefore, this paper employs a new land cover classification technique based on an optimal deep learning architecture. It comprises three major stages: segmentation, feature extraction, and classification. First, the land cover image is segmented and passed to the feature extraction process, where vegetation indices (VI) such as the Simple Ratio (SR), the Kauth-Thomas Tasseled Cap transform, and the NDVI are extracted. These features are then classified by exploiting both a CNN and an NN; in both classifiers, the number of hidden neurons is optimized by an Enhanced Crow Search Algorithm. Optimizing the hidden neurons so that classification accuracy is maximized is considered the main contribution.
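Two of the vegetation indices named above have standard per-pixel definitions: NDVI = (NIR - Red) / (NIR + Red) and SR = NIR / Red. A minimal sketch of both, on scalar reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.
    Values near +1 indicate dense vegetation; near 0, bare soil or built-up."""
    if nir + red == 0:
        return 0.0                      # guard against empty reflectance
    return (nir - red) / (nir + red)

def simple_ratio(nir, red):
    """SR vegetation index: ratio of near-infrared to red reflectance."""
    return nir / red if red else float("inf")
```

In the classification pipeline these indices would be computed band-wise over the whole segmented image, producing one feature map per index.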
{"title":"Enhanced Crow Search Optimization Algorithm and Hybrid NN-CNN Classifiers for Classification of Land Cover Images","authors":"M. Gangappa","doi":"10.46253/j.mr.v2i3.a2","DOIUrl":"https://doi.org/10.46253/j.mr.v2i3.a2","url":null,"abstract":"The insufficient land cover data contain mainly imperfect the consequence and effects of land cover. Although satellite imaging or remote sensing is used in mapping various spatial and temporal scales, however, its complete endeavor was not hitherto recognized. Therefore, this paper aims to employ a new land cover classification technique by optimal deep learning architecture. Moreover, it comprises three major stages such as segmentation, feature classification, and extraction. At first, the land cover image is segmented and given to the feature extraction process. For feature extraction, VI, like SR, Kauth–Thomas Tasseled Cap and NDVI, are extracted. Moreover, these features are classified by exploiting CNN and NN in both the classifiers, by Enhanced Crow Search Algorithm the number of hidden neurons is optimized. The optimization of hidden neurons is performed so that the classification accuracy must be maximum that is considered as the main contribution.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115905560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tuberculosis (TB) is a highly infectious disease and a significant medical issue throughout the globe. Accurate recognition of TB is the main concern faced by the majority of conventional algorithms. Hence, this paper addresses these problems and presents a successful method for recognizing TB using a modular neural network. A color space transformation converts the RGB image to LUV space, and adaptive thresholding then segments the image. Several features, such as density, coverage, color histogram, length, area, and texture features, are extracted to enable effective classification. After feature extraction, the feature dimensionality is reduced by exploiting Principal Component Analysis (PCA). For classification, the extracted features are fed to a Whale Optimization Algorithm-based Convolutional Neural Network (WOA-CNN). Subsequently, image-level features, such as bacilli area, bacilli count, scattering coefficients, and skeleton features, are used for severity detection by the proposed Enhanced Whale Optimization Algorithm-based Modular Neural Network (EWOA-MNN). In conclusion, the infection level is resolved using density, entropy, and detection percentage. The proposed method is modeled by enhancing the WOA method.
{"title":"Enhanced WOA and Modular Neural Network for Severity Analysis of Tuberculosis","authors":"S. ChithraR","doi":"10.46253/j.mr.v2i3.a5","DOIUrl":"https://doi.org/10.46253/j.mr.v2i3.a5","url":null,"abstract":"Generally, Tuberculosis (TB) is an extremely infectious disease and it is a significant medical issue everywhere throughout the globe. The exact recognition of TB is the main concern faced by the majority of conventional algorithms. Hence, this paper addresses these problems and presented a successful method for recognizing TB utilizing the modular neural network. Moreover, for transforming the RGB image to LUV space, the color space transformation is utilized. At that point, adaptive thresholding is done for image segmentation and several features, such as density, coverage, color histogram, length, area, and texture features, are extracted to enable effectual classification. Subsequent to the feature extraction, the size of the features is decreased by exploiting Principal Component Analysis (PCA). For the classification, the extracted features are exposed to Whale Optimization Algorithm-based Convolutional Neural Network (WOA-CNN). Subsequently, the image level features, such as bacilli area, bacilli count, scattering coefficients and skeleton features are considered to do severity detection utilizing proposed Enhanced Whale Optimization Algorithm-based Modular Neural Network (EWOA-MNN). In conclusion, the inflection level is resolved to utilize density, entropy, and detection percentage. 
The proposed method is modeled by enhancing the WOA method.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115286023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data available in great quantity raises the difficulty of managing it, which affects effective decision-making. Data clustering therefore achieves notable significance in knowledge extraction, and a well-organized clustering algorithm endorses effective decision making. For that reason, this paper presents a data clustering algorithm based on the DIGWO method, which determines the optimal centroids for the clustering procedure. The developed DIGWO technique combines the computation steps of the Dragonfly Algorithm (DA) with the Improved Grey Wolf Optimization (IGWO) under a newly formulated fitness model. The method seeks the least fitness value to position the optimal centroids, with the fitness based on three criteria: intra-cluster distance, inter-cluster distance, and cluster density. The centroids corresponding to the minimum fitness value are exploited to cluster the data. Simulation on three datasets and a comparative evaluation show that the developed method performs better than conventional algorithms such as Grey Wolf Optimization (GWO), Dragonfly, and Particle Swarm Optimization (PSO).
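A centroid fitness of the kind described, rewarding compact clusters (small intra-cluster distance) that are well separated (large inter-cluster distance), can be sketched as below. The exact combination used in the paper, including its cluster-density term, is not specified in the abstract, so this ratio form is an assumption.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_fitness(points, centroids, labels):
    """Fitness to minimise: mean intra-cluster distance divided by the
    minimum inter-centroid distance. Compact, well-separated clusterings
    score low. (The paper's density term is omitted here.)"""
    intra = sum(dist(p, centroids[l]) for p, l in zip(points, labels))
    intra /= len(points)
    inter = min(dist(c1, c2)
                for i, c1 in enumerate(centroids)
                for c2 in centroids[i + 1:])
    return intra / inter

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
good = cluster_fitness(pts, [(0, 0.5), (10, 10.5)], [0, 0, 1, 1])
bad = cluster_fitness(pts, [(5, 5), (5, 6)], [0, 0, 1, 1])
```

An optimizer like DIGWO would search centroid coordinates minimizing this value; the `good` centroids sit at the two natural groups and score far lower than the `bad` ones.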
{"title":"DIGWO: Hybridization of Dragonfly Algorithm with Improved Grey Wolf Optimization Algorithm for Data Clustering","authors":"A. N. Jadhav","doi":"10.46253/j.mr.v2i3.a1","DOIUrl":"https://doi.org/10.46253/j.mr.v2i3.a1","url":null,"abstract":": Data present in great quantity raises the difficulty of managing them that affects the effectual decision-making procedure. Therefore, data clustering achieves notable significance in knowledge extraction and a well-organized clustering algorithm endorses the effectual decision making. For that reason, an algorithm for data clustering by exploiting the DIGWO method is presented in this paper, which decides the optimal centroid to perform the clustering procedure. The developed DIGWO technique exploits the calculation steps of the Dragonfly Algorithm (DA) with the incorporation of the Improved Grey Wolf Optimization (IGWO) with a novel formulated fitness model. Moreover, the proposed method exploits the least fitness measure to position the optimal centroid and the fitness measure based upon three constraints, such as intra-cluster distance, intercluster distance, and cluster density. The optimal centroid ensuing to the minimum value of the fitness is exploited for clustering the data. 
Simulation is performed by exploiting three datasets and the comparative evaluation is performed that shows that the performance of the developed method is better than the conventional algorithms such as Grey Wolf Optimization (GWO), Dragonfly and Particle Swarm Optimization (PSO).","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133333744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper intends to develop a novel breast cancer detection model for classifying normal, benign, and malignant patterns in a mammogram. The diagnosis process is based on three stages: pre-processing, feature extraction, and classification. Initially, the Discrete Fourier Transform (DFT) is applied in the pre-processing stage. After pre-processing, Gray Level Co-occurrence Matrix (GLCM) features are extracted from the image. The GLCM-based features are then classified using a Support Vector Machine (SVM) to categorize the mammogram. Further, the weights of the SVM are optimized using the Grey Wolf Optimization (GWO) model to improve the classification accuracy. This classification mechanism is used to diagnose the benign and malignant patterns in a mammogram. Moreover, the proposed scheme is evaluated against traditional models such as GA, PSO, and FF, and the outcomes are verified.
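The GLCM underlying the texture features counts how often pairs of grey levels co-occur at a fixed pixel offset. A minimal sketch for the horizontal offset (0, 1), plus the standard contrast feature derived from it (the paper does not say which offsets or which GLCM features it uses):

```python
def glcm(image, levels):
    """Grey Level Co-occurrence Matrix for horizontally adjacent pixels
    (offset (0, 1)), normalised to co-occurrence probabilities."""
    m = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
            pairs += 1
    return [[v / pairs for v in row] for row in m]

def glcm_contrast(m):
    """Contrast feature: sum over (i - j)^2 * P(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(m)
               for j, p in enumerate(row))

smooth = [[1, 1, 1], [1, 1, 1]]   # identical neighbours -> zero contrast
rough = [[0, 3, 0], [3, 0, 3]]    # alternating extremes -> high contrast
```

Features like contrast, energy, and homogeneity computed this way would form the vector handed to the GWO-tuned SVM.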
{"title":"Breast Cancer Detection by Optimal Classification using GWO Algorithm","authors":"V. Vinolin","doi":"10.46253/j.mr.v2i2.a2","DOIUrl":"https://doi.org/10.46253/j.mr.v2i2.a2","url":null,"abstract":"This paper intends to develop a novel breast cancer detection model for classifying the normal, benign or malignant patterns in a mammogram. The diagnosis process is done based on three stages such as pre-processing, feature extraction and classification. Initially, the Discrete Fourier Transform (DFT) is applied in the processing stage. Next, to pre-processing, the Gray Level Co-Occurrence Matrix (GLCM) features of the image are extracted. The GLCM-based features are then classified using Support Vector Machine (SVM) for classifying the mammogram. Further, the weights of the SVM are optimized using the Grey Wolf optimization (GWO) model for improving the classification accuracy. This classification mechanism is used to diagnose the benign and malignant patterns in a mammogram. Moreover, the proposed scheme is evaluated over traditional models such as GA, PSO and FF as well as the outcomes is verified.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"304 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125528356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This framework introduces a new automatic image forgery detection approach that involves four main stages: (i) illumination map computation, (ii) face detection, (iii) feature extraction, and (iv) classification. Initially, the input image is processed by illumination map estimation, which involves two computations: gray-world estimates and inverse-intensity chromaticity. Subsequent to this, the Viola-Jones algorithm is employed in the second phase to detect the face region clearly. The detected facial image is then subjected to feature extraction, for which the Grey Level Co-occurrence Matrix (GLCM) is exploited to extract the facial features. After this, the extracted facial features are classified by a Neural Network (NN) classifier. On the whole, this paper is mainly concerned with the optimization concept, in which the weights of the NN are optimally selected using the renowned Whale Optimization Algorithm (WOA).
{"title":"Face Image Forgery Detection by Weight Optimized Neural Network Model","authors":"R. Cristin","doi":"10.46253/j.mr.v2i2.a3","DOIUrl":"https://doi.org/10.46253/j.mr.v2i2.a3","url":null,"abstract":"This framework introduces a new automatic image forgery detection approach that involves four main stages like (i) Illumination map computation, (ii) Face detection, (iii) Feature extraction, and (iv) Classification. Initially, the processing of input image is exploited by means of illumination map estimation, which acquires two computation processes called Gray world estimates and Inverse-Intensity chromaticity. Subsequent to this, the Viola-Jones algorithm is employed for the face detection process, which is the second phase, in order to detect the face image clearly. Once after the detection process, the obtained facial image is subjected to feature extraction. For this, Grey Level Co-occurrence Matrix (GLCM) is exploited that extract the facial features from the image. After this, the classification process is carried out for the extracted facial features by employing the Neural Network (NN) classifier. On the whole, this paper mainly concerned over the optimization concept, in which the weight of the NN is optimally selected by using the renowned optimization algorithm named Whale Optimization Algorithm (WOA). 
To the end, the performance of the implemented model is compared over the other classical models like k-nearest neighbor (kNN), NN and Support Vector Machine (SVM) regarding certain measures like Accuracy, Sensitivity, and Specificity.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126314600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The electroencephalogram (EEG) is a recording of the electrical activity of the brain; the waveforms recorded over the brain regions reflect cortical activity. Interference from other bio-signals mixed into the EEG is known as artifacts, examples being the electrooculogram (EOG), electrocardiogram (ECG), and electromyogram (EMG). Removing these artifacts from the EEG signal is very challenging in medical practice. This paper presents a Dragonfly Levenberg-Marquardt (DrLM) optimization-based Neural Network (NN) to remove the artifacts from EEG. Initially, the EEG signal is passed through an adaptive filter whose optimal weights are determined by the Dragonfly Algorithm (DA) and Levenberg-Marquardt (LM); these two approaches are hybridized and supplied to the NN to identify the weights. At last, the artifacts are removed from the EEG signal. The performance of DrLM-NN is evaluated in terms of SNR, MSE, and RMSE. The proposed artifact removal method achieves a maximum SNR of 45.67, a minimal MSE of 2982, and a minimal RMSE of 1.11, indicating its superiority.
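The three evaluation metrics named above have standard definitions that can be stated concisely; this sketch assumes a reference clean signal is available for comparison, as in a simulation study.

```python
import math

def mse(clean, denoised):
    """Mean squared error between the reference and the filtered signal."""
    return sum((a - b) ** 2 for a, b in zip(clean, denoised)) / len(clean)

def rmse(clean, denoised):
    """Root mean squared error: square root of the MSE."""
    return math.sqrt(mse(clean, denoised))

def snr_db(clean, denoised):
    """Signal-to-noise ratio in dB: signal power over residual error power."""
    signal = sum(a * a for a in clean)
    noise = sum((a - b) ** 2 for a, b in zip(clean, denoised))
    return 10 * math.log10(signal / noise) if noise else float("inf")

clean = [1.0, 2.0, 3.0, 4.0]     # toy reference EEG samples
est = [1.1, 1.9, 3.0, 4.2]       # toy filter output
```

Higher SNR and lower MSE/RMSE of the filter output against the reference indicate better artifact removal, which is the sense in which the reported 45.67 dB SNR is a maximum.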
{"title":"Artifacts Removal using Dragonfly Levenberg Marquardt-Based Learning Algorithm from Electroencephalogram Signal","authors":"Quazi M. H Swami","doi":"10.46253/j.mr.v2i2.a1","DOIUrl":"https://doi.org/10.46253/j.mr.v2i2.a1","url":null,"abstract":": Electroencephalogram (EEG) is the recording of the electrical activity of the brain. The waveforms that are recorded from the brain regions show the cortical activity. The integration of EEG signals with other bio-signals is known as artifacts. Some of the artifacts are Electrooculogram (EOG), Electrocardiogram (ECG), and Electromyogram (EMG). The artifacts removed from the EEG signal are very challenging in medical. This paper presents the Dragonfly Levenberg Marquardt (DrLM) optimization-based Neural Network (NN) to remove the artifacts from EEG. Initially, the EEG signal is subjected to adaptive filter for determining the optimal weights based on Dragonfly Algorithm (DA) and LM. These two approaches are hybridized and given to the NN to identify the weights. At last, the artifacts are removed from the EEG signal. The performance of DrLM-NN is evaluated in terms of SNR, MSE, and RMSE. The proposed artifact removal method achieves the maximum SNR of 45.67, minimal MSE of 2982, and minimal RMSE of 1.11 that indicates its superiority.","PeriodicalId":167187,"journal":{"name":"Multimedia Research","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123349259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}