{"title":"Meme Expressive Classification in Multimodal State with Feature Extraction in Deep Learning","authors":"A. Barveen, S. Geetha, Mohamad Faizal","doi":"10.1109/ICEEICT56924.2023.10157066","DOIUrl":null,"url":null,"abstract":"Memes are a socially interactive way to communicate online. Memes are used by users to communicate with one another on social networking sites and other forums. Memes essentially focus on speech recognition and image macros. While a meme is being created, it focuses on the semiotic type of resources that the internet community interprets with other resources, which facilitates the interaction among the internet and meme creators. Memes recreate based on various approaches, which fall under various acts such as existing speech acts. Based on the expressive face with captioned short texts, even the short text is exaggerated. Every year, meme mimicking applications are created that allow users to use the imitated meme expressions. Memes represent the shared texts of the younger generations on various social platforms. The classifications of sentiment based on the various memetic expressions are the most efficient way to analyse those feelings and emotions. HOG feature extraction allows the images to be segmented into blocks of smaller size by using a single feature vector for dimension, which characterizes the local object appearances to characterize the meme classification. The existence of specific characteristics, including such edges, angles, or patterns, is then analyzed by combining HOG features using multi-feature analysis on patches. Based upon the classification methodology, it classifies the sentiments, which tend to improve the learning process in an efficient manner. By combining a deep learning approach with a recurrent neural network, the extended LSTM-RNN can identify subtle nuances in memes, allowing for more accurate and detailed meme classification. This proposed method effectively evaluates several classification techniques, including CNN and Extended LSTM-RNN for meme image characterization. Through training and validation, Extended LSTM-RNN achieved 0.98% accuracy with better performance than CNN.","PeriodicalId":345324,"journal":{"name":"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEEICT56924.2023.10157066","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Memes are a socially interactive way to communicate online; users employ them to communicate with one another on social networking sites and other forums. Memes essentially revolve around speech recognition and image macros. While a meme is being created, it draws on semiotic resources that the internet community interprets alongside other resources, which facilitates interaction between the internet audience and meme creators. Memes are recreated through various approaches that fall under existing categories such as speech acts. They pair an expressive face with short captioned text, and even that short text is often exaggerated. Every year, meme-mimicking applications are created that allow users to reuse imitated meme expressions. Memes represent the shared texts of the younger generations on various social platforms. Classifying sentiment based on these memetic expressions is an efficient way to analyse the underlying feelings and emotions. HOG feature extraction segments an image into smaller blocks and describes each with a feature vector that characterizes local object appearance for meme classification. The presence of specific characteristics, such as edges, angles, or patterns, is then analysed by combining HOG features through multi-feature analysis on patches. Based on this classification methodology, sentiments are classified, which improves the learning process in an efficient manner. By combining a deep learning approach with a recurrent neural network, the extended LSTM-RNN can identify subtle nuances in memes, allowing more accurate and detailed meme classification. The proposed method evaluates several classification techniques, including CNN and Extended LSTM-RNN, for meme image characterization. Through training and validation, the Extended LSTM-RNN achieved an accuracy of 0.98 (98%), outperforming the CNN.
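As a concrete illustration of the pipeline the abstract describes, the sketch below extracts per-patch HOG descriptors from a meme image and feeds the resulting patch sequence into a small LSTM classifier. This is a minimal sketch under assumed settings (a 4x4 patch grid, standard scikit-image HOG parameters, three hypothetical sentiment classes, and a plain PyTorch LSTM standing in for the paper's Extended LSTM-RNN); it is not the authors' implementation.

```python
# Illustrative sketch, not the authors' pipeline: per-patch HOG features
# followed by an LSTM-based sentiment classifier. Patch grid, HOG settings,
# and the number of classes are assumptions made for this example.
import numpy as np
import torch
import torch.nn as nn
from skimage.color import rgb2gray
from skimage.feature import hog

def patch_hog_sequence(image, grid=(4, 4)):
    """Split an RGB image into a grid of patches and return one HOG
    descriptor per patch, ordered row by row."""
    gray = rgb2gray(image)
    h, w = gray.shape
    ph, pw = h // grid[0], w // grid[1]
    feats = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            patch = gray[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            feats.append(hog(patch, orientations=9,
                             pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2),
                             block_norm="L2-Hys"))
    return np.stack(feats)            # shape: (num_patches, hog_dim)

class HogLstmClassifier(nn.Module):
    """Minimal LSTM classifier over a sequence of HOG patch vectors."""
    def __init__(self, hog_dim, hidden=128, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(hog_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):             # x: (batch, num_patches, hog_dim)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])     # logits per sentiment class

# Usage sketch: a random 128x128 RGB array stands in for a meme image.
image = np.random.rand(128, 128, 3)
seq = torch.tensor(patch_hog_sequence(image), dtype=torch.float32).unsqueeze(0)
model = HogLstmClassifier(hog_dim=seq.shape[-1])
logits = model(seq)                   # shape: (1, num_classes)
```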