Publish Subscribe System Security Requirement: A Case Study for V2V Communication
Pub Date: 2024-08-14 | DOI: 10.1109/OJCS.2024.3442921
Hemant Gupta;Amiya Nayak
The Internet of Things (IoT) links the physical and digital domains, with wireless sensor networks (WSNs) playing a vital role in this process. The market offers an abundance of IoT devices, a substantial proportion of which are designed for consumer use and have restricted power and memory capabilities. Our analysis found very little existing research on defining the security requirements of the IoT ecosystem. A crucial first step in designing a secure product is to meticulously scrutinize and record its precise security requirements. This paper focuses on defining security requirements for Vehicle-to-Vehicle (V2V) communication systems. The requirements are specified using the Message Queuing Telemetry Transport for Sensor Networks (MQTT-SN) communication protocol architecture, which is specifically tailored for sensor networks. Modified versions of the Security Requirements Engineering Process (SREP) and Security Quality Requirements Engineering (SQUARE) methodologies are used for the case study. Securing the communication between the ClientApp and the road-side infrastructure is our main priority.
{"title":"Publish Subscribe System Security Requirement: A Case Study for V2V Communication","authors":"Hemant Gupta;Amiya Nayak","doi":"10.1109/OJCS.2024.3442921","DOIUrl":"https://doi.org/10.1109/OJCS.2024.3442921","url":null,"abstract":"The Internet of Things (IoT) enables the linkage between the physical and digital domains, with wireless sensor networks (WSNs) playing a vital role in this procedure. The market is saturated with an abundance of IoT devices, a substantial proportion of which are designed for consumer use and have restricted power and memory capabilities. Our analysis found that there is very little research done on defining the security requirements of the IoT ecosystem. A crucial first step in the design process of a secure product entails meticulously scrutinizing and recording the precise security requirements. This paper focuses on defining security requirements for Vehicle-to-Vehicle (V2V) communication systems. The requirements are specified utilizing the Message Queuing Telemetry Transport for Sensor Network (MQTT-SN) communication protocol architecture, specifically tailored for use in sensor networks. The modified Security Requirement Engineering Process (SREP) and Security Quality Requirement Engineering (SQUARE) methodologies have been used in this paper for the case study. The security of the communication between the ClientApp and the road-side infrastructure is our main priority.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"5 ","pages":"389-405"},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10636299","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142084516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
InFER++: Real-World Indian Facial Expression Dataset
Pub Date: 2024-08-14 | DOI: 10.1109/OJCS.2024.3443511
Syed Sameen Ahmad Rizvi;Aryan Seth;Jagat Sesh Challa;Pratik Narang
Detecting facial expressions is a challenging task in computer vision. Several datasets and algorithms have been proposed over the past two decades; however, their performance degrades when deployed in real-world, in-the-wild scenarios. This is because the training data does not fully represent socio-cultural and ethnic diversity; the majority of datasets consist of American and Caucasian populations. In contrast, for a diverse and heterogeneous population such as the Indian subcontinent, a sufficiently large dataset representing all ethnic groups is even more critical. To address this, we present InFER++, an India-specific, multi-ethnic, real-world, in-the-wild facial expression dataset covering the seven basic expressions. To the best of our knowledge, this is the largest India-specific facial expression dataset. Our cross-dataset analysis of RAF-DB versus InFER++ shows that models trained on RAF-DB do not generalize to ethnic datasets like InFER++, because facial expressions vary with ethnic and socio-cultural factors. We also present LiteXpressionNet, a lightweight deep facial expression network that outperforms many existing lightweight models with considerably fewer FLOPs and parameters. The proposed model is inspired by the MobileViTv2 architecture and utilizes GhostNetv2 blocks to increase parametrization while reducing latency and FLOP requirements. The model is trained with a novel objective function that combines early learning regularization and symmetric cross-entropy loss to mitigate the human uncertainty and annotation bias present in most real-world facial expression datasets.
{"title":"InFER++: Real-World Indian Facial Expression Dataset","authors":"Syed Sameen Ahmad Rizvi;Aryan Seth;Jagat Sesh Challa;Pratik Narang","doi":"10.1109/OJCS.2024.3443511","DOIUrl":"https://doi.org/10.1109/OJCS.2024.3443511","url":null,"abstract":"Detecting facial expressions is a challenging task in the field of computer vision. Several datasets and algorithms have been proposed over the past two decades; however, deploying them in real-world, in-the-wild scenarios hampers the overall performance. This is because the training data does not completely represent socio-cultural and ethnic diversity; the majority of the datasets consist of American and Caucasian populations. On the contrary, in a diverse and heterogeneous population distribution like the Indian subcontinent, the need for a significantly large enough dataset representing all the ethnic groups is even more critical. To address this, we present InFER++, an India-specific, multi-ethnic, real-world, in-the-wild facial expression dataset consisting of seven basic expressions. To the best of our knowledge, this is the largest India-specific facial expression dataset. Our cross-dataset analysis of RAF-DB vs InFER++ shows that models trained on RAF-DB were not generalizable to ethnic datasets like InFER++. This is because the facial expressions change with respect to ethnic and socio-cultural factors. We also present LiteXpressionNet, a lightweight deep facial expression network that outperforms many existing lightweight models with considerably fewer FLOPs and parameters. The proposed model is inspired by MobileViTv2 architecture, which utilizes GhostNetv2 blocks to increase parametrization while reducing latency and FLOP requirements. The model is trained with a novel objective function that combines early learning regularization and symmetric cross-entropy loss to mitigate human uncertainties and annotation bias in most real-world facial expression datasets.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"5 ","pages":"406-417"},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10636346","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142121603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Musical Genre Classification Using Advanced Audio Analysis and Deep Learning Techniques
Pub Date: 2024-07-19 | DOI: 10.1109/OJCS.2024.3431229
Mumtahina Ahmed;Uland Rozario;Md Mohshin Kabir;Zeyar Aung;Jungpil Shin;M. F. Mridha
Classifying music genres has become a significant problem in the era of seamless music streaming platforms and ever-growing content creation. Accurate music genre classification is a fundamental task with applications in music recommendation, content organization, and understanding musical trends. This study presents a comprehensive approach to music genre classification using deep learning and advanced audio analysis techniques, evaluated primarily on the GTZAN dataset. It examines the challenge of music genre categorization using Convolutional Neural Network (CNN), Feedforward Neural Network (FNN), Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Long Short-Term Memory (LSTM) models. After feature extraction from pre-processed audio data, the models' outputs are cross-validated and their performance is evaluated. The modified CNN model outperforms conventional NN models thanks to its capacity to capture complex spectrogram patterns. These results highlight how deep learning algorithms may improve music genre classification systems, with implications for various music-related applications and user interfaces. The proposed approach achieves 92.7% accuracy on the GTZAN dataset and 91.6% on the ISMIR2004 Ballroom dataset.
{"title":"Musical Genre Classification Using Advanced Audio Analysis and Deep Learning Techniques","authors":"Mumtahina Ahmed;Uland Rozario;Md Mohshin Kabir;Zeyar Aung;Jungpil Shin;M. F. Mridha","doi":"10.1109/OJCS.2024.3431229","DOIUrl":"10.1109/OJCS.2024.3431229","url":null,"abstract":"Classifying music genres has been a significant problem in the decade of seamless music streaming platforms and countless content creations. An accurate music genre classification is a fundamental task with applications in music recommendation, content organization, and understanding musical trends. This study presents a comprehensive approach to music genre classification using deep learning and advanced audio analysis techniques. In this study, a deep learning method was used to tackle the task of music genre classification. For this study, the GTZAN dataset was chosen for music genre classification. This study examines the challenge of music genre categorization using Convolutional Neural Networks (CNN), Feedforward Neural Networks (FNN), Support Vector Machine (SVM), k-nearest Neighbors (kNN), and Long Short-term Memory (LSTM) models on the dataset. This study precisely cross-validates the model's output following feature extraction from pre-processed audio data and then evaluates its performance. The modified CNN model performs better than conventional NN models by using its capacity to capture complex spectrogram patterns. These results highlight how deep learning algorithms may improve systems for categorizing music genres, with implications for various music-related applications and user interfaces. Up to this point, 92.7% of the GTZAN dataset's correctness has been achieved on the GTZAN dataset and 91.6% on the ISMIR2004 Ballroom dataset.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"5 ","pages":"457-467"},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10605044","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141741079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.1109/OJCS.2024.3428970
Suhaib Chughtai;Zakaria Senousy;Ahmed Mahany;Nouh Sabri Elmitwally;Khalid N. Ismail;Mohamed Medhat Gaber;Mohammed M. Abdelsamea
Colorectal cancer (CRC) is the second leading cause of cancer-related mortality. Precise diagnosis of CRC plays a crucial role in increasing patient survival rates and formulating effective treatment strategies. Deep learning algorithms have demonstrated remarkable proficiency in the precise categorization of histopathology images. In this article, we introduce a novel deep learning model, termed DeepCon