{"title":"Deep 3D-2D Convolutional Neural Networks Combined With Mobinenetv2 For Hyperspectral Image Classification","authors":"DouglasOmwenga Nyabuga","doi":"10.1145/3582177.3582185","DOIUrl":null,"url":null,"abstract":"Convolutional neural networks (CNNs), one of the most successful models for visual identification, have shown excellent performance outcomes in different visual recognition challenges, attracting much interest in recent years. However, deploying CNN models to hyperspectral imaging (HSI) data continues to be a challenge due to the strongly correlated bands and insufficient training sets. Furthermore, HSI categorization is hugely dependent on spectral-spatial information. Hence, a 2D-CNN is a possible technique to analyze these features. However, because of the volume and spectral dimensions, a 3D CNN can be an option but is more computationally expensive. Furthermore, the models underperform in areas with comparable spectrums due to their inability to extract feature maps of high quality. This work, therefore, proposes a 3D/2D CNN combined with the MobineNetV2 model that uses both spectral-spatial feature maps to achieve competitive performance. First, the HSI data cube is split into small overlapping 3-D batches using the principal component analysis (PCA) to get the desired dimensions. These batches are then processed to build 3-D feature maps over many contiguous bands using a 3D convolutional kernel function, which retains the spectral properties. The performance of our model is validated using three benchmark HSI data sets (i.e., Pavia University, Indian Pines, and Salinas Scene). 
The results are then compared with different state-of-the-art (SOTA) methods.","PeriodicalId":170327,"journal":{"name":"Proceedings of the 2023 5th International Conference on Image Processing and Machine Vision","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 5th International Conference on Image Processing and Machine Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3582177.3582185","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Convolutional neural networks (CNNs), among the most successful models for visual recognition, have achieved excellent results across a range of visual recognition challenges and have attracted much interest in recent years. However, applying CNN models to hyperspectral imaging (HSI) data remains challenging because of the strongly correlated spectral bands and the limited training sets. Moreover, HSI classification depends heavily on joint spectral-spatial information. A 2-D CNN is one possible technique for analyzing these features, but because the data cube also has a spectral dimension, a 3-D CNN is an alternative, albeit a more computationally expensive one. Furthermore, such models underperform in regions with similar spectra because they fail to extract high-quality feature maps. This work therefore proposes a 3D/2D CNN combined with the MobileNetV2 model that exploits both spectral and spatial feature maps to achieve competitive performance. First, principal component analysis (PCA) reduces the HSI data cube to the desired spectral dimensionality, and the reduced cube is split into small overlapping 3-D patches. These patches are then convolved with a 3-D kernel to build feature maps over many contiguous bands, which preserves the spectral properties. The performance of the model is validated on three benchmark HSI data sets (Pavia University, Indian Pines, and Salinas Scene), and the results are compared with different state-of-the-art (SOTA) methods.
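The preprocessing pipeline the abstract describes (PCA spectral reduction, overlapping 3-D patch extraction around each pixel, then 3-D convolution over contiguous bands) can be illustrated with a minimal NumPy sketch. The cube size, component count, and the 3×3×7 kernel below are toy values chosen for illustration, not the paper's actual configuration:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project an (H, W, B) hyperspectral cube onto its top
    n_components principal spectral components (plain NumPy PCA)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    flat -= flat.mean(axis=0)                   # center each band
    cov = np.cov(flat, rowvar=False)            # (B, B) band covariance
    _, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending
    top = eigvecs[:, ::-1][:, :n_components]    # leading components
    return (flat @ top).reshape(h, w, n_components)

def extract_patches(cube, patch_size):
    """Slide a patch_size x patch_size window over every pixel
    (stride 1, zero padding), yielding one overlapping 3-D patch
    per pixel -- the per-pixel inputs to the 3-D convolution."""
    m = patch_size // 2
    padded = np.pad(cube, ((m, m), (m, m), (0, 0)), mode="constant")
    h, w, b = cube.shape
    patches = np.empty((h * w, patch_size, patch_size, b))
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = padded[i:i + patch_size, j:j + patch_size, :]
    return patches

def conv3d_valid(patch, kernel):
    """Valid 3-D convolution of one (H, W, B) patch with a
    (kh, kw, kb) kernel: the kernel also slides along the spectral
    axis, so each output mixes kb contiguous bands -- this is what
    retains spectral structure, unlike a purely spatial 2-D kernel."""
    kh, kw, kb = kernel.shape
    h, w, b = patch.shape
    out = np.zeros((h - kh + 1, w - kw + 1, b - kb + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(patch[i:i+kh, j:j+kw, k:k+kb] * kernel)
    return out

# Toy cube standing in for, e.g., Indian Pines (145 x 145 x 200)
cube = np.random.rand(8, 8, 30)
reduced = pca_reduce(cube, n_components=10)      # (8, 8, 10)
patches = extract_patches(reduced, patch_size=5) # (64, 5, 5, 10)
fmap = conv3d_valid(patches[0], np.random.rand(3, 3, 7))
print(fmap.shape)                                # (3, 3, 4)
```

In the full model, feature maps like `fmap` from several 3-D kernels would be reshaped and passed to 2-D convolutional layers (here, MobileNetV2's inverted-residual blocks), trading the expensive 3-D operations for cheaper 2-D ones once the spectral structure has been captured.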