Omar Abdullatif Jassim, Mohammed Jawad Abed, Zenah Hadi Saied Saied
Title: Indoor/Outdoor Deep Learning Based Image Classification for Object Recognition Applications
Journal: Baghdad Science Journal (JCR Q3, Multidisciplinary Sciences, IF 1.2)
DOI: 10.21123/bsj.2023.8177 (https://doi.org/10.21123/bsj.2023.8177)
Published: 2023-12-05 (Journal Article)
Platform: Semantic Scholar
Citations: 0
Abstract
With the rapid development of smart devices, people's lives have become easier, especially for people with visual impairments or special needs. Recent achievements in machine learning and deep learning enable people to identify and recognise their surrounding environment. In this study, the efficiency and high performance of deep learning architectures are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts by assembling two datasets (indoor and outdoor) from several separate source datasets. In the second step, each collected dataset is split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are trained on the indoor and outdoor sets, resulting in four trained models. The test sets are used to evaluate the trained models with several evaluation metrics (accuracy, TPR, FNR, PPV, FDR). Results for the GoogleNet models indicate the high performance of the designed system, with accuracies of 99.34% and 99.76% on the indoor and outdoor datasets, respectively. For the MobileNet-V2 models, the accuracies are 99.27% and 99.68% on the indoor and outdoor sets, respectively. The proposed methodology is compared with similar approaches in the field of object recognition and image classification, and the comparative study demonstrates the superiority of the proposed system.
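The abstract's second step is a train/validation/test split of each collected dataset. The paper does not state the split ratios, so the sketch below assumes an illustrative 70/15/15 split; the function name and ratios are not from the paper.

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle and split a list of samples into train/validation/test lists.

    The 70/15/15 ratio and the fixed seed are illustrative assumptions,
    not values reported in the paper.
    """
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```

In practice each of the two collected datasets (indoor and outdoor) would be split this way before fine-tuning the pre-trained models on the training portion.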
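The abstract evaluates the trained models with several confusion-matrix metrics. As a minimal sketch, the standard definitions of those metrics for a binary case can be computed from the four confusion-matrix counts; the function name and the example counts below are illustrative, not figures from the paper.

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, TPR, FNR, PPV, and FDR from confusion-matrix counts.

    tp/fp/fn/tn are true/false positive and negative counts for one class.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total   # fraction of all predictions that are correct
    tpr = tp / (tp + fn)           # true positive rate (sensitivity / recall)
    fnr = fn / (tp + fn)           # false negative rate = 1 - TPR
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    fdr = fp / (tp + fp)           # false discovery rate = 1 - PPV
    return {"accuracy": accuracy, "TPR": tpr, "FNR": fnr, "PPV": ppv, "FDR": fdr}

# Illustrative counts (not from the paper):
metrics = classification_metrics(tp=95, fp=2, fn=3, tn=100)
```

Note that TPR/FNR and PPV/FDR are complementary pairs, so reporting all five metrics, as the abstract does, gives a redundancy check on the evaluation.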
About the journal:
The journal publishes academic and applied papers on recent topics and scientific concepts. Papers are considered for publication in biology, chemistry, computer science, physics, and mathematics. Accepted papers can be freely downloaded by professors, researchers, instructors, students, and interested practitioners (Open Access). Published papers are registered and indexed in universal libraries.