{"title":"Object Recognition using TensorFlow","authors":"Nahuel E. Albayrak","doi":"10.1109/isec49744.2020.9397835","DOIUrl":null,"url":null,"abstract":"Computers can apply vision technologies using cameras and artificial intelligence software to achieve image recognition and identify objects, places, and people. The objective of this project is to capture the image of an automobile as it drives by, identify its model and color, and determine its location, travel direction, and speed. This system can be used to assist law enforcement with vehicle identification in an emergency such as an Amber alert or to detect traffic infractions. For this purpose, we constructed, trained, and applied an object detection model using TensorFlow. First, an image capturing system was built using camera lenses (Raspberry Pi Camera V2-8) and Raspberry Pi (Raspberry Pi 4) small computers. Next, the computers were set up with a software application called TensorFlow. The system was trained to recognize an automobile’s model and color by processing a variety of car images. Pictures of different cars were uploaded from Google images and resized highlighting the features of the vehicle. Finally, code was developed in Python to create a universal clock for each camera that recorded the detection time. Five trials were conducted using 2 automobiles available for testing. The cars were recognized by the model with 87 percent certainty in each of the 5 trials. That information was recorded on a table together with the time of capture and the location of the camera. The information from the table was used to successfully identify a specific car’s location and speed, with a few limitations. Because of budget restrictions only two cameras were built and two models were used for training. The information from the cameras was not transmitted in real time because wifi or LTE capability are not available at this time. 
An extension of this research will include multiple cameras, multiple models and real time data transmission.","PeriodicalId":355861,"journal":{"name":"2020 IEEE Integrated STEM Education Conference (ISEC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Integrated STEM Education Conference (ISEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/isec49744.2020.9397835","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5
Abstract
Computers can apply vision technologies using cameras and artificial intelligence software to achieve image recognition and identify objects, places, and people. The objective of this project is to capture the image of an automobile as it drives by, identify its model and color, and determine its location, travel direction, and speed. Such a system can assist law enforcement with vehicle identification in an emergency such as an Amber Alert, or detect traffic infractions. For this purpose, we constructed, trained, and applied an object detection model using TensorFlow. First, an image-capturing system was built using camera modules (Raspberry Pi Camera V2-8) and Raspberry Pi 4 single-board computers. Next, the computers were set up with the TensorFlow software library. The system was trained to recognize an automobile’s model and color by processing a variety of car images: pictures of different cars were downloaded from Google Images and resized to highlight the features of each vehicle. Finally, code was developed in Python to create a universal clock for each camera that recorded the detection time. Five trials were conducted using two automobiles available for testing. The cars were recognized by the model with 87 percent certainty in each of the five trials. That information was recorded in a table together with the time of capture and the location of the camera, and the table was then used to successfully identify a specific car’s location and speed, with a few limitations. Because of budget restrictions, only two cameras were built and only two car models were used for training. The information from the cameras was not transmitted in real time because Wi-Fi and LTE capability were not available at the time. An extension of this research will include multiple cameras, multiple models, and real-time data transmission.
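The abstract describes deriving a car's speed and travel direction from the table of timestamped detections logged by two cameras sharing a universal clock. A minimal sketch of that step might look like the following; the function name, camera positions, and timestamps are illustrative assumptions, not details taken from the paper.

```python
from datetime import datetime

def estimate_speed_and_direction(pos_a_m, time_a, pos_b_m, time_b):
    """Estimate (speed in km/h, direction) from two timestamped detections.

    pos_*_m: camera positions in metres measured along the road.
    time_*:  datetimes from the cameras' shared ("universal") clock.
    Hypothetical helper illustrating the abstract's speed/direction step.
    """
    distance_m = pos_b_m - pos_a_m
    elapsed_s = (time_b - time_a).total_seconds()
    if elapsed_s == 0:
        raise ValueError("simultaneous detections; cannot estimate speed")
    # Speed is distance over elapsed time, converted from m/s to km/h.
    speed_kmh = abs(distance_m / elapsed_s) * 3.6
    # If the car reached camera B after camera A, it travelled A -> B.
    direction = "A->B" if (distance_m > 0) == (elapsed_s > 0) else "B->A"
    return speed_kmh, direction

# Illustrative example: cameras 500 m apart, detections 30 s apart.
t0 = datetime(2020, 8, 1, 12, 0, 0)
t1 = datetime(2020, 8, 1, 12, 0, 30)
speed, direction = estimate_speed_and_direction(0, t0, 500, t1)
# 500 m in 30 s -> 60 km/h, travelling from camera A toward camera B.
```

In the paper's setting the "few limitations" mentioned would apply here too: with only two cameras, the estimate is an average speed over the whole segment, and clock drift between the Raspberry Pis directly biases the result.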