AcTrak: Controlling a Steerable Surveillance Camera using Reinforcement Learning
Abdulrahman Fahim, E. Papalexakis, S. Krishnamurthy, Amit K. Roy Chowdhury, L. Kaplan, T. Abdelzaher
ACM Transactions on Cyber-Physical Systems, published 2023-03-03. DOI: 10.1145/3585316
Abstract
Steerable cameras that can be controlled over a network to retrieve telemetry of interest have become popular. In this paper, we develop a framework called AcTrak that automates a camera's motion to appropriately switch between (a) zooming in on existing targets in a scene to track their activities, and (b) zooming out to search for new targets arriving in the area of interest. Specifically, we seek a good trade-off between the two tasks: we want to ensure that new targets are observed by the camera before they leave the scene, while also zooming in on existing targets frequently enough to monitor their activities. Prior control algorithms exist for steering cameras to optimize certain objectives; however, to the best of our knowledge, none has considered this problem, and they do not perform well when target activity tracking is required. AcTrak automatically controls the camera's PTZ configuration using reinforcement learning (RL) to select the best camera position given the current state. Via simulations using real datasets, we show that AcTrak detects newly arriving targets 30% faster than a non-adaptive baseline and rarely misses targets, unlike the baseline, which can miss up to 5% of the targets. We also implement AcTrak to control a real camera and demonstrate that, in comparison with the baseline, it acquires about 2× more high-resolution images of targets.
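The abstract does not spell out AcTrak's state, action, or reward design, so the sketch below is only a minimal illustration of the general idea it describes: an RL agent that learns when to zoom out to search for new arrivals versus zoom in on tracked targets. It uses tabular Q-learning over a few hypothetical PTZ presets; the preset names, the toy state (steps since the last wide search, number of stale targets), and the reward shaping are all assumptions for illustration, not the authors' formulation.

```python
# Illustrative sketch only (assumptions, not AcTrak's actual design):
# tabular Q-learning over a small, hypothetical set of PTZ presets.
import random
from collections import defaultdict

PRESETS = ["zoom_out_wide", "zoom_in_target_0", "zoom_in_target_1"]  # assumed action set


def reward(state, action):
    """Toy reward: favor a wide search when no sweep has happened recently,
    and favor zoom-ins when tracked targets are going stale (assumption)."""
    steps_since_wide, stale_targets = state
    if action == "zoom_out_wide":
        return 1.0 if steps_since_wide >= 3 else 0.1
    return 1.0 if stale_targets > 0 else 0.2


def step(state, action):
    """Toy transition: zooming out resets the search timer; zooming in
    refreshes one stale target (assumption, for illustration only)."""
    steps_since_wide, stale_targets = state
    if action == "zoom_out_wide":
        return (0, min(stale_targets + 1, 3))
    return (min(steps_since_wide + 1, 5), max(stale_targets - 1, 0))


def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(20):
            # epsilon-greedy action selection over the PTZ presets
            if random.random() < eps:
                action = random.choice(PRESETS)
            else:
                action = max(PRESETS, key=lambda a: Q[(state, a)])
            r = reward(state, action)
            nxt = step(state, action)
            best_next = max(Q[(nxt, a)] for a in PRESETS)
            Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q


if __name__ == "__main__":
    Q = train()
    state = (3, 1)  # long time since a wide sweep, one stale target
    print("Best action in state", state, "->", max(PRESETS, key=lambda a: Q[(state, a)]))
```

In this toy setup the learned policy alternates between wide sweeps and zoom-ins, which is the qualitative trade-off the paper targets; the actual system would derive its state from detections in the live video and act on a real PTZ interface.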