Ann-Kristin Grosselfinger, David Münch, W. Hübner, Michael Arens
DOI: 10.1117/12.2027311. Published in Optics/Photonics in Security and Defence, 2013-10-25.
Feature-based automatic configuration of semi-stationary multi-camera components
Autonomously operating semi-stationary multi-camera components are the core modules of ad-hoc multi-view methods. On the one hand, a situation recognition system needs an overview of the entire scene, as provided by a wide-angle camera; on the other hand, a close-up view of interesting agents, e.g. from an active pan-tilt-zoom (PTZ) camera, is required to gather further information, e.g. to identify those agents. To configure such a system, we set the field of view (FOV) of the overview camera in correspondence with the motor configuration of a PTZ camera. Images are captured from a uniformly moving PTZ camera until the entire field of view of the master camera is covered. Along the way, a lookup table (LUT) relating motor coordinates of the PTZ camera to image coordinates in the master camera is generated. To match each pair of images, features (SIFT, SURF, ORB, STAR, FAST, MSER, BRISK, FREAK) are detected, filtered by the nearest neighbor distance ratio (NNDR), and matched. A homography is estimated to transform the PTZ image to the master image. With that information, comprehensive LUTs are computed via barycentric coordinates and stored for every pixel of the master image. In this paper, the robustness, accuracy, and runtime are quantitatively evaluated for the different features.
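Two of the numerical steps in the abstract, NNDR match filtering and the barycentric interpolation that fills the per-pixel LUT, can be sketched as follows. This is a minimal pure-NumPy illustration under stated assumptions, not the authors' implementation; the function names and the brute-force distance computation are illustrative only.

```python
import numpy as np

def nndr_filter(des1, des2, ratio=0.8):
    """Nearest-neighbor distance ratio (NNDR) test.

    A match is kept only if the best descriptor distance is clearly
    smaller than the second-best, rejecting ambiguous matches.
    Returns index pairs (i, j) into des1 / des2.
    """
    # Brute-force pairwise Euclidean distances between all descriptors.
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(des1))
    keep = d[rows, best] < ratio * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

def barycentric_lookup(p, tri_xy, tri_pt):
    """Interpolate a (pan, tilt) motor pair for master-image pixel p.

    tri_xy: 3x2 vertices of the enclosing triangle in the master image,
    tri_pt: 3x2 pan/tilt motor coordinates measured at those vertices.
    """
    a, b, c = tri_xy
    # Solve p = a + u*(b - a) + v*(c - a) for the barycentric weights.
    m = np.column_stack((b - a, c - a))
    u, v = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    w = np.array([1.0 - u - v, u, v])
    return w @ tri_pt
```

In the same spirit as the paper, the kept matches would feed a robust homography estimate (e.g. RANSAC), and `barycentric_lookup` would be evaluated once per master-image pixel over a triangulation of the calibrated grid points to build the dense LUT.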