Orthogonal Features Fusion Network for Anomaly Detection
Teli Ma, Yizhi Wang, Jinxin Shao, Baochang Zhang, D. Doermann
2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), December 2020
DOI: 10.1109/VCIP49819.2020.9301755
Abstract
Generative models have been used successfully for anomaly detection, but they require a large number of parameters and considerable computation overhead, especially when spatial and temporal networks are trained in the same framework. In this paper, we introduce a novel network architecture, the Orthogonal Features Fusion Network (OFF-Net), to solve the anomaly detection problem. We show that the convolutional feature maps used for generating future frames are orthogonal to each other, which improves the representation capacity of generative models and strengthens temporal connections between adjacent images. We present a simple but effective module that is easily mounted on convolutional neural networks (CNNs) with negligible additional parameters; it can replace the widely used optical flow network and significantly improve anomaly detection performance. Extensive experimental results demonstrate the effectiveness of OFF-Net: it outperforms the state-of-the-art model by 1.7% in terms of AUC, and it saves around 85M parameters compared with prevailing prior art that relies on an optical flow network, without compromising performance.
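The abstract gives no implementation details, but its core idea, encouraging pairwise orthogonality among convolutional feature maps before fusing spatial and temporal streams for future-frame prediction, can be illustrated with a short sketch. The following PyTorch fragment is a minimal, hypothetical illustration, not the authors' code: the names `orthogonality_penalty` and `FusionBlock`, and the loss weight 0.1, are assumptions introduced here for clarity.

```python
# Illustrative sketch (NOT the published OFF-Net implementation) of a soft
# orthogonality penalty on channel feature maps plus a simple fusion block.
import torch
import torch.nn as nn
import torch.nn.functional as F


def orthogonality_penalty(feats: torch.Tensor) -> torch.Tensor:
    """Penalize non-orthogonality between channel feature maps.

    feats: (B, C, H, W). Each channel is flattened and unit-normalized; the
    penalty is the squared off-diagonal mass of the channel Gram matrix,
    which is zero exactly when the C maps are mutually orthogonal.
    """
    b, c, h, w = feats.shape
    v = feats.view(b, c, h * w)
    v = F.normalize(v, dim=2)                # unit-norm each flattened map
    gram = torch.bmm(v, v.transpose(1, 2))   # (B, C, C) cosine similarities
    off_diag = gram - torch.eye(c, device=feats.device)
    return off_diag.pow(2).sum(dim=(1, 2)).mean()


class FusionBlock(nn.Module):
    """Hypothetical fusion module: a 1x1 conv mixing two feature streams,
    adding only (2*C*C + C) parameters, i.e. negligible next to a full
    optical-flow network."""

    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spatial: torch.Tensor, temporal: torch.Tensor) -> torch.Tensor:
        return self.mix(torch.cat([spatial, temporal], dim=1))


# Assumed training objective: frame-prediction loss plus the soft
# orthogonality regularizer on the fused maps (weight 0.1 is illustrative).
# loss = F.mse_loss(pred_frame, target_frame) + 0.1 * orthogonality_penalty(fused)
```

A soft penalty of this kind is a common way to encourage, rather than hard-enforce, orthogonality during training: the regularizer vanishes exactly when the Gram matrix of the flattened channel maps equals the identity, and it adds no parameters of its own.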